Science.gov

Sample records for 3d gaze estimation

  1. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces.

    PubMed

    Abbott, W W; Faisal, A A

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark--the control of the video arcade game 'Pong'.
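
The throughput claim can be made concrete. A standard way to score selection interfaces in the BMI literature is the Wolpaw information-transfer-rate formula; the sketch below uses illustrative numbers (target count, accuracy, selection rate), not values reported by the authors.

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_sec):
    """Wolpaw information transfer rate in bits per second.

    n_targets: number of equally likely selectable targets
    accuracy: probability of a correct selection (0 < accuracy <= 1)
    selections_per_sec: how many selections the interface makes per second
    """
    n, p = n_targets, accuracy
    bits_per_selection = math.log2(n)
    if 0.0 < p < 1.0:
        bits_per_selection += p * math.log2(p) \
            + (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits_per_selection * selections_per_sec

# Illustrative: 32 targets, 95% accuracy, 10 selections per second
rate = wolpaw_itr(32, 0.95, 10.0)
```

With perfect accuracy this reduces to log2(N) bits per selection, scaled by the selection rate.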

  2. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.

  3. Estimating the gaze of a virtuality human.

    PubMed

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction, this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV. PMID:23428453

  4. Parameters of the human 3D gaze while observing portable autostereoscopic display: a model and measurement results

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hanhela, Marianne; Gotchev, Atanas; Utirainen, Timo; Jumisko-Pyykkö, Satu; Hannuksela, Miska

    2012-02-01

    We present an approach to measure and model the parameters of human point-of-gaze (PoG) in 3D space. Our model considers the following three parameters: position of the gaze in 3D space, volume encompassed by the gaze and time for the gaze to arrive on the desired target. Extracting the 3D gaze position from binocular gaze data is hindered by three problems. The first problem is the lack of convergence - due to microsaccadic movements the optical lines of both eyes rarely intersect at a point in space. The second problem is resolution - the combination of short observation distance and limited comfort disparity zone typical for a mobile 3D display does not allow the depth of the gaze position to be reliably extracted. The third problem is measurement noise - due to the limited display size, the noise range is close to the range of properly measured data. We have developed a methodology which allows us to suppress most of the measurement noise. This allows us to estimate the typical time which is needed for the point-of-gaze to travel in x, y or z direction. We identify three temporal properties of the binocular PoG. The first is reaction time, which is the minimum time the visual system takes to react to a stimulus position change, and is measured as the time between the event and the time the PoG leaves the proximity of the old stimulus position. The second is the travel time of the PoG between the old and new stimulus position. The third is the time-to-arrive, which combines the reaction time, the travel time, and the time required for the PoG to settle in the new position. We present the method for filtering the PoG outliers, for deriving the PoG center from binocular eye-tracking data and for calculating the gaze volume as a function of the distance between PoG and the observer. As an outcome from our experiments we present binocular heat maps aggregated over all observers who participated in a viewing test. We also show the mean values for all temporal
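
A common geometric answer to the convergence problem described above is to take, as the 3D PoG, the midpoint of the closest approach between the two optical lines. The sketch below assumes known eye positions and gaze direction vectors; it is not the authors' noise-suppression pipeline.

```python
import numpy as np

def binocular_pog(o_l, d_l, o_r, d_r):
    """Midpoint of closest approach between the left and right gaze rays.

    o_l, o_r: 3D eye positions (ray origins)
    d_l, d_r: gaze direction vectors (need not be unit length)
    """
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    # Solve for ray parameters t, s minimizing |(o_l + t*d_l) - (o_r + s*d_r)|^2
    a = np.array([[d_l @ d_l, -(d_l @ d_r)],
                  [d_l @ d_r, -(d_r @ d_r)]])
    b = np.array([(o_r - o_l) @ d_l,
                  (o_r - o_l) @ d_r])
    t, s = np.linalg.solve(a, b)  # fails only if the rays are parallel
    return 0.5 * ((o_l + t * d_l) + (o_r + s * d_r))

# Eyes 6 cm apart, both aimed at a point 0.5 m ahead
pog = binocular_pog([-0.03, 0, 0], [0.03, 0, 0.5],
                    [0.03, 0, 0], [-0.03, 0, 0.5])
```

When the rays truly intersect, the midpoint is exactly the intersection; otherwise it splits the residual vergence error evenly between the eyes.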

  5. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera, the shape of the eyelids and--in the case of photographs--they lack depth. Hence, in order to get full control of potentially relevant features we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' abilities to detect at which target a human avatar was looking. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. PMID:25982719
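
The precision measure used in such experiments is simply the angle between the subject's gaze direction and the eye-to-target direction; for reference:

```python
import numpy as np

def angular_error_deg(gaze_dir, eye_pos, target_pos):
    """Angle in degrees between an estimated gaze direction and the
    direction from the eye to the true target."""
    g = np.asarray(gaze_dir, float)
    t = np.asarray(target_pos, float) - np.asarray(eye_pos, float)
    cos_ang = (g @ t) / (np.linalg.norm(g) * np.linalg.norm(t))
    # Clip guards against round-off pushing the cosine outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))
```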

  6. A kinematic model for 3-D head-free gaze-shifts

    PubMed Central

    Daemi, Mehdi; Crawford, J. Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816
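
The non-commutativity of 3-D rotations that the model must respect is easy to demonstrate with quaternions, a standard representation for eye and head orientation; this is a generic illustration, not code from the model:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def axis_angle_quat(axis, angle_deg):
    """Unit quaternion for a rotation of angle_deg about the given axis."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

qx = axis_angle_quat([1, 0, 0], 90)   # 90 deg about x
qy = axis_angle_quat([0, 1, 0], 90)   # 90 deg about y
xy = quat_mul(qy, qx)                 # x-rotation first, then y
yx = quat_mul(qx, qy)                 # y-rotation first, then x
```

Reversing the order of the same two rotations yields a different final orientation, which is why the decomposition into eye and head commands cannot treat rotations like vector additions.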

  7. Quality control of 3D Geological Models using an Attention Model based on Gaze

    NASA Astrophysics Data System (ADS)

    Busschers, Freek S.; van Maanen, Peter-Paul; Brouwer, Anne-Marie

    2014-05-01

    The Geological Survey of the Netherlands (GSN) produces 3D stochastic geological models of the upper 50 meters of the Dutch subsurface. The voxel models are regarded essential in answering subsurface questions on, for example, aggregate resources, groundwater flow, land subsidence studies and the planning of large-scale infrastructural works such as tunnels. GeoTOP is the most recent and detailed generation of 3D voxel models. This model describes 3D lithological variability up to a depth of 50 m using voxels of 100*100*0.5m. Due to the expected increase in data-flow, model output and user demands, the development of (semi-)automated quality control systems will become increasingly important in the near future. Besides numerical control systems, capturing model errors as seen from the expert geologist's viewpoint is of increasing interest. We envision the use of eye gaze to support and speed up detection of errors in the geological voxel models. As a first step in this direction we explore gaze behavior of 12 geological experts from the GSN during quality control of part of the GeoTOP 3D geological model using an eye-tracker. Gaze is used as input of an attention model that results in 'attended areas' for each individual examined image of the GeoTOP model and each individual expert. We compared these attended areas to errors as marked by the experts using a mouse. Results show that: 1) attended areas as determined from experts' gaze data largely match with GeoTOP errors as indicated by the experts using a mouse, and 2) a substantial part of the match can be reached using only gaze data from the first few seconds of the time geologists spend to search for errors. These results open up the possibility of faster GeoTOP model control using gaze if geologists accept a small decrease of error detection accuracy. Attention data may also be used to make independent comparisons between different geologists varying in focus and expertise. This would facilitate a more effective use of
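
The reported match between attended areas and mouse-marked errors can be quantified with a simple overlap score. The square attention footprint below is a deliberately crude stand-in for the attention model used in the study:

```python
import numpy as np

def attended_vs_marked_iou(gaze_points, marked_mask, radius=2):
    """Intersection-over-union between a gaze-derived attended area and a
    boolean mask of expert-marked errors.

    gaze_points: iterable of (row, col) fixation coordinates
    marked_mask: 2D boolean array of marked-error pixels
    radius:      half-width of the square attention footprint per fixation
    """
    attended = np.zeros(marked_mask.shape, dtype=bool)
    for r, c in gaze_points:
        attended[max(0, r - radius):r + radius + 1,
                 max(0, c - radius):c + radius + 1] = True
    inter = np.logical_and(attended, marked_mask).sum()
    union = np.logical_or(attended, marked_mask).sum()
    return inter / union if union else 0.0

marked = np.zeros((10, 10), dtype=bool)
marked[4:7, 4:7] = True                           # a 3x3 marked error
score = attended_vs_marked_iou([(5, 5)], marked, radius=1)
```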

  8. i-BRUSH: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery.

    PubMed

    Visentini-Scarzanella, Marco; Mylonas, George P; Stoyanov, Danail; Yang, Guang-Zhong

    2009-01-01

    With increasing demand on intra-operative navigation and motion compensation during robotic assisted minimally invasive surgery, real-time 3D deformation recovery remains a central problem. Currently the majority of existing methods rely on salient features, where the inherent paucity of distinctive landmarks implies either a semi-dense reconstruction or the use of strong geometrical constraints. In this study, we propose a gaze-contingent depth reconstruction scheme by integrating human perception with semi-dense stereo and p-q based shading information. Depth inference is carried out in real-time through a novel application of Bayesian chains without smoothness priors. The practical value of the scheme is highlighted by detailed validation using a beating heart phantom model with known geometry to verify the performance of gaze-contingent 3D surface reconstruction and deformation recovery. PMID:20426007

  10. Eye gaze estimation from the elliptical features of one iris

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Zhang, Tai-Ning; Chang, Sheng-Jiang

    2011-04-01

    The accuracy of eye gaze estimation using image information is affected by several factors which include image resolution, anatomical structure of the eye, and posture changes. The irregular movements of the head and eye create issues that are currently being researched to enable better use of this key technology. In this paper, we describe an effective way of estimating eye gaze from the elliptical features of one iris without using an auxiliary light source, head-fixing equipment, or multiple cameras. First, we provide a preliminary estimation of the gaze direction, and then we obtain the vectors which describe the translation and rotation of the eyeball, by applying a central projection method on the plane which passes through the line-of-sight. This helps us avoid the complex computations involved in previous methods. We also disambiguate the solution based on experimental findings. Second, error correction is conducted on a back propagation neural network trained by a sample collection of translation and rotation vectors. Extensive experimental studies are conducted to assess the efficiency and robustness of our method. Results reveal that our method has a better performance compared to a typical previous method.
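
The preliminary estimate from an iris ellipse rests on a simple projective fact: a circular iris viewed at angle theta to the camera axis projects to an ellipse whose minor/major axis ratio is cos(theta). A sketch of just that step (the paper's sign disambiguation and neural-network correction are not reproduced):

```python
import math

def gaze_angle_from_ellipse(major_axis, minor_axis):
    """Angle (degrees) between the camera's optical axis and the iris
    normal, from the axes of the projected iris ellipse.  A circular
    iris seen at angle theta projects with minor/major = cos(theta);
    the sign of theta remains ambiguous, as the abstract notes."""
    ratio = min(minor_axis / major_axis, 1.0)  # guard against noise > 1
    return math.degrees(math.acos(ratio))
```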

  11. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.
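
The frequency-domain property these methods exploit is that a pure translation only adds a linear phase to the spectrum. Plain FFT phase correlation (not the Gabor-domain estimator of the paper) recovers an integer shift from that phase:

```python
import numpy as np

def estimate_shift(a, b):
    """Integer translation of image b relative to a, by phase correlation.
    A pure shift multiplies the spectrum by a linear phase, so the
    normalized cross-power spectrum inverse-transforms to an impulse
    located at the shift."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map indices past the halfway point to negative shifts
    return tuple(int(i) - n if i > n // 2 else int(i)
                 for i, n in zip(idx, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, (3, -5), axis=(0, 1))
```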

  12. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state of the art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
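
For readers who want a reference point, the batch counterpart of this per-event updating is the closed-form least-squares alignment of corresponding 3D points (the Kabsch/SVD solution); the paper's incremental, event-driven criterion is not reproduced here:

```python
import numpy as np

def rigid_align(model, observed):
    """Least-squares rotation R and translation t with observed ≈ R @ model + t,
    for N x 3 arrays of corresponding points (Kabsch, via SVD)."""
    model = np.asarray(model, float)
    observed = np.asarray(observed, float)
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    h = (model - mc).T @ (observed - oc)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, oc - r @ mc

# Recover a known pose from corresponding points
rng = np.random.default_rng(1)
model_pts = rng.random((10, 3))
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_align(model_pts, model_pts @ R_true.T + t_true)
```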

  14. Neuromorphic Event-Based 3D Pose Estimation.

    PubMed

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B

    2015-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state of the art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30-60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  15. Joint 3d Estimation of Vehicles and Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.; Geiger, A.

    2015-08-01

    driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.

  16. Estimation of Gaze Detection Accuracy Using the Calibration Information-Based Fuzzy System.

    PubMed

    Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is a camera-vision based technology for identifying the location where a user is looking. In general, a calibration process is applied at the initial stage of most gaze tracking systems. This process is necessary to calibrate for the differences in the eyeballs and cornea size of the user, as well as the angle kappa, and to find the relationship between the user's eye and screen coordinates. It is applied on the basis of the information of the user's pupil and corneal specular reflection obtained while the user is looking at several predetermined positions on a screen. In previous studies, user calibration was performed using various types of markers and marker display methods. However, studies on estimating the accuracy of gaze detection through the results obtained during the calibration process have yet to be carried out. Therefore, we propose the method for estimating the accuracy of a final gaze tracking system with a near-infrared (NIR) camera by using a fuzzy system based on the user calibration information. Here, the accuracy of the final gaze tracking system ensures the gaze detection accuracy during the testing stage of the gaze tracking system. Experiments were performed using a total of four types of markers and three types of marker display methods. From them, it was found that the proposed method correctly estimated the accuracy of the gaze tracking regardless of the various marker and marker display types applied. PMID:26742045
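
The core mechanics of such a fuzzy estimator are membership functions over calibration-stage features and a defuzzified output. The two-rule toy below is only shape-wise illustrative; the membership functions and rule outputs are invented here and are not the system fitted in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimated_gaze_error_deg(calib_residual_deg):
    """Toy two-rule fuzzy estimate of test-stage gaze error (degrees)
    from the mean calibration residual, with weighted-average
    defuzzification.  All shapes and rule outputs are illustrative."""
    good = tri(calib_residual_deg, -1.0, 0.0, 1.0)   # "calibration was good"
    poor = tri(calib_residual_deg, 0.5, 1.5, 10.0)   # "calibration was poor"
    if good + poor == 0.0:
        return 3.0  # fallback outside all membership supports
    # rule outputs: good calibration -> ~0.7 deg error, poor -> ~2.5 deg
    return (good * 0.7 + poor * 2.5) / (good + poor)
```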

  17. Estimation of Gaze Detection Accuracy Using the Calibration Information-Based Fuzzy System

    PubMed Central

    Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is a camera-vision based technology for identifying the location where a user is looking. In general, a calibration process is applied at the initial stage of most gaze tracking systems. This process is necessary to calibrate for the differences in the eyeballs and cornea size of the user, as well as the angle kappa, and to find the relationship between the user’s eye and screen coordinates. It is applied on the basis of the information of the user’s pupil and corneal specular reflection obtained while the user is looking at several predetermined positions on a screen. In previous studies, user calibration was performed using various types of markers and marker display methods. However, studies on estimating the accuracy of gaze detection through the results obtained during the calibration process have yet to be carried out. Therefore, we propose the method for estimating the accuracy of a final gaze tracking system with a near-infrared (NIR) camera by using a fuzzy system based on the user calibration information. Here, the accuracy of the final gaze tracking system ensures the gaze detection accuracy during the testing stage of the gaze tracking system. Experiments were performed using a total of four types of markers and three types of marker display methods. From them, it was found that the proposed method correctly estimated the accuracy of the gaze tracking regardless of the various marker and marker display types applied. PMID:26742045

  18. Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes.

    PubMed

    Parks, Daniel; Borji, Ali; Itti, Laurent

    2015-11-01

    Previous studies have shown that gaze direction of actors in a scene influences eye movements of passive observers during free-viewing (Castelhano, Wieth, & Henderson, 2007; Borji, Parks, & Itti, 2014). However, no computational model has been proposed to combine bottom-up saliency with actor's head pose and gaze direction for predicting where observers look. Here, we first learn probability maps that predict fixations leaving head regions (gaze following fixations), as well as fixations on head regions (head fixations), both dependent on the actor's head size and pose angle. We then learn a combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states. This simple structure allows us to inspect the model and make comments about the nature of eye movements originating from heads as opposed to other regions. Here, we assume perfect knowledge of actor head pose direction (from an oracle). The combined model, which we call the Dynamic Weighting of Cues model (DWOC), explains observers' fixations significantly better than each of the constituent components. Finally, in a fully automatic combined model, we replace the oracle head pose direction data with detections from a computer vision model of head pose. Using these (imperfect) automated detections, we again find that the combined model significantly outperforms its individual components. Our work extends the engineering and scientific applications of saliency models and helps better understand mechanisms of visual attention.
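
The final combination step amounts to a weighted sum of normalized cue maps; in the DWOC model the weights depend on the Markov-chain state (whether the previous fixation was on a head region), whereas the sketch below uses fixed, invented weights:

```python
import numpy as np

def combine_cues(maps, weights):
    """Weighted sum of cue maps, renormalized into a fixation-probability map."""
    combined = sum(weights[name] * maps[name] for name in maps)
    total = combined.sum()
    return combined / total if total > 0 else combined

saliency = np.array([[0.1, 0.9],
                     [0.0, 0.0]])
gaze_following = np.array([[0.0, 0.2],
                           [0.8, 0.0]])
prediction = combine_cues({"saliency": saliency, "gaze": gaze_following},
                          {"saliency": 0.5, "gaze": 0.5})
```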

  19. 3D magnetic sources' framework estimation using Genetic Algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Ponte-Neto, C. F.; Barbosa, V. C.

    2008-05-01

    We present a method for inverting total-field anomaly for determining simple 3D magnetic sources' frameworks such as: batholiths, dikes, sills, geological contacts, kimberlite and lamproite pipes. We use GA to obtain magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination) and the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outlines of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution and, thus, all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set upper and lower bounds on declination and inclination of [0, 360°] and [-90°, 90°], respectively. We also set the criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and we evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fitting are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the center of mass of these sources. For elongated prismatic sources oriented in an arbitrary direction, we estimate
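
The GA machinery named here (real-valued encoding, crossover, mutation, elitism) can be sketched on a toy objective; this is a generic minimizer, not the authors' inversion code, and the population sizes are arbitrary:

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=60, seed=0):
    """Minimal real-valued GA: elitism keeps the best fifth, children come
    from blend crossover between elite parents plus Gaussian mutation,
    and every gene is clipped to its [lo, hi] bound."""
    rng = random.Random(seed)
    def clip(x, i):
        lo, hi = bounds[i]
        return min(max(x, lo), hi)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[:pop_size // 5]
        children = list(elite)                      # elitism
        while len(children) < pop_size:
            p, q = rng.sample(elite, 2)             # parent selection
            alpha = rng.random()                    # blend crossover
            children.append([clip(alpha * a + (1 - alpha) * b
                                  + rng.gauss(0.0, 0.1), i)   # mutation
                             for i, (a, b) in enumerate(zip(p, q))])
        pop = children
    return min(pop, key=f)

# Toy "inversion": recover the parameter vector (1, -2)
best = genetic_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        [(-5.0, 5.0), (-5.0, 5.0)])
```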

  20. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
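
The geometry described can be reduced to one line: with the eye-sphere center taken midway between the corners, a pupil offset d along the corner axis corresponds to a gaze angle theta = asin(d / r), with r from anthropometric data. The function below is my reading of that construction, restricted to the horizontal axis; the variable names are mine, not the paper's:

```python
import math

def horizontal_gaze_deg(corner1_x, corner2_x, midpupil_x, eye_radius):
    """Horizontal gaze angle (degrees) from 2D eye-feature positions:
    the eye-sphere center is assumed midway between the corners, and the
    pupil offset d along the corner axis gives theta = asin(d / r)."""
    center_x = 0.5 * (corner1_x + corner2_x)
    d = midpupil_x - center_x
    # Clamp guards against feature-detection noise pushing |d| past r
    return math.degrees(math.asin(max(-1.0, min(1.0, d / eye_radius))))
```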

  1. A new gaze estimation method considering external light.

    PubMed

    Lee, Jong Man; Lee, Hyeon Chang; Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Cho, Chul Woo; Park, Kang Ryoung; Kim, Hyun-Cheol; Cha, Jihun

    2015-01-01

    Gaze tracking systems usually utilize near-infrared (NIR) lights and NIR cameras, and the performance of such systems is mainly affected by external light sources that include NIR components. This is ascribed to the production of additional (imposter) corneal specular reflection (SR) caused by the external light, which makes it difficult to discriminate between the correct SR as caused by the NIR illuminator of the gaze tracking system and the imposter SR. To overcome this problem, a new method is proposed for determining the correct SR in the presence of external light based on the relationship between the corneal SR and the pupil movable area with the relative position of the pupil and the corneal SR. The experimental results showed that the proposed method makes the gaze tracking system robust to the existence of external light. PMID:25769050
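
    A minimal sketch of the selection idea: among candidate specular reflections (SRs), keep the one whose offset from the pupil center best matches the offset expected for the system's own NIR illuminator. The predicted offset here is a made-up constant; the paper's actual rule also uses the pupil movable area.

```python
# Pick the genuine SR as the candidate nearest the position predicted
# from the pupil center plus an assumed illuminator offset (in pixels).
def pick_genuine_sr(pupil_center, candidates, predicted_offset=(8.0, -3.0)):
    px, py = pupil_center
    ex, ey = px + predicted_offset[0], py + predicted_offset[1]
    return min(candidates, key=lambda c: (c[0] - ex) ** 2 + (c[1] - ey) ** 2)

# The imposter SRs far from the predicted position are rejected.
best = pick_genuine_sr((100, 100), [(108.5, 96.0), (40.0, 40.0), (130.0, 130.0)])
```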

  2. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  3. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
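
    The correction implied by the abstract, regressing the gaze trace on pupil size and keeping the residual, can be sketched on synthetic data; the drift magnitudes below are invented for illustration.

```python
import numpy as np

# Synthetic fixation trace: slow pupil constriction plus a
# pupil-size-driven artifact added to tiny fixational jitter.
rng = np.random.default_rng(0)
pupil = 4.0 - 0.5 * np.linspace(0, 1, 500) + 0.05 * rng.standard_normal(500)
true_gaze = 0.02 * rng.standard_normal(500)          # fixational jitter
gaze = true_gaze + 0.8 * (pupil - pupil.mean())      # pupil artifact added

# Regress gaze on pupil size and subtract the fitted component.
slope, intercept = np.polyfit(pupil, gaze, 1)
corrected = gaze - (slope * pupil + intercept)

print(np.std(gaze), np.std(corrected))  # corrected spread is much smaller
```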

  5. Gaze estimation for off-angle iris recognition based on the biometric eye model

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Thompson, Joseph; Bolme, David; Boehnen, Christopher

    2013-05-01

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correcting off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is typically not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method shows effective gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50-degree range.
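
    A toy version of the look-up-table search: precompute the pupil-boundary ellipse aspect ratio expected at each gaze angle (here with a naive circle-projection model rather than the ORNL eye model) and estimate gaze by nearest-neighbour search over the table.

```python
import math

# A circle viewed off-axis projects to an ellipse whose minor/major
# axis ratio is approximately cos(gaze angle) -- a deliberately
# simplified stand-in for the biometric eye model's predictions.
def predicted_aspect_ratio(gaze_deg):
    return math.cos(math.radians(gaze_deg))

# Look-up table over a 0-50 degree range, matching the paper's span.
table = [(g, predicted_aspect_ratio(g)) for g in range(0, 51)]

def estimate_gaze(observed_ratio):
    return min(table, key=lambda t: abs(t[1] - observed_ratio))[0]

print(estimate_gaze(math.cos(math.radians(34))))  # -> 34
```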

  7. Full 3-D transverse oscillations: a method for tissue motion estimation.

    PubMed

    Salles, Sebastien; Liebgott, Hervé; Garcia, Damien; Vray, Didier

    2015-08-01

    We present a new method to estimate 4-D (3-D + time) tissue motion. The method combines 3-D phase-based motion estimation with an unconventional beamforming strategy. The beamforming technique allows us to obtain full 3-D RF volumes with axial, lateral, and elevation modulations. Based on these images, we propose a method to estimate 3-D motion that uses phase images instead of amplitude images. First, volumes featuring 3-D oscillations are created using only a single apodization function, and the 3-D displacement between two consecutive volumes is then estimated simultaneously by applying this 3-D phase-based estimation. The validity of the method is investigated through simulations and phantom experiments. The results are compared with those obtained with two other conventional estimation methods: block matching and optical flow. The results show that the proposed method outperforms the conventional methods, especially in the transverse directions.
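
    The phase-based estimation can be illustrated in 1-D (the paper extends it to full 3-D oscillations): with a known modulation frequency, a sub-sample displacement is recovered from the mean phase difference between two analytic signals.

```python
import numpy as np

f = 0.05                      # cycles per sample (known modulation)
x = np.arange(200)
shift = 0.37                  # true displacement, in samples

# Complex analytic signals for two consecutive "frames".
a1 = np.exp(2j * np.pi * f * x)
a2 = np.exp(2j * np.pi * f * (x - shift))  # shifted copy

# Mean phase difference maps directly to the sub-sample shift.
dphi = np.angle(np.mean(a2 * np.conj(a1)))
estimated = -dphi / (2 * np.pi * f)
print(estimated)  # ~0.37
```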

  8. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  9. Eigenvalue Contribution Estimator for Sensitivity Calculations with TSUNAMI-3D

    SciTech Connect

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  10. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  11. Boundary estimation method for ultrasonic 3D imaging

    NASA Astrophysics Data System (ADS)

    Ohashi, Gosuke; Ohya, Akihisa; Natori, Michiya; Nakajima, Masato

    1993-09-01

    The authors developed a new method for automatically and efficiently estimating the boundaries of soft tissue and amniotic fluid, and for obtaining a fine three-dimensional image of the fetus from information given by ultrasonic echo images. The aim of this boundary estimation is to provide clear three-dimensional images by shading the surface of the fetus and uterine wall using the Lambert shading method. Normally, a random granular pattern called 'speckle' appears on an ultrasonic echo image. Therefore, it is difficult to estimate the soft tissue boundary satisfactorily via a simple method such as threshold processing. Accordingly, the authors devised a method for classifying voxels into three categories using a neural network: soft tissue, amniotic fluid, and boundary. The shape of the grey-level histogram was the standard for judgment, made by referring to the peripheral region of the voxel. Application to clinical data has shown a fine estimation of the boundary between the fetus or the uterine wall and the amniotic fluid, enabling the details of the three-dimensional structure to be observed.

  12. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated heights may contain errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent in planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). Within local areas, land use blocks with low FAR values often have small errors, because the height errors for the low buildings in those blocks are small; blocks with high FAR values often have large errors, because the height errors for the high buildings in those blocks are large. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of the buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
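
    The FAR referred to above is total floor area divided by block land area; a worked sketch shows how a building-height error propagates into FAR. The storey height is an assumed constant, not a value from the paper.

```python
STOREY_HEIGHT_M = 3.0  # assumed average storey height

def far(buildings, block_area_m2):
    """buildings: list of (footprint_m2, height_m) tuples."""
    total_floor_area = sum(
        fp * max(1, round(h / STOREY_HEIGHT_M)) for fp, h in buildings
    )
    return total_floor_area / block_area_m2

# Underestimating a 30 m building as 24 m drops it from 10 floors to 8,
# so the FAR is underestimated too.
print(far([(500.0, 30.0)], 2000.0))  # 2.5
print(far([(500.0, 24.0)], 2000.0))  # 2.0
```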

  13. Interferometric synthetic aperture radar detection and estimation based 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2006-05-01

    This paper explores three-dimensional (3D) interferometric synthetic aperture radar (IFSAR) image reconstruction when multiple scattering centers and noise are present in a radar resolution cell. We introduce an IFSAR scattering model that accounts for both multiple scattering centers and noise. The problem of 3D image reconstruction is then posed as a multiple hypothesis detection and estimation problem; resolution cells containing a single scattering center are detected and the 3D location of these cells' pixels are estimated; all other pixels are rejected from the image. Detection and estimation statistics are derived using the multiple scattering center IFSAR model. A 3D image reconstruction algorithm using these statistics is then presented, and its performance is evaluated for a 3D reconstruction of a backhoe from noisy IFSAR data.

  14. Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis.

    PubMed

    Lu, Feng; Sugano, Yusuke; Okabe, Takahiro; Sato, Yoichi

    2015-11-01

    In this paper, we address the problem of free head motion in appearance-based gaze estimation. This problem remains challenging because head motion changes eye appearance significantly, and thus, training images captured for an original head pose cannot handle test images captured for other head poses. To overcome this difficulty, we propose a novel gaze estimation method that handles free head motion via eye image synthesis based on a single camera. Compared with conventional fixed head pose methods with original training images, our method only captures four additional eye images under four reference head poses, and then, precisely synthesizes new training images for other unseen head poses in estimation. To this end, we propose a single-directional (SD) flow model to efficiently handle eye image variations due to head motion. We show how to estimate SD flows for reference head poses first, and then use them to produce new SD flows for training image synthesis. Finally, with synthetic training images, joint optimization is applied that simultaneously solves an eye image alignment and a gaze estimation. Evaluation of the method was conducted through experiments to assess its performance and demonstrate its effectiveness. PMID:26080385

  16. A low-computational approach on gaze estimation with eye touch system.

    PubMed

    Topal, Cihan; Gunal, Serkan; Koçdeviren, Onur; Doğan, Atakan; Gerek, Ömer Nezih

    2014-02-01

    Among various approaches to eye tracking systems, light-reflection based systems with non-imaging sensors, e.g., photodiodes or phototransistors, are known to have relatively low complexity; yet, they provide moderately accurate estimation of the point of gaze. In this paper, a low-computational approach on gaze estimation is proposed using the Eye Touch system, which is a light-reflection based eye tracking system, previously introduced by the authors. Based on the physical implementation of Eye Touch, the sensor measurements are now utilized in low-computational least-squares algorithms to estimate arbitrary gaze directions, unlike the existing light reflection-based systems, including the initial Eye Touch implementation, where only limited predefined regions were distinguished. The system also utilizes an effective pattern classification algorithm to be able to perform left, right, and double clicks based on respective eye winks with significantly high accuracy. In order to avoid accuracy problems for sensitive sensor biasing hardware, a robust custom microcontroller-based data acquisition system is developed. Consequently, the physical size and cost of the overall Eye Touch system are considerably reduced while the power efficiency is improved. The results of the experimental analysis over numerous subjects clearly indicate that the proposed eye tracking system can classify eye winks with 98% accuracy, and attain an accurate gaze direction with an average angular error of about 0.93 °. Due to its lightweight structure, competitive accuracy and low-computational requirements relative to video-based eye tracking systems, the proposed system is a promising human-computer interface for both stationary and mobile eye tracking applications.
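
    The least-squares mapping from sensor readings to gaze coordinates can be sketched with synthetic data; the 4-sensor linear model below is invented for illustration, and Eye Touch's actual features and calibration differ.

```python
import numpy as np

# Synthetic calibration set: 30 samples of 4 sensor readings whose
# hidden linear relation to 2-D gaze coordinates we try to recover.
rng = np.random.default_rng(42)
W_true = rng.standard_normal((4, 2))                  # hidden sensor->gaze map
S = rng.standard_normal((30, 4))                      # sensor readings
G = S @ W_true + 0.01 * rng.standard_normal((30, 2))  # gaze targets + noise

# Least-squares fit of the sensor-to-gaze map, then prediction.
W, *_ = np.linalg.lstsq(S, G, rcond=None)
pred = S @ W
err = np.abs(pred - G).max()
```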

  17. Comparison of 3D-OP-OSEM and 3D-FBP reconstruction algorithms for High-Resolution Research Tomograph studies: effects of randoms estimation methods.

    PubMed

    van Velden, Floris H P; Kloet, Reina W; van Berckel, Bart N M; Wolfensberger, Saskia P A; Lammertsma, Adriaan A; Boellaard, Ronald

    2008-06-21

    The High-Resolution Research Tomograph (HRRT) is a dedicated human brain positron emission tomography (PET) scanner. Recently, a 3D filtered backprojection (3D-FBP) reconstruction method has been implemented to reduce bias in short duration frames, currently observed in 3D ordinary Poisson OSEM (3D-OP-OSEM) reconstructions. Further improvements might be expected using a new method of variance reduction on randoms (VRR) based on coincidence histograms instead of using the delayed window technique (DW) to estimate randoms. The goal of this study was to evaluate VRR in combination with 3D-OP-OSEM and 3D-FBP reconstruction techniques. To this end, several phantom studies and a human brain study were performed. For most phantom studies, 3D-OP-OSEM showed higher accuracy of observed activity concentrations with VRR than with DW. However, both positive and negative deviations in reconstructed activity concentrations and large biases of grey to white matter contrast ratio (up to 88%) were still observed as a function of scan statistics. Moreover 3D-OP-OSEM+VRR also showed bias up to 64% in clinical data, i.e. in some pharmacokinetic parameters as compared with those obtained with 3D-FBP+VRR. In the case of 3D-FBP, VRR showed similar results as DW for both phantom and clinical data, except that VRR showed a better standard deviation of 6-10%. Therefore, VRR should be used to correct for randoms in HRRT PET studies.
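
    For context on randoms estimation, the textbook singles-based expectation R = 2·tau·S_i·S_j can be computed directly. This is shown for illustration only; the paper's VRR method operates on coincidence histograms, and DW counts delayed coincidences instead.

```python
def randoms_rate(singles_i, singles_j, tau):
    """Expected randoms coincidence rate (counts/s) on one line of
    response, for singles rates (counts/s) and coincidence window tau (s)."""
    return 2.0 * tau * singles_i * singles_j

# 4.5 ns window, 100 kcps singles on each of the two detectors:
r = randoms_rate(1e5, 1e5, 4.5e-9)
print(r)  # 90 counts/s per line of response
```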

  19. Estimation of the degree of polarization in low-light 3D integral imaging

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2016-06-01

    The calculation of the Stokes Parameters and the Degree of Polarization in 3D integral images requires a careful manipulation of the polarimetric elemental images. This fact is particularly important if the scenes are taken in low-light conditions. In this paper, we show that the Degree of Polarization can be effectively estimated even when elemental images are recorded with few photons. The original idea was communicated in [A. Carnicer and B. Javidi, "Polarimetric 3D integral imaging in photon-starved conditions," Opt. Express 23, 6408-6417 (2015)]. First, we use the Maximum Likelihood Estimation approach for generating the 3D integral image. Nevertheless, this method produces very noisy images and thus, the degree of polarization cannot be calculated. We suggest using a Total Variation Denoising filter as a way to improve the quality of the generated 3D images. As a result, noise is suppressed but high frequency information is preserved. Finally, the degree of polarization is obtained successfully.
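
    The polarimetric step can be sketched from four linear-polarizer intensity images (0°, 45°, 90°, 135°); circular polarization (S3) is omitted here, which is a simplifying assumption relative to the full Stokes vector.

```python
import numpy as np

def degree_of_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer-angle images."""
    s0 = i0 + i90           # total intensity
    s1 = i0 - i90           # horizontal/vertical preference
    s2 = i45 - i135         # diagonal preference
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

# Fully polarized light at 0 degrees: all intensity passes the 0-degree
# polarizer, half passes at 45/135, none at 90.
dop = degree_of_polarization(np.array([1.0]), np.array([0.5]),
                             np.array([0.0]), np.array([0.5]))
print(dop)  # [1.]
```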

  20. Nonwearable Gaze Tracking System for Controlling Home Appliance

    PubMed Central

    Heo, Hwan; Lee, Jong Man; Jung, Dongwook; Lee, Ji Woo; Park, Kang Ryoung

    2014-01-01

    A novel gaze tracking system for controlling home appliances in 3D space is proposed in this study. Our research is novel in the following four ways. First, we propose a nonwearable gaze tracking system containing frontal-viewing and eye-tracking cameras. Second, our system includes three modes: navigation (for moving the wheelchair depending on the direction of gaze movement), selection (for selecting a specific appliance by gaze estimation), and manipulation (for controlling the selected appliance by gazing at the control panel). The modes can be changed by closing the eyes for a specific time period or by gazing. Third, in the navigation mode, the signal for moving the wheelchair can be triggered according to the direction of gaze movement. Fourth, after a specific home appliance is selected by gazing at it for more than a predetermined time period, a control panel with a 3 × 2 menu is displayed on a laptop computer below the gaze tracking system for manipulation. The user gazes at one of the menu options for a specific time period, which can be manually adjusted according to the user, and the signal for controlling the home appliance can be triggered. The proposed method is shown to have high detection accuracy through a series of experiments. PMID:25298966

  2. Human body 3D posture estimation using significant points and two cameras.

    PubMed

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.
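
    The linear-SVM segmentation idea can be sketched with a tiny hinge-loss classifier on two synthetic pixel features standing in for the normalized color difference and included angle; the real system's features and training procedure differ.

```python
import numpy as np

# Synthetic two-feature pixels: background clusters near (0, 0),
# foreground (human body) near (1, 1).
rng = np.random.default_rng(3)
n = 400
bg = rng.normal(0.0, 0.15, (n, 2))
fg = rng.normal(1.0, 0.15, (n, 2))
X = np.vstack([bg, fg])
y = np.hstack([-np.ones(n), np.ones(n)])

# Full-batch sub-gradient descent on the regularized hinge loss.
w = np.zeros(2); b = 0.0; lr = 0.1; lam = 0.01
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                       # hinge-loss violators
    grad_w = lam * w
    grad_b = 0.0
    if mask.any():
        grad_w -= (y[mask][:, None] * X[mask]).mean(axis=0)
        grad_b -= y[mask].mean()
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```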

  3. Field testing of a 3D automatic target recognition and pose estimation algorithm

    NASA Astrophysics Data System (ADS)

    Ruel, Stephane; English, Chad E.; Melo, Len; Berube, Andrew; Aikman, Doug; Deslauriers, Adam M.; Church, Philip M.; Maheux, Jean

    2004-09-01

    Neptec Design Group Ltd. has developed a 3D Automatic Target Recognition (ATR) and pose estimation technology demonstrator in partnership with the Canadian DND. The system prototype was deployed for field testing at Defence Research and Development Canada (DRDC)-Valcartier. This paper discusses the performance of the developed algorithm using 3D scans acquired with an imaging LIDAR. 3D models of civilian and military vehicles were built using scans acquired with a triangulation laser scanner. The models were then used to generate a knowledge base for the recognition algorithm. A commercial imaging LIDAR was used to acquire test scans of the target vehicles with varying range, pose and degree of occlusion. Recognition and pose estimation results are presented for at least 4 different poses of each vehicle at each test range. Results obtained with targets partially occluded by an artificial plane, vegetation and military camouflage netting are also presented. Finally, future operational considerations are discussed.

  4. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation runs in real time and performs robustly and effectively.
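    The per-grid filtering step can be illustrated with a minimal constant-velocity Kalman filter for one cell's state [position, velocity]. The paper's actual state model and noise settings are not given here; every value below is a made-up example.

    ```python
    import numpy as np

    # Minimal constant-velocity Kalman filter for one grid cell's motion
    # state [position, velocity]. Illustrative values only.

    F = np.array([[1.0, 0.1],   # state transition, dt = 0.1 s
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])  # only position is observed
    Q = np.eye(2) * 1e-4        # process noise
    R = np.array([[1e-2]])      # measurement noise

    def kalman_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the associated measurement z
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x = np.array([0.0, 0.0])
    P = np.eye(2)
    # a cell drifting at 1 m/s, observed every 0.1 s
    for k in range(1, 50):
        x, P = kalman_step(x, P, np.array([k * 0.1]))
    print(round(float(x[1]), 2))  # estimated velocity approaches 1.0
    ```

    In the full method this update runs per moving polar-grid cell, after the data association step has matched each cell to its measurement.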

  5. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation runs in real time and performs robustly and effectively. PMID:25207868

  6. DTI template-based estimation of cardiac fiber orientations from 3D ultrasound

    PubMed Central

    Qin, Xulei; Fei, Baowei

    2015-01-01

    Purpose: Cardiac muscle fibers directly affect the mechanical, physiological, and pathological properties of the heart. Patient-specific quantification of cardiac fiber orientations is an important but difficult problem in cardiac imaging research. In this study, the authors proposed a cardiac fiber orientation estimation method based on three-dimensional (3D) ultrasound images and a cardiac fiber template that was obtained from magnetic resonance diffusion tensor imaging (DTI). Methods: A DTI template-based framework was developed to estimate cardiac fiber orientations from 3D ultrasound images using an animal model. It estimated the cardiac fiber orientations of the target heart by deforming the fiber orientations of the template heart, based on the deformation field of the registration between the ultrasound geometry of the target heart and the MRI geometry of the template heart. In the experiments, the animal hearts were imaged by high-frequency ultrasound, T1-weighted MRI, and high-resolution DTI. Results: The proposed method was evaluated by four different parameters: Dice similarity coefficient (DSC), target errors, acute angle error (AAE), and inclination angle error (IAE). Its ability to estimate cardiac fiber orientations was first validated on a public database. Then, the performance of the proposed method on 3D ultrasound data was evaluated on an acquired database. The average values were 95.4% ± 2.0% for the DSC of the geometric registrations, 21.0° ± 0.76° for AAE, and 19.4° ± 1.2° for IAE of the fiber orientation estimations. Furthermore, the feasibility of this framework was also demonstrated on 3D ultrasound images of a beating heart. Conclusions: The proposed framework demonstrated the feasibility of using 3D ultrasound imaging to estimate the cardiac fiber orientation of in vivo beating hearts; its further improvement could contribute to understanding the dynamic mechanism of the beating heart and has the potential to aid diagnosis and therapy.
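    The core operation — transporting template fiber directions through a registration deformation — can be sketched as reorienting each fiber vector by the local deformation gradient F and renormalizing. This is one common reorientation strategy, not necessarily the exact scheme used in the paper, and the values below are made up.

    ```python
    import numpy as np

    # Sketch of transporting a template fiber direction through a
    # registration deformation: reorient by the local deformation
    # gradient F, then renormalize to unit length.

    def reorient_fiber(fiber, F):
        v = F @ np.asarray(fiber, dtype=float)
        return v / np.linalg.norm(v)

    # local deformation: 20% stretch along x plus a small xy shear
    F = np.array([[1.2, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    fiber = np.array([0.0, 1.0, 0.0])   # template fiber along y
    v = reorient_fiber(fiber, F)
    print(v)  # tilts toward x, stays unit length
    ```

    In the full framework F would be evaluated per voxel from the ultrasound-to-MRI deformation field, and each template DTI fiber vector is mapped this way into the target heart's geometry.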

  7. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, for predicting environmental hazards or for forecasting fossil resources. This paper proposes a structural complexity index which can be used to help in defining the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional efforts are required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort involved in building them is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary for machine learning algorithms to reproduce the actual 3D model at a given precision.

  8. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    PubMed

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D-printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations.
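    Once a closed 3D shape has been reconstructed, its volume can be estimated by voxelizing the shape and counting occupied voxels. The toy below uses a sphere so the estimate can be checked against 4/3·π·r³; the paper's actual reconstruction pipeline is not reproduced here.

    ```python
    import numpy as np

    # Toy volume estimation from a reconstructed 3D shape: voxelize and
    # count occupied voxels. A sphere stands in for the tonsil phantom.

    def voxel_volume(inside_fn, bounds, n):
        """Estimate the volume of {p : inside_fn(p)} on an n^3 grid."""
        lo, hi = bounds
        xs = np.linspace(lo, hi, n)
        X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
        voxel = ((hi - lo) / (n - 1)) ** 3
        return np.count_nonzero(inside_fn(X, Y, Z)) * voxel

    r = 1.0
    est = voxel_volume(lambda x, y, z: x * x + y * y + z * z <= r * r,
                       bounds=(-1.2, 1.2), n=120)
    true = 4.0 / 3.0 * np.pi * r ** 3
    print(round(est, 3), round(true, 3))  # estimates agree closely
    ```

    The discretization error shrinks roughly linearly with the voxel size, so halving the grid spacing roughly halves the volume error.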

  9. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D-printed realistic models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667

  11. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  12. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models.

    PubMed

    Dhou, S; Hurwitz, M; Mishra, P; Cai, W; Rottmann, J; Li, R; Williams, C; Wagar, M; Berbeco, R; Ionascu, D; Lewis, J H

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluated the accuracy of the 3D fluoroscopic images by comparison to ground-truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models was compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
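    A common way to build such a motion model is to reduce the displacement fields of the 4D phases to a mean plus a few principal components, so that any respiratory state is described by a small coefficient vector. The sketch below shows that mechanics on synthetic data; it is an illustration of the PCA idea, not the paper's specific model.

    ```python
    import numpy as np

    # PCA-style motion model sketch: displacement fields from the 4D
    # phases are reduced to mean + principal components; a new state is
    # represented by a small coefficient vector. Synthetic data only.

    rng = np.random.default_rng(0)
    basis_true = rng.normal(size=(2, 300))           # two true motion modes
    phases = rng.normal(size=(10, 2)) @ basis_true   # 10 phases x 300 values

    mean = phases.mean(axis=0)
    U, S, Vt = np.linalg.svd(phases - mean, full_matrices=False)
    components = Vt[:2]                              # keep 2 components

    # project a held-out displacement field onto the model, reconstruct it
    new_field = np.array([0.7, -1.3]) @ basis_true
    coeffs = (new_field - mean) @ components.T
    recon = mean + coeffs @ components
    err = float(np.max(np.abs(recon - new_field)))
    print(err)  # tiny: the new field lies in the span of the learned modes
    ```

    At treatment time, the coefficient vector is the quantity fitted from each 2D kV projection, and the reconstructed displacement field then deforms the reference volume into a 3D fluoroscopic image.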

  13. Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas; Zhang, Geoffrey; Huang, Tzung-Chi; Lin, Kang-Ping

    2004-09-01

    The purpose of this work was to develop and validate an automated method for intrathoracic tumour motion estimation from breath-hold computed tomography (BH CT) imaging using the three-dimensional optical flow method (3D OFM). A modified 3D OFM algorithm provided 3D displacement vectors for each voxel which were used to map tumour voxels on expiration BH CT onto inspiration BH CT images. A thoracic phantom and simulated expiration/inspiration BH CT pairs were used for validation. The 3D OFM was applied to the measured inspiration and expiration BH CT images from one lung cancer and one oesophageal cancer patient. The resulting displacements were plotted in histogram format and analysed to provide insight regarding the tumour motion. The phantom tumour displacement was measured as 1.20 and 2.40 cm with full-width at tenth maximum (FWTM) for the distribution of displacement estimates of 0.008 and 0.006 cm, respectively. The maximum error of any single voxel's motion estimate was 1.1 mm along the z-dimension or approximately one-third of the z-dimension voxel size. The simulated BH CT pairs revealed an rms error of less than 0.25 mm. The displacement of the oesophageal tumours was nonuniform and up to 1.4 cm; this was a new finding. A lung tumour maximum displacement of 2.4 cm was found in the case evaluated. In conclusion, 3D OFM provided an accurate estimation of intrathoracic tumour motion, with estimated errors less than the voxel dimension in a simulated motion phantom study. Surprisingly, oesophageal tumour motion was large and nonuniform, with greatest motion occurring at the gastro-oesophageal junction. Presented at The IASTED Second International Conference on Biomedical Engineering (BioMED 2004), Innsbruck, Austria, 16-18 February 2004.
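    The principle behind optical flow is brightness constancy: intensity moves but does not change, giving I_t + v·I_x = 0, solved in least squares over a region. The one-dimensional toy below recovers a known subpixel shift between two "frames"; it illustrates the principle only, not the paper's modified 3D algorithm.

    ```python
    import numpy as np

    # 1D optical flow sketch: brightness constancy I_t + v * I_x = 0,
    # solved in least squares. A Gaussian bump shifted by a known amount
    # stands in for the expiration/inspiration image pair.

    x = np.linspace(-10.0, 10.0, 401)
    shift = 0.30                                   # true displacement
    f0 = np.exp(-x ** 2)                           # frame 1
    f1 = np.exp(-(x - shift) ** 2)                 # frame 2

    Ix = np.gradient((f0 + f1) / 2.0, x)           # spatial derivative
    It = f1 - f0                                   # temporal derivative
    w = np.abs(Ix) > 1e-3                          # ignore flat regions
    v = -np.sum(Ix[w] * It[w]) / np.sum(Ix[w] ** 2)
    print(round(float(v), 2))  # recovers roughly the 0.30 shift
    ```

    The linearization only holds for displacements small relative to image structure, which is why practical implementations solve on a multi-resolution pyramid for motions as large as the centimetre-scale displacements reported above.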

  14. Estimating 3D tilt from local image cues in natural scenes

    PubMed Central

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) the simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
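    The distribution-free flavour of this analysis can be sketched simply: with many co-registered (ground truth, cue) samples, the L2-optimal (Bayes) estimate for a given cue value is the conditional mean of the truth given that cue, read directly off the samples. The data below are synthetic, and real tilt is a circular variable, which this toy ignores.

    ```python
    import numpy as np

    # Distribution-free conditional-mean estimator: E[truth | cue] is
    # computed directly from binned samples, with no parametric model.
    # Synthetic data only.

    rng = np.random.default_rng(1)
    truth = rng.uniform(0.0, 90.0, size=200_000)          # ground truth
    cue = truth + rng.normal(0.0, 10.0, size=truth.size)  # noisy image cue

    # conditional mean for cue values near 45, read off the samples
    sel = np.abs(cue - 45.0) < 1.0
    m = float(truth[sel].mean())
    print(round(m, 1))  # in the interior, E[truth | cue] tracks the cue
    ```

    Near the edges of the range the same estimator is pulled toward the interior by the prior, which is the sample-based analogue of the prior-driven bias the abstract reports.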

  15. 3D position estimation using an artificial neural network for a continuous scintillator PET detector

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhu, W.; Cheng, X.; Li, D.

    2013-03-01

    Continuous crystal-based PET detectors have features of simple design, low cost, good energy resolution and high detection efficiency. Through single-ended readout of scintillation light, direct three-dimensional (3D) position estimation could be another advantage that the continuous crystal detector would have. In this paper, we propose to use artificial neural networks to simultaneously estimate the plane coordinate and DOI coordinate of incident γ photons with detected scintillation light. Using our experimental setup with an ‘8 + 8’ simplified signal readout scheme, the training data of perpendicular irradiation on the front surface and one side surface are obtained, and the plane (x, y) networks and DOI networks are trained and evaluated. The test results show that the artificial neural network for DOI estimation is as effective as that for plane estimation. The performance of both estimators is presented by resolution and bias. Without bias correction, the resolution of the plane estimator is on average better than 2 mm and that of the DOI estimator is about 2 mm over the whole area of the detector. With bias correction, the resolution at the edge area for plane estimation or at the end of the block away from the readout PMT for DOI estimation becomes worse, as we expect. The comprehensive performance of the 3D positioning by a neural network is assessed by the experimental test data of oblique irradiations. To show the combined effect of the 3D positioning over the whole area of the detector, the 2D flood images of oblique irradiation are presented with and without bias correction.
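    For context, the classical baseline the network competes with is the light-weighted centroid (Anger logic) of the row/column sums from a readout like the ‘8 + 8’ scheme. The sketch below is that baseline on made-up signals, not the paper's network; networks are attractive precisely because this centroid is biased toward the centre near the crystal edges.

    ```python
    import numpy as np

    # Classical Anger-logic baseline: the light-weighted centroid of the
    # eight row (or column) sums gives one coordinate of the event.
    # Signal values below are synthetic.

    def centroid(row_sums):
        pos = np.arange(len(row_sums), dtype=float)   # channel positions
        w = np.asarray(row_sums, dtype=float)
        return float(np.sum(pos * w) / np.sum(w))

    # a symmetric light spread centered between channels 3 and 4
    signals = np.array([1.0, 3.0, 8.0, 14.0, 14.0, 8.0, 3.0, 1.0])
    print(centroid(signals))  # -> 3.5
    ```

    Near an edge, part of the light distribution is truncated, so the weighted mean shifts inward; a trained network can learn to undo exactly that systematic bias.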

  16. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and a time-variant imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity-constrained optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and motion estimations then follow from the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removal and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm. PMID:26930684
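    The greedy solver the paper modifies is standard orthogonal matching pursuit. Below is the textbook single-channel version on a random dictionary, to show the iterate-select-refit structure; the paper's joint-sparsity, chirp-Fourier variant is not reproduced.

    ```python
    import numpy as np

    # Textbook orthogonal matching pursuit (OMP): greedily select the
    # dictionary atom most correlated with the residual, then re-fit all
    # selected atoms by least squares. Synthetic sparse-recovery demo.

    def omp(A, y, k):
        """Recover a k-sparse x with y = A @ x."""
        residual = y.copy()
        support = []
        for _ in range(k):
            # pick the atom most correlated with the residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            # least-squares fit on the current support, update residual
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(2)
    A = rng.normal(size=(60, 120))
    A /= np.linalg.norm(A, axis=0)        # unit-norm atoms
    x_true = np.zeros(120)
    x_true[[5, 40, 99]] = [1.5, -2.0, 0.8]
    x_hat = omp(A, A @ x_true, k=3)
    print(sorted(np.nonzero(x_hat)[0].tolist()))  # the true support {5, 40, 99}
    ```

    In the paper's setting the "atoms" are chirp-Fourier dictionary entries and the support is shared across interferometric channels, which is what the joint-sparsity constraint encodes.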

  18. Estimation of line dimensions in 3D direct laser writing lithography

    NASA Astrophysics Data System (ADS)

    Guney, M. G.; Fedder, G. K.

    2016-10-01

    Two-photon polymerization (TPP) based 3D direct laser writing (3D-DLW) finds application in research areas ranging from photonic and mechanical metamaterials to micro-devices. The most common structures are either single lines or sets of interconnected lines, as in the case of crystals. In order to increase the fidelity of these structures and reach the ultimate resolution, the laser power and scan speed used in the writing process should be chosen carefully. However, the optimization of these writing parameters is an iterative and time-consuming process in the absence of a model for the estimation of line dimensions. To this end, we report a semi-empirical analytic model through simulations and fitting, and demonstrate that it can be used for estimating the line dimensions mostly within one standard deviation of the average values over a wide range of laser power and scan speed combinations. The model delimits the onset of micro-explosions in the photoresist due to over-exposure and of a low degree of conversion due to under-exposure. The model guides the setting of high-fidelity and robust writing parameters for a photonic crystal structure without iteration and in close agreement with the estimated line dimensions. The proposed methodology is generalizable by adapting the model coefficients to any 3D-DLW setup and corresponding photoresist as a means to estimate the line dimensions for tuning the writing parameters.
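    The fitting mechanics behind such a model can be illustrated with a generic power law, w = c·P^a·v^b, fitted by log-linear least squares. This is not the paper's semi-empirical model; the functional form, units, and data below are all made up to show how coefficients are recovered from (P, v, w) measurements.

    ```python
    import numpy as np

    # Generic log-linear fit of line width w against laser power P and
    # scan speed v under an assumed power law w = c * P**a * v**b.
    # Synthetic, noiseless data with known exponents.

    rng = np.random.default_rng(3)
    P = rng.uniform(5.0, 20.0, size=50)     # mW (illustrative units)
    v = rng.uniform(10.0, 100.0, size=50)   # um/s (illustrative units)
    w = 0.2 * P ** 0.5 * v ** -0.25         # synthetic line widths

    # linear in the logs: log w = log c + a*log P + b*log v
    X = np.column_stack([np.ones_like(P), np.log(P), np.log(v)])
    beta, *_ = np.linalg.lstsq(X, np.log(w), rcond=None)
    c, a, b = float(np.exp(beta[0])), float(beta[1]), float(beta[2])
    print(round(a, 3), round(b, 3))  # recovers the exponents 0.5 and -0.25
    ```

    With real measurements the same regression yields coefficient uncertainties from the residuals, which is what allows statements like "within one standard deviation" over the parameter range.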

  19. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, therefore allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.
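    Once an elastic registration yields a local deformation gradient F, strain follows from standard continuum mechanics as the Green-Lagrange tensor E = (FᵀF − I)/2. The sketch below applies this to a made-up F (20% stretch along one axis), as an illustration of the strain computation rather than the paper's full pipeline.

    ```python
    import numpy as np

    # Green-Lagrange strain from a local deformation gradient F:
    # E = (F^T F - I) / 2. F here is a made-up example, not patient data.

    def green_lagrange(F):
        F = np.asarray(F, dtype=float)
        return 0.5 * (F.T @ F - np.eye(F.shape[0]))

    F = np.diag([1.2, 1.0, 1.0])    # 20% stretch along x
    E = green_lagrange(F)
    print(round(float(E[0, 0]), 3))  # -> 0.22, i.e. (1.2**2 - 1)/2
    ```

    In practice F is evaluated per myocardial point from the registration's displacement field, and the tensor is rotated into the local radial/circumferential/longitudinal directions before reporting strain components.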

  20. Strain estimation in 3D by fitting linear and planar data to the March model

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Talbot, Christopher J.

    2016-08-01

    The probability density function associated with the March model is derived and used in a maximum likelihood method to estimate the best fit distribution and 3D strain parameters for a given set of linear or planar data. Typically it is assumed that in the initial state (pre-strain) linear or planar data are uniformly distributed on the sphere, which means the number of strain parameters estimated needs to be reduced so that the numerical technique succeeds. Essentially this requires that the data are rotated into a suitable reference frame prior to analysis. The method has been applied to a suitable example from the Dalradian of SW Scotland and the results obtained are consistent with those from an independent method of strain analysis. Despite March theory having been incorporated deep into the fabric of geological strain analysis, its full potential as a simple, direct 3D strain analytical tool has not been realized. The method developed here may help remedy this situation.

  1. Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.

    PubMed

    Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L

    2015-03-01

    Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained with a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated whether the two techniques produce similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and the derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique.
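    The Bland-Altman statistics used for this kind of method comparison are simple to compute: the mean of the paired differences gives the bias, and bias ± 1.96 standard deviations gives the 95% limits of agreement. The volumes below are made-up numbers, not study data.

    ```python
    import numpy as np

    # Bland-Altman agreement statistics for two measurement methods:
    # bias (mean difference) and 95% limits of agreement.
    # The paired volumes below are illustrative, not study data.

    def bland_altman(a, b):
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = float(d.mean())
        sd = float(d.std(ddof=1))
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    vol_2d = np.array([62.0, 55.0, 71.0, 48.0, 66.0])   # cm^3
    vol_3d = np.array([58.0, 51.0, 65.0, 47.0, 60.0])   # cm^3
    bias, lo, hi = bland_altman(vol_2d, vol_3d)
    print(round(bias, 2))  # positive bias: the 2-D method reads larger
    ```

    A systematic positive bias like this is exactly the pattern the study reports for the 2-D technique relative to the validated 3-D one.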

  2. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure of manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to quantitatively evaluate the object. To ensure efficient quality control, the aim is to be able to state whether or not reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points assumed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  3. Probabilistic 3D object recognition and pose estimation using multiple interpretations generation.

    PubMed

    Lu, Zhaojin; Lee, Sukhan

    2011-12-01

    This paper presents a probabilistic object recognition and pose estimation method using multiple interpretations generation in cluttered indoor environments. How to handle pose ambiguity and uncertainty is the main challenge in most recognition systems. In order to solve this problem, we approach it in a probabilistic manner. First, given a three-dimensional (3D) polyhedral object model, the parallel and perpendicular line pairs, which are detected from stereo images and 3D point clouds, generate pose hypotheses as multiple interpretations, with ambiguity from partial occlusion and fragmentation of 3D lines especially taken into account. Different from the previous methods, each pose interpretation is represented as a region instead of a point in pose space, reflecting the measurement uncertainty. Then, for each pose interpretation, more features around the estimated pose are further utilized as additional evidence for computing the probability using the Bayesian principle in terms of likelihood and unlikelihood. Finally, a fusion strategy is applied to the top ranked interpretations with high probabilities, which are further verified and refined to give a more accurate pose estimation in real time. The experimental results show the performance and potential of the proposed approach in real cluttered domestic environments.

  4. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454
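
    The height-histogram step of the ground segmentation can be sketched as follows (the Gibbs-Markov random field refinement described in the abstract is omitted, and the data and thresholds are illustrative):

```python
import numpy as np

def segment_ground(heights, bin_width=0.2, band=0.3):
    """Height-histogram ground segmentation sketch: take the modal height
    bin as the ground level and label points within a band around it.
    The paper additionally refines these labels with a Gibbs-Markov
    random field, which is omitted here."""
    h = np.asarray(heights, dtype=float)
    edges = np.arange(h.min(), h.max() + bin_width, bin_width)
    counts, edges = np.histogram(h, bins=edges)
    k = counts.argmax()                                # most populated bin
    ground_level = 0.5 * (edges[k] + edges[k + 1])
    return np.abs(h - ground_level) <= band, ground_level

rng = np.random.default_rng(2)
ground = rng.normal(0.0, 0.05, 900)     # points on a flat ground plane (m)
objects = rng.uniform(0.5, 3.0, 100)    # points on trees/buildings (m)
mask, level = segment_ground(np.concatenate([ground, objects]))
print(mask[:900].mean(), mask[900:].mean())  # near 1.0 vs near 0.0
```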

  5. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  7. Detecting and estimating errors in 3D restoration methods using analog models.

    NASA Astrophysics Data System (ADS)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneering methods were first applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later on, owing to improvements in computational capabilities, they were extended to 3D. Currently, there are several academic and commercial restoration solutions: Unfold (Université de Grenoble), Move (Midland Valley Exploration), Kine3D (built on gOcad code, Paradigm) and Dynel3D (igeoss-Schlumberger). We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations arising from the assumptions they need to make. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance for trusting the reconstructions. Checking the reliability and internal consistency of every method, as well as comparing results among restoration tools, is a critical issue never tackled so far because the results cannot be tested against Nature. To overcome this problem we have developed a technique using analog models. We built complex geometric models, inspired by real cases of superposed and/or conical folding, at laboratory scale. The stratigraphic volumes were modeled using EVA (ethylene vinyl acetate) sheets. Their rheology (tensile and tear strength, elongation, density, etc.) and thickness can be chosen among a large number of values

  8. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
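
    The sampling probability P for a single core can be approximated by Monte Carlo simulation under simplifying assumptions of an isotropic Gaussian needle delivery error and a spherical tumor; the paper works with registered tumor surfaces, so this is only a toy analogue:

```python
import numpy as np

def hit_probability(tumor_radius_mm, rms_error_mm=3.5, n=200_000, seed=0):
    """Monte Carlo estimate of the probability that a single biopsy sample
    lands inside a spherical tumor, given an isotropic Gaussian needle
    delivery error with the stated total RMS. A simplified point-target
    stand-in for the paper's surface-based computation."""
    rng = np.random.default_rng(seed)
    sigma = rms_error_mm / np.sqrt(3.0)        # per-axis std from total RMS
    offsets = rng.normal(0.0, sigma, (n, 3))
    return (np.linalg.norm(offsets, axis=1) <= tumor_radius_mm).mean()

for r in (2.0, 5.0, 10.0):
    print(r, hit_probability(r))  # probability grows with tumor radius
```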

  9. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
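
    The first stage, indexing descriptors with Locality Sensitive Hashing, can be sketched with the classic random-hyperplane variant for cosine similarity (a generic illustration, not the authors' implementation or descriptor type):

```python
import numpy as np

class HyperplaneLSH:
    """Random-hyperplane locality sensitive hashing for cosine similarity:
    each descriptor is hashed to a bit string by the signs of its dot
    products with random hyperplanes, so similar descriptors tend to land
    in the same bucket."""
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(np.int8))

    def index(self, vectors):
        for i, v in enumerate(vectors):
            self.buckets.setdefault(self._key(v), []).append(i)

    def query(self, v):
        return self.buckets.get(self._key(v), [])

rng = np.random.default_rng(3)
descriptors = rng.standard_normal((1000, 32))   # made-up surface descriptors
lsh = HyperplaneLSH(32)
lsh.index(descriptors)
print(7 in lsh.query(descriptors[7]))  # True: a descriptor finds its own bucket
```

    Nearby descriptors usually share a bucket, so a query retrieves a small candidate set in sublinear time; the geometric pruning and MAP model selection described in the abstract would then operate on those candidates.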

  10. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, the researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. The researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.
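
    The gradient-descent flavor of such primitive-surface estimation can be illustrated on the paper's example of a sphere, here fitted to noisy 3D surface points rather than image sequences (a simplified, hypothetical setup):

```python
import numpy as np

def fit_sphere(points, iters=500, lr=0.05):
    """Estimate a sphere's center c and radius r from noisy surface points
    by gradient descent on sum((||p - c|| - r)^2) -- a toy analogue of the
    gradient-based primitive-surface estimation discussed in the paper,
    which works from image sequences rather than 3D points."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                         # initial center guess
    r = np.linalg.norm(pts - c, axis=1).mean()   # initial radius guess
    for _ in range(iters):
        d = pts - c
        dist = np.linalg.norm(d, axis=1)
        resid = dist - r                         # signed residual per point
        c += lr * (resid[:, None] * d / dist[:, None]).mean(axis=0)
        r += lr * resid.mean()
    return c, r

# Noisy samples of a sphere centered at (1, 2, 3) with radius 2
rng = np.random.default_rng(7)
dirs = rng.standard_normal((500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 2.0 * dirs + 0.01 * rng.standard_normal((500, 3))
center, radius = fit_sphere(pts)
print(center, radius)  # close to (1, 2, 3) and 2
```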

  11. Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.

    PubMed

    Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide

    2003-03-15

    Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks.
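
    The final step, converting per-tree heights to carbon via an allometric relationship, reduces to a power law summed over every detected tree. The coefficients below are placeholders for illustration, not the paper's fitted values for Japanese cedar:

```python
def stand_carbon_stock(tree_heights_m, a=0.5, b=2.3):
    """Total carbon from a power-law height allometry C = a * h^b summed
    over every detected tree. The coefficients a and b here are
    hypothetical, not the paper's fitted allometry."""
    return sum(a * h**b for h in tree_heights_m)

heights = [12.0, 15.5, 9.8]   # lidar-derived tree heights (m), made up
print(stand_carbon_stock(heights))
```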

  12. Parametric estimation of 3D tubular structures for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Anderson, Pamela G.; Rosenberg, Elizabeth; Kilmer, Misha E.; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L.

    2013-01-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set method (PaLS), our method incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel-based reconstruction. PMID:23411913

  13. 3D Porosity Estimation of the Nankai Trough Sediments from Core-log-seismic Integration

    NASA Astrophysics Data System (ADS)

    Park, J. O.

    2015-12-01

    The Nankai Trough off southwest Japan is one of the best subduction zones in which to study megathrust earthquake faults. Historic great megathrust earthquakes with a recurrence interval of 100-200 yr have generated strong motion and large tsunamis along the Nankai Trough subduction zone. At the Nankai Trough margin, the Philippine Sea Plate (PSP) is being subducted beneath the Eurasian Plate to the northwest at a convergence rate of ~4 cm/yr. The Shikoku Basin, the northern part of the PSP, is estimated to have opened between 25 and 15 Ma by backarc spreading of the Izu-Bonin arc. The >100-km-wide Nankai accretionary wedge, which has developed landward of the trench since the Miocene, mainly consists of offscraped and underplated materials from the trough-fill turbidites and the Shikoku Basin hemipelagic sediments. In particular, the physical properties of the incoming hemipelagic sediments may be critical for the seismogenic behavior of the megathrust fault. We have carried out core-log-seismic integration (CLSI) to estimate 3D acoustic impedance and porosity for the incoming sediments in the Nankai Trough. For the CLSI, we used 3D seismic reflection data together with P-wave velocity and density data obtained during IODP (Integrated Ocean Drilling Program) Expeditions 322 and 333. We computed acoustic impedance depth profiles for the IODP drilling sites from the P-wave velocity and density data. We then constructed seismic convolution models from the acoustic impedance profiles and a source wavelet extracted from the seismic data, adjusting the models to the observed seismic traces by inversion. As a result, we obtained a 3D acoustic impedance volume and converted it to a 3D porosity volume. In general, the 3D porosities decrease with depth. We found a porosity anomaly zone with an alternation of high and low porosities seaward of the trough axis. In this talk, we will show detailed 3D porosity of the incoming sediments, and present implications of the porosity anomaly zone for the
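
    The convolutional forward model at the heart of such core-log-seismic integration (impedance from velocity and density, reflectivity from impedance contrasts, trace from reflectivity convolved with a wavelet) can be sketched as follows, with made-up layer values:

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet of peak frequency f (Hz), sampled every dt seconds."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def synthetic_trace(velocity, density, wavelet):
    """Convolutional seismic model: acoustic impedance Z = rho * v,
    reflection coefficients R_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i),
    trace = R convolved with the wavelet. A minimal sketch of the forward
    model inverted in the paper's CLSI workflow."""
    Z = np.asarray(density, dtype=float) * np.asarray(velocity, dtype=float)
    R = (Z[1:] - Z[:-1]) / (Z[1:] + Z[:-1])
    return np.convolve(R, wavelet, mode="same")

v = 1500.0 + 20.0 * np.arange(60)   # velocity increasing with depth (m/s), made up
rho = 1.6 + 0.01 * np.arange(60)    # density (g/cm^3), made up
trace = synthetic_trace(v, rho, ricker(25.0, 0.004))
print(trace.shape)  # one sample per layer interface: (59,)
```

    Inverting this model (adjusting impedance until the synthetic matches the observed trace) yields the impedance volume that is then converted to porosity.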

  14. Spatiotemporal non-rigid image registration for 3D ultrasound cardiac motion estimation

    NASA Astrophysics Data System (ADS)

    Loeckx, D.; Ector, J.; Maes, F.; D'hooge, J.; Vandermeulen, D.; Voigt, J.-U.; Heidbüchel, H.; Suetens, P.

    2007-03-01

    We present a new method to evaluate 4D (3D + time) cardiac ultrasound data sets by nonrigid spatio-temporal image registration. First, a frame-to-frame registration is performed that yields a dense deformation field. The deformation field is used to calculate local spatiotemporal properties of the myocardium, such as the velocity, strain and strain rate. The field is also used to propagate particular points and surfaces, representing e.g. the endocardial surface, over the different frames. As such, the 4D path of these points is obtained, which can be used to calculate the velocity by which the wall moves and the evolution of the local surface area over time. The wall velocity is not angle-dependent as in classical Doppler imaging, since the 4D data allow calculation of the true 3D motion. Similarly, all 3D myocardium strain components can be estimated. Combined, they result in local surface area or volume changes which can be color-coded as a measure of local contractility. A diagnostic method that strongly benefits from this technique is cardiac motion and deformation analysis, which is an important aid to quantifying the mechanical properties of the myocardium.
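
    Deriving strain from a dense deformation field amounts to differentiating the displacement. A minimal sketch using the infinitesimal strain tensor on a synthetic field (the clinical pipeline works on registered ultrasound frames):

```python
import numpy as np

def infinitesimal_strain(u, spacing=1.0):
    """Infinitesimal strain tensor field eps = 0.5 * (grad u + grad u^T)
    from a dense 3D displacement field u of shape (3, nx, ny, nz) -- the
    quantity a frame-to-frame deformation field is differentiated into."""
    grads = np.stack([np.stack(np.gradient(u[i], spacing), axis=0)
                      for i in range(3)])     # grads[i, j] = du_i / dx_j
    return 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))

# Uniform 1% stretch along x: u_x = 0.01 * x, so eps_xx should be 0.01
nx = ny = nz = 8
x = np.arange(nx, dtype=float)
u = np.zeros((3, nx, ny, nz))
u[0] = 0.01 * x[:, None, None]
eps = infinitesimal_strain(u)
print(eps[0, 0].mean())  # approximately 0.01
```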

  15. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments.

    PubMed

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system's capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
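
    The 3D point reconstruction step of such a stereo rig can be sketched with midpoint triangulation of two camera rays, a textbook method stated here independently of the paper's pipeline:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: the 3D point closest to two camera rays,
    each given by an origin o and a direction d. Solves for the ray
    parameters s, t minimizing |(o1 + s*d1) - (o2 + t*d2)| and returns
    the midpoint of the two closest points."""
    d1, d2 = (np.asarray(d, dtype=float) / np.linalg.norm(d) for d in (d1, d2))
    o1, o2 = np.asarray(o1, dtype=float), np.asarray(o2, dtype=float)
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    det = a11 * a22 - a12 ** 2              # near zero for parallel rays
    s = (a22 * (d1 @ b) - a12 * (d2 @ b)) / det
    t = (a12 * (d1 @ b) - a11 * (d2 @ b)) / det
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two cameras 1 m apart, both looking at the point (0.5, 0.2, 3.0)
p = np.array([0.5, 0.2, 3.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(triangulate_midpoint(o1, p - o1, o2, p - o2))  # recovers p
```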

  16. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  17. Super-resolved position and orientation estimation of fluorescent dipoles using 3-D steerable filters

    NASA Astrophysics Data System (ADS)

    Geissbuehler, S.; Aguet, F.; Maerki, I.; Lasser, T.

    2010-02-01

    The diffraction patterns of fixed fluorophores are characteristic of the orientation of the molecules' underlying dipole. Fluorescence localization microscopy techniques such as PALM and STORM achieve super-resolution by sequentially imaging sparse subsets of fluorophores, which are localized by means of Gaussian-based localization. This approach is based on the assumption of isotropic emitters, where the diffraction pattern corresponds to a section of the point spread function. Applied to fixed fluorophores, it can lead to an estimation bias in the range of 5-20 nm. We introduce a method for the joint estimation of position and orientation of single fluorophores, based on an accurate image formation model expressed as a 3-D steerable filter. We demonstrate experimental estimation accuracies of 5 nm for position and 2 degrees for orientation.

  18. First order error propagation of the procrustes method for 3D attitude estimation.

    PubMed

    Dorst, Leo

    2005-02-01

    The well-known Procrustes method determines the optimal rigid body motion that registers two point clouds by minimizing the square distances of the residuals. In this paper, we perform the first order error analysis of this method for the 3D case, fully specifying how directional noise in the point clouds affects the estimated parameters of the rigid body motion. These results are much more specific than the error bounds which have been established in numerical analysis. We provide an intuitive understanding of the outcome to facilitate direct use in applications.
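
    The Procrustes method analyzed in the paper has a standard closed-form solution via SVD of the cross-covariance (the Kabsch algorithm); a minimal sketch:

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares rigid registration (the Procrustes problem): rotation
    R and translation t minimizing sum |R p + t - q|^2 over corresponding
    rows of P and Q, via SVD of the centered cross-covariance."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation about z and a translation from noiseless clouds
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = np.random.default_rng(4).standard_normal((50, 3))
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = procrustes_rigid(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```

    The paper's contribution is a first-order analysis of how directional noise added to P and Q perturbs this R and t.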

  19. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
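
    The density-estimation core of such a method is a 3D Gaussian kernel density estimate; the movement-based conditioning and the HPC optimizations described in the abstract are omitted in this sketch:

```python
import numpy as np

def kde3d(points, query, bandwidth=1.0):
    """Plain 3D Gaussian kernel density estimate evaluated at query
    locations. A minimal stand-in for the density step of a
    movement-based estimator; the movement model itself is omitted."""
    P = np.asarray(points, dtype=float)
    Q = np.asarray(query, dtype=float)
    d2 = ((Q[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # squared distances
    norm = (2 * np.pi * bandwidth ** 2) ** 1.5            # 3D Gaussian constant
    return np.exp(-0.5 * d2 / bandwidth ** 2).sum(1) / (len(P) * norm)

rng = np.random.default_rng(5)
fixes = rng.normal(0.0, 1.0, (2000, 3))     # simulated GPS fixes around a den
dens = kde3d(fixes, np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]]))
print(dens[0] > dens[1])  # density is higher at the cluster center: True
```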

  20. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
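
    The motion-model idea (PCA over respiratory-sorted deformation fields, then least-squares fitting of the mode coefficients to a sparse fast-2D observation) can be sketched on synthetic data; the field size, phase count and sampling pattern below are made up:

```python
import numpy as np

def fit_motion_model(dvfs, n_components=2):
    """PCA motion model over respiratory-sorted deformation fields: each
    training DVF is flattened to a vector, and the kept principal
    components span the subject's motion. A generic sketch of the model
    class used in the paper, not its implementation."""
    X = np.asarray(dvfs, dtype=float)        # (n_phases, n_values)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]           # mean field + principal modes

def estimate_dvf(mean, modes, obs_idx, obs_values):
    """Recover a full DVF from a sparse observation (e.g. voxels on a fast
    2D slice) by least-squares fit of the mode coefficients."""
    A = modes[:, obs_idx].T                  # observed rows of the modes
    coeff, *_ = np.linalg.lstsq(A, obs_values - mean[obs_idx], rcond=None)
    return mean + coeff @ modes

# Synthetic one-mode "breathing": displacement = sin(phase) * fixed pattern
rng = np.random.default_rng(6)
pattern = rng.standard_normal(300)
phases = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
train = np.outer(np.sin(phases), pattern)    # ten sorted respiratory phases
mean, modes = fit_motion_model(train, n_components=1)
truth = np.sin(1.3) * pattern                # field at an unseen phase
obs_idx = np.arange(0, 300, 10)              # observe 30 of 300 values
rec = estimate_dvf(mean, modes, obs_idx, truth[obs_idx])
print(np.allclose(rec, truth))  # True: one mode explains the whole field
```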

  2. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy.

    PubMed

    Stemkens, Bjorn; Tijssen, Rob H N; de Senneville, Baudouin Denis; Lagendijk, Jan J W; van den Berg, Cornelis A T

    2016-07-21

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  3. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan

    2016-04-01

Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, primarily caused by the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10-60,000 for a typical 3D B-mode image of 250³ and 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68 ± 0.40 mm and 0.75 ± 0.43 mm, respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization, respectively. The analytic solution matched the performance of the numeric solution, as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits-of-agreement.
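The numeric-versus-analytic trade-off in the abstract above can be illustrated on a toy 1D displacement field, where the bending-energy regularizer has a known closed form. The field and its parameter are invented for illustration; the paper's actual closed form is specific to its transformation model.

```python
import numpy as np

# Toy 1D displacement field u(x) = a*x**2 on [0, 1]; u'' = 2a, so the
# bending energy  integral of (u'')^2 dx  has the closed form 4*a**2.
a = 0.3
x = np.linspace(0.0, 1.0, 10001)
u = a * x**2

# Numeric route: finite-difference second derivative at every sample,
# the per-voxel work the abstract identifies as the bottleneck.
d2u = np.gradient(np.gradient(u, x), x)
numeric_energy = np.trapz(d2u**2, x)

# Analytic route: evaluate the closed form directly from the parameter.
analytic_energy = 4.0 * a**2

print(numeric_energy, analytic_energy)  # both ≈ 0.36
```

The analytic route costs a handful of arithmetic operations per parameter instead of derivative stencils at every voxel, which is where the reported speed-up comes from.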

  4. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
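The "simple analytic geometry" step described above amounts to back-projecting the touched pixel through the calibrated camera and intersecting the viewing ray with a known plane. The intrinsics and plane below are invented for illustration; the paper's geometry is specific to its robot setup.

```python
import numpy as np

# Hypothetical calibration: focal lengths and principal point of the camera.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])

def pixel_to_3d(u, v, plane_z):
    """Back-project a touched pixel onto a known plane z = plane_z
    (metres, camera frame) by simple analytic geometry."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray direction
    return ray * (plane_z / ray[2])                  # scale the ray to hit the plane

# Clicking the principal point should land straight ahead on the plane.
p = pixel_to_3d(320.0, 240.0, 2.0)
print(p)  # → [0. 0. 2.]
```

No iterative algorithm is needed: one matrix-vector product and one scaling give the 3D coordinates.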

  5. Estimating the actual dose delivered by intravascular coronary brachytherapy using geometrically correct 3D modeling

    NASA Astrophysics Data System (ADS)

    Wahle, Andreas; Lopez, John J.; Pennington, Edward C.; Meeks, Sanford L.; Braddy, Kathleen C.; Fox, James M.; Brennan, Theresa M. H.; Buatti, John M.; Rossen, James D.; Sonka, Milan

    2003-05-01

Intravascular brachytherapy has been shown to reduce recurrence of in-stent restenosis in coronary arteries. For beta radiation, application time is determined from source activity and the angiographically estimated vessel diameter. Conventionally used dosing models assume a straight vessel with the catheter centered and a constant-diameter circular cross section. The aim of this study was to compare the actual dose delivered during in-vivo intravascular brachytherapy with the target range determined from the patient's prescribed dose. Furthermore, differences in dose distribution between a simplified tubular model (STM) and a geometrically correct 3-D model (GCM) obtained from fusion between biplane angiography and intravascular ultrasound were quantified. The tissue enclosed by the segmented lumen/plaque and media/adventitia borders was simulated using a structured finite-element mesh. The beta-radiation sources were modeled as 3-D objects in their angiographically determined locations. The accumulated dose was estimated using a fixed distance function based on the patient-specific radiation parameters. For visualization, the data were converted to VRML with the accumulated doses represented by color encoding. The statistical comparison between the STM and GCM models in 8 patients showed that the STM significantly underestimates the dose delivered and its variability. The analysis revealed substantial deviations from the target dose range in curved vessels.

  6. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, not requiring precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
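The regression step mentioned at the end of the abstract can be sketched as ordinary least squares from scan-derived girths and lengths to DXA body fat. All numbers below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical measurements: waist girth, hip girth, trunk length (cm)
# for a few subjects, with DXA body-fat percentages as the target.
X = np.array([[80.0, 95.0, 50.0],
              [95.0, 105.0, 52.0],
              [110.0, 115.0, 55.0],
              [70.0, 90.0, 48.0]])
y = np.array([18.0, 27.0, 35.0, 12.0])   # DXA body fat (%)

# Ordinary least squares with an intercept column.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_fat(girths):
    """Predict body fat (%) from a vector of girth/length measurements."""
    return np.append(girths, 1.0) @ coef

print(round(predict_fat(np.array([90.0, 100.0, 51.0])), 1))
```

In practice the predictors would be the automatically extracted maximal girths and lengths the abstract highlights as most correlated with body fat.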

  7. 3D global estimation and augmented reality visualization of intra-operative X-ray dose.

    PubMed

    Rodas, Nicolas Loy; Padoy, Nicolas

    2014-01-01

    The growing use of image-guided minimally-invasive surgical procedures is confronting clinicians and surgical staff with new radiation exposure risks from X-ray imaging devices. The accurate estimation of intra-operative radiation exposure can increase staff awareness of radiation exposure risks and enable the implementation of well-adapted safety measures. The current surgical practice of wearing a single dosimeter at chest level to measure radiation exposure does not provide a sufficiently accurate estimation of radiation absorption throughout the body. In this paper, we propose an approach that combines data from wireless dosimeters with the simulation of radiation propagation in order to provide a global radiation risk map in the area near the X-ray device. We use a multi-camera RGBD system to obtain a 3D point cloud reconstruction of the room. The positions of the table, C-arm and clinician are then used 1) to simulate the propagation of radiation in a real-world setup and 2) to overlay the resulting 3D risk-map onto the scene in an augmented reality manner. By using real-time wireless dosimeters in our system, we can both calibrate the simulation and validate its accuracy at specific locations in real-time. We demonstrate our system in an operating room equipped with a robotised X-ray imaging device and validate the radiation simulation on several X-ray acquisition setups. PMID:25333145
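A heavily simplified version of the abstract's simulate-then-calibrate idea: model dose rate as inverse-square fall-off from the source, calibrate the source strength against one wireless dosimeter reading, and evaluate a risk map on a grid. All positions and readings are invented; the paper's propagation simulation is far more detailed.

```python
import numpy as np

# Toy inverse-square propagation model, calibrated by one dosimeter reading.
source = np.array([0.0, 0.0, 1.0])          # X-ray source position (m)
dosimeter_pos = np.array([1.0, 0.0, 1.0])   # dosimeter 1 m from the source
dosimeter_reading = 2.0                     # measured dose rate (arbitrary units)

# Calibrate the source strength so the model reproduces the measurement.
r2 = np.sum((dosimeter_pos - source) ** 2)
strength = dosimeter_reading * r2           # reading = strength / r^2

# Evaluate the risk map on a coarse grid at table height.
xs = np.linspace(-2, 2, 5)
grid = np.array([[x, y, 1.0] for x in xs for y in xs])
d2 = np.sum((grid - source) ** 2, axis=1)
risk = strength / np.maximum(d2, 1e-6)      # avoid the singularity at the source
```

In the paper this map is then overlaid onto the RGBD reconstruction of the room in an augmented-reality view.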

  8. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrates that this approach has great potential for mobile robots.

  9. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673
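For orientation, the registration machinery the abstract builds on can be illustrated with a minimal rigid ICP loop (nearest-neighbour correspondences plus an SVD/Procrustes alignment). This is a deliberately simplified stand-in, not the paper's non-rigid/articulated combination, and the point sets are synthetic.

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    """Minimal rigid ICP: nearest-neighbour matching followed by a
    Procrustes (SVD) alignment step, repeated."""
    src = src.copy()
    for _ in range(iters):
        # Correspondences: nearest point in dst for every src point.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        # Best rigid transform from the cross-covariance SVD.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_m
    return src

rng = np.random.default_rng(1)
dst = rng.standard_normal((40, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta), np.cos(theta), 0],
               [0, 0, 1.0]])
src = dst @ Rz.T + np.array([0.2, -0.1, 0.05])
aligned = icp_rigid(src, dst)
print(np.abs(aligned - dst).max())  # residual after alignment
```

The paper's contribution is precisely to go beyond this rigid setting, handling non-rigid and articulated deformations without an initial alignment.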

  10. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrates that this approach has great potential for mobile robots. PMID:27023556

  11. Estimation of 3D myocardial motion from tagged MRI using LDDMM

    NASA Astrophysics Data System (ADS)

    Kotamraju, Vinay; McVeigh, Elliot; Beg, Mirza Faisal

    2007-03-01

Non-invasive estimation of regional cardiac function is important for assessment of myocardial contractility. The use of the MR tagging technique enables acquisition of intra-myocardial tissue motion by placing a spatially modulated pattern of magnetization whose deformation with the myocardium over the cardiac cycle can be imaged. Quantitative computation of parameters such as wall thickening, shearing, rotation, torsion and strain within the myocardium is traditionally achieved by processing the tag-marked MR image frames to 1) segment the tag lines and 2) detect the correspondence between points across the time-indexed frames. In this paper, we describe our approach to solving this problem using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) algorithm, in which tag-line segmentation and motion reconstruction occur simultaneously. Our method differs from previously proposed non-rigid registration-based cardiac motion estimation methods in that our matching cost incorporates image intensity overlap via the L2 norm and the estimated transformations are diffeomorphic. We also present a novel method of generating synthetic tag line images with known ground truth and motion characteristics that closely follow those in the original data; these can be used for validation of motion estimation algorithms. Initial validation shows that our method is able to accurately segment tag-lines and estimate a dense 3D motion field describing the motion of the myocardium in both the left and the right ventricle.
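The synthetic-validation idea in the abstract can be sketched in 2D: generate a sinusoidal (SPAMM-like) tag pattern, then warp it with a prescribed displacement so the ground-truth motion is known exactly. Pattern parameters and the displacement field below are invented for illustration.

```python
import numpy as np

# Synthetic tagged image with known motion for validating motion estimators.
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx].astype(float)

tag_period = 8.0                      # pixels between tag lines
reference = 0.5 * (1 + np.cos(2 * np.pi * x / tag_period))

# Prescribed ground-truth displacement: a smooth horizontal shear.
ux = 2.0 * np.sin(2 * np.pi * y / ny)

# Deformed frame: sample the tag pattern at the pulled-back coordinates.
deformed = 0.5 * (1 + np.cos(2 * np.pi * (x - ux) / tag_period))
assert deformed.shape == reference.shape
```

An estimator run on (reference, deformed) can then be scored against the known field ux, which is the role synthetic tag images play in the paper's validation.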

  12. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In a therapy guidance scenario MRI, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.

  13. On-line 3D motion estimation using low resolution MRI.

    PubMed

    Glitzner, M; de Senneville, B Denis; Lagendijk, J J W; Raaymakers, B W; Crijns, S P M

    2015-08-21

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)3 voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)3. In a therapy guidance scenario MRI, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.
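The study's central comparison can be mimicked on a toy volume: take a smooth 3D field, downsample it by a factor of 2 in each dimension, and measure how well the fine-grid values are recovered. The field and grid size are invented; the paper works with real 4D abdominal MRI and a full motion estimator.

```python
import numpy as np

# Smooth synthetic 3D "deformation" field on a 64^3 grid.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n] / n
field = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

coarse = field[::2, ::2, ::2]            # factor-2 spatial downsampling

# Recover the fine grid by nearest-neighbour upsampling and compare.
recovered = np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2)
err = np.abs(recovered - field).max()
print(err)
```

For a smooth field the recovery error stays small, which is the intuition behind the paper's finding that factor-2 undersampling preserves the estimated deformations while roughly quadrupling imaging speed.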

  14. Estimation of foot pressure from human footprint depths using 3D scanner

    NASA Astrophysics Data System (ADS)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the z-max coordinate minus the z-min coordinate, and the average foot pressure is calculated as the GRF divided by the foot contact area, corresponding to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z-coordinates were then sorted from the highest to the lowest value using Microsoft Excel to display footprint depths in different colors. This research is only a qualitative study, because it does not use a foot pressure device as a comparator; the resulting maximum pressure is 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsal and hallux.
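The two quantities the abstract computes reduce to plain arithmetic; the numbers below are invented for illustration, not the study's measurements.

```python
# Deepest footprint point: z-max minus z-min of the scanned surface.
z_max, z_min = 12.4, 9.1            # mm, hypothetical scan extremes
deepest_point = z_max - z_min       # 3.3 mm

# Average foot pressure: ground reaction force over contact area.
grf = 700.0                         # ground reaction force (N), hypothetical
contact_area = 190.0                # foot contact area (cm^2), hypothetical
mean_pressure = grf / contact_area  # N/cm^2
print(round(deepest_point, 1), round(mean_pressure, 2))  # → 3.3 3.68
```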

  15. Automated Segmentation of the Right Ventricle in 3D Echocardiography: A Kalman Filter State Estimation Approach.

    PubMed

    Bersvendsen, Jorn; Orderud, Fredrik; Massey, Richard John; Fosså, Kristian; Gerard, Olivier; Urheim, Stig; Samset, Eigil

    2016-01-01

As the right ventricle's (RV) role in cardiovascular diseases is being more widely recognized, interest in RV imaging, function and quantification is growing. However, there are currently few RV quantification methods for 3D echocardiography presented in the literature or commercially available. In this paper we propose an automated RV segmentation method for 3D echocardiographic images. We represent the RV geometry by a Doo-Sabin subdivision surface with deformation modes derived from a training set of manual segmentations. The segmentation is then represented as a state estimation problem and solved with an extended Kalman filter by combining the RV geometry with a motion model and edge detection. Validation was performed by comparing surface-surface distances, volumes and ejection fractions in 17 patients with aortic insufficiency between the proposed method, magnetic resonance imaging (MRI), and a manual echocardiographic reference. The algorithm was efficient with a mean computation time of 2.0 s. The mean absolute distances between the proposed and manual segmentations were 3.6 ± 0.7 mm. Good agreement of end-diastolic volume, end-systolic volume and ejection fraction with respect to MRI (-26 ± 24 mL, -16 ± 26 mL and 0 ± 10%, respectively) and a manual echocardiographic reference (7 ± 30 mL, 13 ± 17 mL and -5 ± 7%, respectively) was observed.
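The paper solves segmentation as state estimation with an extended Kalman filter over a subdivision-surface state; a minimal linear 1D analogue shows the same predict/update cycle. All values here are invented.

```python
# Minimal 1D linear Kalman filter: predict with a motion model, then
# update with a noisy "edge detection" measurement.
F, Q = 1.0, 0.01        # motion model and process noise
H, R = 1.0, 0.25        # measurement model and measurement noise

x_est, P = 0.0, 1.0     # initial state and covariance
for z in [1.1, 0.9, 1.05, 1.0]:          # noisy measurements of a state near 1
    # Predict.
    x_pred, P_pred = F * x_est, F * P * F + Q
    # Update with the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_est = x_pred + K * (z - H * x_pred)
    P = (1 - K * H) * P_pred
print(round(x_est, 2))
```

In the paper the state is the surface's pose and deformation-mode weights, and the measurements are edge-detection displacements along the surface normals, but the filter structure is the same.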

  16. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
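The conversion the abstract describes, combining eye-tracking data with head orientation into gaze information, reduces to rotating the eye-in-head direction by the head's rotation. The 30-degree yaw below is an invented example, not data from the PKD system.

```python
import numpy as np

# Head orientation (from motion capture) as a rotation matrix, here a
# hypothetical 30-degree yaw; maps head-frame vectors to world frame.
yaw = np.deg2rad(30.0)
head_R = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw), np.cos(yaw), 0],
                   [0, 0, 1.0]])

eye_dir = np.array([1.0, 0.0, 0.0])   # eye tracker: looking straight ahead
gaze_world = head_R @ eye_dir         # gaze direction in world coordinates
print(gaze_world)                     # components: cos(30°), sin(30°), 0
```

Intersecting this world-frame ray with the display surface then yields the gaze point used to evaluate tracking and infer user intent.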

  17. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of single-cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings, and it is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.
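Once the segmentation yields a binary mask, the volume computation itself is simple: voxel count times voxel volume. The mask and voxel spacings below are invented for illustration.

```python
import numpy as np

# Hypothetical binary segmentation of one cell in a confocal stack.
mask = np.zeros((50, 50, 20), dtype=bool)
mask[10:40, 10:40, 5:15] = True          # pretend this is the segmented cell

dx, dy, dz = 0.2, 0.2, 0.5               # voxel size in micrometres (invented)
volume_um3 = mask.sum() * dx * dy * dz
print(round(volume_um3, 3))  # → 180.0  (9000 voxels × 0.02 µm³)
```

The hard part, and the paper's contribution, is producing a mask that is robust to acquisition settings; the volume then follows directly.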

  18. 3D viscosity maps for Greenland and effect on GRACE mass balance estimates

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Xu, Zheng

    2016-04-01

The GRACE satellite mission measures mass loss of the Greenland ice sheet. To correct for glacial isostatic adjustment (GIA), numerical models are used. Although generally found to be a small signal, the full range of possible GIA models has not been explored yet. In particular, low viscosities due to a wet mantle and high temperatures due to the nearby Iceland hotspot could have a significant effect on GIA gravity rates. The goal of this study is to present a range of possible viscosity maps, and investigate the effect on GRACE mass balance estimates. Viscosity is derived using flow laws for olivine. Mantle temperature is computed from global seismology models, based on temperature derivatives for different mantle compositions. An indication of grain sizes is obtained from xenolith findings at a few locations. We also investigate the weakening effect of the presence of melt. To calculate gravity rates, we use a finite-element GIA model with the 3D viscosity maps and the ICE-5G loading history. GRACE mass balances for mascons in Greenland are derived with a least-squares inversion, using separate constraints for the inland and coastal areas in Greenland. Biases in the least-squares inversion are corrected using scale factors estimated from a simulation based on a surface mass balance model (Xu et al., submitted to The Cryosphere). Model results show enhanced gravity rates in the west and south of Greenland with 3D viscosity maps, compared to GIA models with 1D viscosity. The effect on regional mass balance is up to 5 Gt/year. Regional low viscosity can make present-day gravity rates sensitive to ice thickness changes in the last decades. Therefore, an improved ice loading history for these time scales is needed.
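Deriving viscosity from an olivine flow law, as the abstract describes, can be sketched with a standard dislocation-creep form. The parameter values below are illustrative textbook-style numbers, not the study's calibration.

```python
import math

# Dislocation-creep flow law: strain_rate = A * sigma^n * exp(-(E + P*V)/(R*T)),
# giving an effective viscosity eta = sigma / (2 * strain_rate).
A = 1.1e5          # pre-exponent (MPa^-n s^-1), illustrative
n = 3.5            # stress exponent
E = 530e3          # activation energy (J/mol)
V = 15e-6          # activation volume (m^3/mol)
R = 8.314          # gas constant (J/mol/K)

def viscosity(sigma_mpa, T_kelvin, P_pa):
    """Effective viscosity (Pa·s) at stress sigma_mpa, temperature T, pressure P."""
    rate = A * sigma_mpa**n * math.exp(-(E + P_pa * V) / (R * T_kelvin))
    return (sigma_mpa * 1e6) / (2.0 * rate)

# Hotter mantle -> lower viscosity: the hotspot effect the study explores.
print(viscosity(1.0, 1600.0, 3e9) < viscosity(1.0, 1500.0, 3e9))  # → True
```

In the study, temperature comes from seismology-derived maps and grain size from xenoliths, so the resulting viscosity varies in 3D rather than taking single values as here.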

19. A Hierarchical Bayesian Approach for Earthquake Location and Data Uncertainty Estimation in 3D Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Arroucau, P.; Custodio, S.

    2014-12-01

Solving inverse problems requires an estimate of data uncertainties. This usually takes the form of a data covariance matrix, which determines the shape of the model posterior distribution. Yet those uncertainties are not always known precisely, and it is common practice to simply set them to a fixed, reasonable value. In the case of earthquake location, the hypocentral parameters (longitude, latitude, depth and origin time) are typically inverted for using seismic phase arrival times. But quantitative data variance estimates are rarely provided. Instead, arrival time catalogs usually associate phase picks with a quality factor, which is subsequently interpreted more or less arbitrarily in terms of data uncertainty in the location procedure. Here, we present a hierarchical Bayesian algorithm for earthquake location in 3D heterogeneous media, in which not only the earthquake hypocentral parameters but also the P- and S-wave arrival time uncertainties are inverted for, hence allowing more realistic posterior model covariance estimates. Forward modeling is achieved by means of the Fast Marching Method (FMM), an eikonal solver which has the ability to take interfaces into account, so direct, reflected and refracted phases can be used in the inversion. We illustrate the ability of our algorithm to retrieve earthquake hypocentral parameters as well as data uncertainties through synthetic examples and using a subset of arrival time catalogs for mainland Portugal and its Atlantic margin.
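The hierarchical idea, sampling the data uncertainty alongside the model parameters, can be shown on a toy 1D "location" problem with a Metropolis sampler. Everything here (stations, velocity, noise, priors) is invented; the paper's forward model is a 3D eikonal solver, not straight-ray travel times.

```python
import numpy as np

# Toy hierarchical Bayes: jointly sample a 1D source position and the
# unknown arrival-time noise sigma from synthetic travel-time data.
rng = np.random.default_rng(2)
stations = np.array([0.0, 4.0, 10.0])
true_x, v = 3.0, 5.0
data = np.abs(stations - true_x) / v + rng.normal(0, 0.05, 3)

def log_post(x, log_sigma):
    sigma = np.exp(log_sigma)
    resid = data - np.abs(stations - x) / v
    return -0.5 * np.sum((resid / sigma) ** 2) - len(data) * log_sigma

x, ls = 5.0, 0.0                      # initial position and log-sigma
lp = log_post(x, ls)
samples = []
for _ in range(5000):                 # Metropolis random walk
    xp, lsp = x + rng.normal(0, 0.5), ls + rng.normal(0, 0.2)
    lpp = log_post(xp, lsp)
    if np.log(rng.uniform()) < lpp - lp:
        x, ls, lp = xp, lsp, lpp
    samples.append((x, np.exp(ls)))
xs = np.array([s[0] for s in samples[1000:]])
print(round(xs.mean(), 1))
```

The sigma samples provide the data-uncertainty estimate, so the posterior spread of the location reflects how noisy the picks actually are rather than an assumed quality factor.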

  20. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    reliable results and resolution. Based on the sediment layers of the peat bog together with the generated 3D surface model the paleoenvironment, the largest paleowater level can be reconstructed and we can estimate the dimension of the landslide which created the basin of the peat bog.

  1. 3D pore-network analysis and permeability estimation of deformation bands hosted in carbonate grainstones.

    NASA Astrophysics Data System (ADS)

    Zambrano, Miller; Tondi, Emanuele; Mancini, Lucia; Trias, F. Xavier; Arzilli, Fabio; Lanzafame, Gabriele; Aibibula, Nijiati

    2016-04-01

In porous rocks, strain is commonly localized in narrow Deformation Bands (DBs), where the petrophysical properties are significantly modified with respect to the pristine rock. As a consequence, DBs could have an important effect on the production and development of porous reservoirs, representing baffle zones or, in some cases, contributing to reservoir compartmentalization. Taking into consideration that the decrease of permeability within DBs is related to changes in the pore-network properties (porosity, connectivity) and pore morphology (size distribution, specific surface area), an accurate pore-network characterization is useful for understanding both the effect of deformation banding on the pore network and its influence upon fluid flow through the deformed rocks. In this work, a 3D characterization of the microstructure and texture of DBs hosted in porous carbonate grainstones was obtained at the Elettra laboratory (Trieste, Italy) by using two different techniques: phase-contrast synchrotron radiation computed microtomography (micro-CT) and microfocus X-ray micro-CT. These techniques are suitable for addressing quantitative analysis of the pore network and implementing Computational Fluid Dynamics (CFD) experiments in porous rocks. The evaluated samples correspond to grainstones highly affected by DBs exposed in the San Vito Lo Capo peninsula (Sicily, Italy), Favignana Island (Sicily, Italy) and the Majella Mountain (Abruzzo, Italy). For the analysis, the data were segmented into two main components, the porous and solid phases. The properties of interest are porosity, connectivity, and grain and/or pore textural properties, in order to differentiate the host rock and the DBs in different zones. The permeability of the DBs and the surrounding host rock was estimated by the implementation of CFD experiments; permeability results are validated by comparison with in situ measurements. In agreement with previous studies, the 3D image analysis and flow simulation indicate that DBs could constitute
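The first quantities extracted from a segmented micro-CT volume, porosity and a surface-area proxy, follow directly from the binary pore/solid volume. The volume below is synthetic random data, not real grainstone imagery.

```python
import numpy as np

# Synthetic segmented micro-CT volume: True = pore voxel.
rng = np.random.default_rng(3)
pores = rng.random((40, 40, 40)) < 0.25

porosity = pores.mean()                      # pore-volume fraction

# Crude specific-surface-area proxy: pore/solid voxel faces per unit volume.
faces = sum(np.count_nonzero(np.diff(pores.astype(np.int8), axis=a))
            for a in range(3))
ssa_proxy = faces / pores.size
print(round(porosity, 3), round(ssa_proxy, 3))
```

Connectivity and the CFD permeability estimates require substantially more machinery (labelling of connected pore components and a flow solver), but they start from the same segmented volume.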

  2. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP in mapping large outdoor environments, and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and to reduce the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.

  3. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands

    PubMed Central

    Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region’s population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use it to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151

  4. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands.

    PubMed

    Biljecki, Filip; Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region's population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population while avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use this comparison to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151
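The disaggregation method (1) described above reduces to splitting a known total in proportion to building volume. A minimal sketch in Python; the population figure and volumes are hypothetical, not taken from the Dutch dataset:

```python
import numpy as np

def disaggregate_population(muni_population, neighbourhood_volumes):
    """Split a municipality's population over its neighbourhoods in
    proportion to their total building volume (volume serving as a
    proxy for housing space, and hence for residents)."""
    volumes = np.asarray(neighbourhood_volumes, dtype=float)
    weights = volumes / volumes.sum()
    return muni_population * weights

# Hypothetical municipality: 10,000 residents, three neighbourhoods
# with building volumes in cubic metres.
est = disaggregate_population(10_000, [50_000, 30_000, 20_000])
```

The statistical modelling direction (2) is the inverse problem: fit a population-per-volume coefficient on a sample of small entities, then scale up.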

  5. 3D Model Uncertainty in Estimating the Inner Edge of the Habitable Zone

    NASA Astrophysics Data System (ADS)

    Abbot, D. S.; Yang, J.; Wolf, E. T.; Leconte, J.; Merlis, T. M.; Koll, D. D. B.; Goldblatt, C.; Ding, F.; Forget, F.; Toon, B.

    2015-12-01

    Accurate estimates of the width of the habitable zone are critical for determining which exoplanets are potentially habitable and estimating the frequency of Earth-like planets in the galaxy. Recently, the inner edge of the habitable zone has been calculated using 3D atmospheric general circulation models (GCMs) that include the effects of subsaturation and clouds, but different models obtain different results. We study potential sources of differences in five GCMs through a series of comparisons of radiative transfer, clouds, and dynamical cores for a rapidly rotating planet around the Sun and a synchronously rotating planet around an M star. We find that: (1) Cloud parameterization leads to the largest differences among the models; (2) Differences in water vapor longwave radiative transfer are moderate as long as the surface temperature is lower than 360 K; (3) Differences in shortwave absorption influence the atmospheric humidity of the synchronously rotating planet through a positive feedback; (4) Differences in the atmospheric dynamical core have a very small effect on the surface temperature; and (5) Rayleigh scattering leads to very small differences among models. These comparisons suggest that future model development should focus on clouds and water vapor radiative transfer.

  6. Estimating 3D movements from 2D observations using a continuous model of helical swimming.

    PubMed

    Gurarie, Eliezer; Grünbaum, Daniel; Nishizaki, Michael T

    2011-06-01

    Helical swimming is among the most common movement behaviors in a wide range of microorganisms, and these movements have direct impacts on distributions, aggregations, encounter rates with prey, and many other fundamental ecological processes. Microscopy and video technology enable the automated acquisition of large amounts of tracking data; however, these data are typically two-dimensional. The difficulty of quantifying the third movement component complicates understanding of the biomechanical causes and ecological consequences of helical swimming. We present a versatile continuous stochastic model, the correlated velocity helical movement (CVHM) model, that characterizes helical swimming with intrinsic randomness and autocorrelation. The model separates an organism's instantaneous velocity into a slowly varying advective component and a perpendicularly oriented rotation, with velocities, magnitude of stochasticity, and autocorrelation scales defined for both components. All but one of the parameters of the 3D model can be estimated directly from a two-dimensional projection of helical movement with no numerical fitting, making it computationally very efficient. As a case study, we estimate swimming parameters from videotaped trajectories of a toxic unicellular alga, Heterosigma akashiwo (Raphidophyceae). The algae were reared from five strains originally collected from locations in the Atlantic and Pacific Oceans, where they have caused Harmful Algal Blooms (HABs). We use the CVHM model to quantify cell-level and strain-level differences in all movement parameters, demonstrating the utility of the model for identifying strains that are difficult to distinguish by other means. PMID:20725795
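The deterministic skeleton of such a track, a slowly varying advective drift plus a perpendicular rotation, can be sketched as follows. This toy version omits the stochasticity and autocorrelation that define the full CVHM model, and all parameter values are illustrative:

```python
import numpy as np

def helical_track(v_adv=1.0, radius=0.5, omega=2 * np.pi, dt=0.01, n=1000):
    """Toy helical trajectory: constant advective velocity along z
    plus a perpendicular rotation of fixed radius and angular rate.
    (Deterministic skeleton only; the CVHM model adds autocorrelated
    randomness to both the advective and rotational components.)"""
    t = np.arange(n) * dt
    x = radius * np.cos(omega * t)
    y = radius * np.sin(omega * t)
    z = v_adv * t
    return np.column_stack([x, y, z])

track = helical_track()
xy_projection = track[:, :2]   # what a 2D video system would record
```

The key point of the paper is that nearly all of the 3D parameters remain recoverable from `xy_projection` alone.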

  7. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    PubMed

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for stereoscopic image quality assessment (IQA) aim to estimate the qualities of 3D images in a manner that agrees with human judgments. Modern stereoscopic IQA algorithms often apply 2D IQA algorithms on stereoscopic views, disparity maps, and/or cyclopean images, to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) to evaluate 3D image quality. 3D-MAD operates via two main stages, which estimate perceived quality degradation due to 1) distortion of the monocular views and 2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied on the two monocular views, and then the combined binocular quality is estimated via a weighted sum of the two estimates, where the weights are determined based on a block-based contrast measure. In the second stage, intermediate maps corresponding to the lightness distance and the pixel-based contrast are generated based on a multipathway contrast gain-control model. Then, the cyclopean view quality is estimated by measuring the statistical-difference-based features obtained from the reference stereopair and the distorted stereopair, respectively. Finally, the estimates obtained from the two stages are combined to yield an overall quality score of the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms. PMID:26186775
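The first-stage combination can be sketched as a contrast-weighted sum. The quality scores and contrast values below are hypothetical inputs; the paper computes MAD scores and per-block contrasts from the images themselves:

```python
def binocular_quality(q_left, q_right, c_left, c_right, eps=1e-8):
    """Contrast-weighted combination of two monocular quality
    estimates (the first stage sketched in 3D-MAD): the view with
    higher contrast receives more weight."""
    w_left = c_left / (c_left + c_right + eps)
    w_right = 1.0 - w_left
    return w_left * q_left + w_right * q_right

# Hypothetical monocular MAD scores and block-contrast measures.
q = binocular_quality(q_left=20.0, q_right=40.0, c_left=3.0, c_right=1.0)
```

With three times the contrast in the left view, the combined score sits three quarters of the way toward the left estimate.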

  8. Numerical estimation of transport properties of cementitious materials using 3D digital images

    NASA Astrophysics Data System (ADS)

    Ukrainczyk, N.; Koenders, E. A. B.; van Breugel, K.

    2013-07-01

    A multi-scale characterisation of the transport process within a cementitious microstructure poses a great challenge in terms of modelling and schematization. In this paper a numerical method is proposed to mitigate the resolution problems in numerical methods for calculating effective transport properties of porous materials using 3D digital images. The method up-scales sub-voxel information from the fractional occupancy level of the interface voxels, i.e. voxels containing a phase boundary, to increase the accuracy of the pore schematization and hence the accuracy of the numerical transport calculation as well. The numerical identification of the sub-voxels, i.e. of their level of occupancy by each phase, is obtained by increasing the pre-processing resolution. The proposed method is presented and employed for hydrated cement paste microstructures obtained from Hymostruc, a numerical model for cement hydration and microstructure simulation. The new method significantly reduces computational effort, is relatively easy to implement, and improves the accuracy of the estimation of the effective transport property.
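The fractional-occupancy idea, resolving interface voxels by averaging a finer-resolution phase image, can be sketched in 2D (the 3D case averages over factor³ sub-voxels; the input image here is a made-up toy):

```python
import numpy as np

def fractional_occupancy(binary_hi, factor):
    """Down-sample a high-resolution binary phase image to a coarse
    grid whose voxels carry the fractional occupancy of the phase.
    Interior voxels stay 0 or 1; interface voxels get intermediate
    values that encode where the phase boundary cuts through them."""
    h, w = binary_hi.shape
    blocks = binary_hi.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

hi = np.zeros((4, 4))
hi[:, :3] = 1.0          # phase boundary runs through the right-hand blocks
occ = fractional_occupancy(hi, 2)
```

The coarse voxels on the boundary come out at 0.5 rather than being forced to 0 or 1, which is exactly the sub-voxel information the transport solver then exploits.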

  9. Angle estimation of simultaneous orthogonal rotations from 3D gyroscope measurements.

    PubMed

    Stančin, Sara; Tomažič, Sašo

    2011-01-01

    A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation.
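The SORA construction can be sketched numerically: the measured angular-velocity vector times the time step is the SORA vector, whose norm and direction give the angle and axis of the equivalent single rotation (valid while the rotation axis is constant over the step). Rodrigues' formula then yields the rotation matrix in one step, with no iterative integration:

```python
import numpy as np

def sora_rotation(omega, dt):
    """Rotation matrix for the single rotation equivalent to three
    simultaneous orthogonal rotations measured by a 3D gyroscope.
    Assumes the angular velocity (or at least its direction) is
    constant over the interval dt."""
    phi = np.asarray(omega, dtype=float) * dt   # SORA vector
    angle = np.linalg.norm(phi)                 # equivalent rotation angle
    if angle == 0.0:
        return np.eye(3)
    kx, ky, kz = phi / angle                    # unit rotation axis
    K = np.array([[0.0, -kz, ky],
                  [kz, 0.0, -kx],
                  [-ky, kx, 0.0]])
    # Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# 90 degrees about z in a single step (omega in rad/s, dt in s)
R = sora_rotation([0.0, 0.0, np.pi / 2], 1.0)
```

Applying the three component rotations sequentially in any order would give a different (and incorrect) orientation; the SORA vector sidesteps that non-commutativity.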

  10. Relative Scale Estimation and 3D Registration of Multi-Modal Geometry Using Growing Least Squares.

    PubMed

    Mellado, Nicolas; Dellepiane, Matteo; Scopigno, Roberto

    2016-09-01

    The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them. However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g., sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e., data acquired by devices with different properties (e.g., resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, is robust to noise and to variations in sampling density and detail, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points. We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully on a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage.

  11. Angle Estimation of Simultaneous Orthogonal Rotations from 3D Gyroscope Measurements

    PubMed Central

    Stančin, Sara; Tomažič, Sašo

    2011-01-01

    A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation. PMID:22164090

  12. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, treating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it leads to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  13. High resolution diameter estimation of microthin wires by a novel 3D diffraction model

    NASA Astrophysics Data System (ADS)

    Vyas, Khushi; Lolla, Kameswara Rao

    2011-08-01

    Micro-thin wires are of significant importance to academia, research laboratories and industries engaged in the micro-fabrication of products in diverse fields such as micromechanics, bio-instrumentation and optoelectronics. Critical dimension metrology of such wires often demands diameter estimation with tight tolerances. Amongst other measurement techniques, optical diffractometry under the Fraunhofer approximation has emerged over the years as a nondestructive, robust and precise technique for on-line diameter estimation of thin wires. However, existing Fraunhofer models invariably result in experimental overestimation of wire diameter, leading to unacceptable error performance, particularly for diameters below 50 μm. In this paper, a novel diffraction model based on geometric theory is proposed and demonstrated to theoretically quantify this diameter overestimation. The proposed model utilizes hitherto unused pathways for the two lateral rays that contribute to the first diffraction minimum. Based on the 3-D geometry of the suggested model, a new 'diffraction formulation' is proposed. The theoretical analysis reveals the following: for a diffraction experiment, the actual diameter of the diffracting wire is a function of four parameters: source wavelength 'λ', axial distance 'z', diffraction angle corresponding to the first diffraction minimum 'θd' and a newly defined characteristic parameter 'm'. The analysis further reveals that the proposed characteristic parameter 'm' varies non-linearly with diameter and depends only on the experimentally measured diffraction angle 'θd'. Based on the proposed model, the communication reports, for the first time, a novel diameter-inversion procedure which not only corrects for the overestimation but also facilitates wire diameter inversion with high resolution. Micro-thin metallic wires with diameters spanning the range 1-50 μm are examined. Experimental results are obtained that
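The classical Fraunhofer baseline that the proposed model corrects can be sketched as follows: by Babinet's principle a wire of diameter d produces its first diffraction minimum at sin θd = λ/d, so inverting that relation gives the (over)estimate the paper starts from. The laser wavelength and angle below are illustrative values, not measurements from the paper:

```python
import math

def fraunhofer_diameter(wavelength, theta_first_min):
    """Classical Fraunhofer estimate of wire diameter from the angle
    of the first diffraction minimum: sin(theta_d) = lambda / d.
    The paper shows this overestimates small diameters and corrects
    it with a 3D geometric model (not reproduced here)."""
    return wavelength / math.sin(theta_first_min)

# 632.8 nm HeNe laser, first minimum at ~0.0127 rad -> d near 50 um
d = fraunhofer_diameter(632.8e-9, 0.0127)
```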

  14. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  15. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  17. The spatial accuracy of cellular dose estimates obtained from 3D reconstructed serial tissue autoradiographs.

    PubMed

    Humm, J L; Macklis, R M; Lu, X Q; Yang, Y; Bump, K; Beresford, B; Chin, L M

    1995-01-01

    In order to better predict and understand the effects of radiopharmaceuticals used for therapy, it is necessary to determine more accurately the radiation absorbed dose to cells in tissue. Using thin-section autoradiography, the spatial distribution of sources relative to the cells can be obtained from a single section with micrometre resolution. By collecting and analysing serial sections, the 3D microscopic distribution of radionuclide relative to the cellular histology, and therefore the dose rate distribution, can be established. In this paper, a method of 3D reconstruction of serial sections is proposed, and measurements are reported of (i) the accuracy and reproducibility of quantitative autoradiography and (ii) the spatial precision with which tissue features from one section can be related to adjacent sections. Uncertainties in the activity determination for the specimen result from activity losses during tissue processing (4-11%), and the variation of grain count per unit activity between batches of serial sections (6-25%). Correlation of the section activity to grain count densities showed deviations ranging from 6-34%. The spatial alignment uncertainties were assessed using nylon fibre fiduciary markers incorporated into the tissue block, and compared to those for alignment based on internal tissue landmarks. The standard deviation for the variation in nylon fibre fiduciary alignment was measured to be 41 μm cm⁻¹, compared to 69 μm cm⁻¹ when internal tissue histology landmarks were used. In addition, tissue shrinkage during histological processing of up to 10% was observed. The implications of these measured activity and spatial distribution uncertainties upon the estimate of cellular dose rate distribution depend upon the range of the radiation emissions. For long-range beta particles, uncertainties in both the activity and spatial distribution translate linearly to an uncertainty in dose rate of < 15%. For short-range emitters (< 100

  18. Estimation of 3D cardiac deformation using spatio-temporal elastic registration of non-scanconverted ultrasound data

    NASA Astrophysics Data System (ADS)

    Elen, An; Loeckx, Dirk; Choi, Hon Fai; Gao, Hang; Claus, Piet; Maes, Frederik; Suetens, Paul; D'hooge, Jan

    2008-03-01

    Current ultrasound methods for measuring myocardial strain are often limited to measurements in one or two dimensions. Spatio-temporal elastic registration of 3D cardiac ultrasound data can however be used to estimate the 3D motion and full 3D strain tensor. In this work, the spatio-temporal elastic registration method was validated for both non-scanconverted and scanconverted images. This was done using simulated 3D pyramidal ultrasound data sets based on a thick-walled deforming ellipsoid and an adapted convolution model. A B-spline based frame-to-frame elastic registration method was applied to both the scanconverted and non-scanconverted data sets and the accuracy of the resulting deformation fields was quantified. The mean accuracy of the estimated displacement was very similar for the scanconverted and non-scanconverted data sets. Thus, it was shown that 3D elastic registration to estimate the cardiac deformation from ultrasound images can be performed on non-scanconverted images, but that avoiding the scanconversion step does not significantly improve the results of the displacement estimation.

  19. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
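A bare-bones version of the core alignment step, nearest-neighbour matching followed by an SVD-based rigid-transform solve, iterated until convergence, might look like the sketch below. The geometric-primitive pre-processing that gives ICP its robust starting pose in the paper is omitted, and a brute-force nearest-neighbour search stands in for an accelerated one:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B,
    assuming row i of A corresponds to row i of B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(template, scene, iters=20):
    """Bare-bones Iterative Closest Point: match every template point
    to its nearest scene point, solve the rigid transform, apply it,
    repeat. Converges only from a reasonable initial pose."""
    src = template.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((src[:, None, :] - scene[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        R, t = best_fit_transform(src, scene[idx])
        src = src @ R.T + t
    return src
```

In a production system the brute-force search would be replaced by a k-d tree, which is what makes ICP practical on dense structured-light scans.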

  20. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. We utilized 25 volumetric images of rocks to propose two mathematical formulas, which aim to approximate the average and standard deviation of the coordination number in 3-D pore networks. The formulas are then applied to five independent test samples to evaluate their reliability. Finally, pore-network flow modeling is used to find the error in absolute permeability prediction using estimated and measured coordination numbers. Results show that 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the porous space with a determination coefficient of about 0.85, which seems acceptable considering the variety of the studied samples.
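Once a watershed segmentation has labeled the pores, the coordination number of each pore is simply the count of distinct neighbouring labels it touches. A naive 2-D sketch (the labeled image and 4-connectivity choice are illustrative; the paper works with watershed labels of real cross-sections):

```python
import numpy as np

def coordination_numbers(labels):
    """Coordination number per pore from a labeled 2-D pore image:
    for each non-zero label, count the distinct other labels adjacent
    to it (4-connectivity; 0 is treated as solid/background)."""
    contacts = {}
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            a = labels[i, j]
            if a == 0:
                continue
            for di, dj in ((1, 0), (0, 1)):       # right and down neighbours
                if i + di < h and j + dj < w:
                    b = labels[i + di, j + dj]
                    if b != 0 and b != a:
                        contacts.setdefault(a, set()).add(b)
                        contacts.setdefault(b, set()).add(a)
    return {k: len(v) for k, v in contacts.items()}

# Toy labeled image with three touching pores
labels = np.array([[1, 1, 2],
                   [1, 3, 2],
                   [3, 3, 2]])
cn = coordination_numbers(labels)
```

The paper's contribution is the statistical step after this: correlating the 2-D distribution of such counts with the true 3-D coordination-number distribution.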

  1. Estimation of uncertainties in geological 3D raster layer models as integral part of modelling procedures

    NASA Astrophysics Data System (ADS)

    Maljers, Denise; den Dulk, Maryke; ten Veen, Johan; Hummelman, Jan; Gunnink, Jan; van Gessel, Serge

    2016-04-01

    The Geological Survey of the Netherlands (GSN) develops and maintains subsurface models with regional to national coverage. These models are paramount for petroleum exploration in conventional reservoirs, for understanding the distribution of unconventional reservoirs, for mapping geothermal aquifers, for assessing carbon storage potential, and for groundwater and aggregate resources. Depending on the application domain, these models differ in depth range, scale, data used, modelling software and modelling technique. Depth uncertainty information is available for the Geological Survey's 3D raster layer models DGM Deep and DGM Shallow. These models cover different depth intervals and are constructed using different data types and different modelling software. Quantifying the uncertainty of geological models that are constructed using multiple data types as well as geological expert knowledge is not straightforward. Examples of geological expert knowledge are trend surfaces displaying the regional thickness trends of basin fills, or steering points that are used to guide the pinching out of geological formations and the modelling of the complex stratal geometries associated with salt domes and salt ridges. This added a priori knowledge, combined with the assumptions underlying kriging (normality and second-order stationarity), makes the kriging standard error an incorrect measure of uncertainty for our geological models. Therefore the methods described below were developed. For the DGM Deep model a workflow has been developed to assess uncertainty by combining precision (giving information on the reproducibility of the model results) and accuracy (reflecting the proximity of estimates to the true value). This was achieved by centering the resulting standard deviations around well-tied depth surfaces. The standard deviations are subsequently modified by three other possible error sources: data error, structural complexity and velocity model error. The uncertainty workflow

  2. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114
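The core of a 3D utilization distribution is a kernel density estimate over x, y, z telemetry fixes. A minimal sketch with a fixed isotropic Gaussian kernel follows; the paper's movement-based kernels are more sophisticated, and the bandwidth and toy data here are assumed:

```python
import numpy as np

def kde3d(points, query, bandwidth=1.0):
    """Evaluate a simple 3-D Gaussian kernel density estimate at `query`
    locations.  A fixed bandwidth stands in for the movement-based
    kernels of the paper (illustrative simplification)."""
    d = query[:, None, :] - points[None, :, :]          # (nq, np, 3)
    sq = np.sum(d * d, axis=2) / (2.0 * bandwidth**2)
    norm = (2.0 * np.pi * bandwidth**2) ** 1.5
    return np.exp(-sq).sum(axis=1) / (len(points) * norm)

rng = np.random.default_rng(0)
telemetry = rng.normal(size=(200, 3))                   # toy x, y, z fixes
grid = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
density = kde3d(telemetry, grid)
```

Thresholding such a density on a 3D voxel grid (e.g., at the 95% isopleth) gives the volumetric home range that the visualization tools render.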

  3. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  4. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  5. A Hybrid Antenna Array Design for 3-D Direction of Arrival Estimation

    PubMed Central

    Saqib, Najam-Us; Khan, Imdad

    2015-01-01

    A 3-D beam-scanning antenna array design is proposed that provides full 3-D spherical coverage and is also suitable for various radar and body-worn devices in Body Area Network applications. The Array Factor (AF) of the proposed antenna is derived, and parameters such as directivity, Half-Power Beam Width (HPBW) and Side Lobe Level (SLL) are calculated while varying the size of the proposed antenna array. Simulations were carried out in MATLAB 2012b. The radiators are considered isotropic, and hence mutual coupling effects are ignored. The proposed array shows a considerable improvement over the existing cylindrical and coaxial cylindrical arrays in terms of 3-D scanning, size, directivity, HPBW and SLL. PMID:25790103
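The array factor of a set of isotropic radiators can be sketched generically as the coherent sum of element phases relative to a steering direction. The ring geometry and steering angles below are illustrative and do not reproduce the paper's hybrid array:

```python
import numpy as np

def array_factor(element_xyz, theta, phi, theta0, phi0, k=2 * np.pi):
    """|AF| of isotropic radiators at `element_xyz` (in wavelengths),
    phase-steered to (theta0, phi0).  Generic textbook formula, used
    here only to illustrate the quantity the paper derives."""
    def unit(t, p):
        return np.array([np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p),
                         np.cos(t)])
    u, u0 = unit(theta, phi), unit(theta0, phi0)
    phase = k * element_xyz @ (u - u0)
    return np.abs(np.exp(1j * phase).sum())

# 8-element circular ring of radius 0.5 wavelengths in the xy-plane
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = 0.5 * np.column_stack([np.cos(ang), np.sin(ang), np.zeros(8)])
peak = array_factor(ring, np.pi / 2, 0.0, np.pi / 2, 0.0)  # on-steer
```

In the steered direction all element phases cancel, so |AF| equals the element count; sampling theta and phi over the sphere yields the HPBW and SLL figures the paper tabulates.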

  6. A hybrid antenna array design for 3-d direction of arrival estimation.

    PubMed

    Saqib, Najam-Us; Khan, Imdad

    2015-01-01

    A 3-D beam-scanning antenna array design is proposed that provides full 3-D spherical coverage and is also suitable for various radar and body-worn devices in Body Area Network applications. The Array Factor (AF) of the proposed antenna is derived, and parameters such as directivity, Half-Power Beam Width (HPBW) and Side Lobe Level (SLL) are calculated while varying the size of the proposed antenna array. Simulations were carried out in MATLAB 2012b. The radiators are considered isotropic, and hence mutual coupling effects are ignored. The proposed array shows a considerable improvement over the existing cylindrical and coaxial cylindrical arrays in terms of 3-D scanning, size, directivity, HPBW and SLL.

  7. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging.

    PubMed

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal.
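The pliODF construction treats each measured fiber direction as a delta on the sphere and projects the resulting distribution onto spherical harmonics, so each coefficient is simply the mean of a basis function over the directions in a super-voxel. A sketch with an explicit real basis up to degree 2 follows (the toy super-voxel is assumed; the actual pipeline uses much higher band limits and supercomputing resources):

```python
import numpy as np

def sh_basis(theta, phi):
    """Real spherical harmonics of degree 0 and 2.  Even degrees suit
    the antipodally symmetric fiber orientations of 3D-PLI."""
    ct, st = np.cos(theta), np.sin(theta)
    return np.stack([
        np.full_like(ct, 0.5 / np.sqrt(np.pi)),              # Y_0,0
        np.sqrt(15 / (16 * np.pi)) * st**2 * np.sin(2 * phi),  # Y_2,-2
        np.sqrt(15 / (4 * np.pi)) * st * ct * np.sin(phi),     # Y_2,-1
        np.sqrt(5 / (16 * np.pi)) * (3 * ct**2 - 1),           # Y_2,0
        np.sqrt(15 / (4 * np.pi)) * st * ct * np.cos(phi),     # Y_2,1
        np.sqrt(15 / (16 * np.pi)) * st**2 * np.cos(2 * phi),  # Y_2,2
    ])

def odf_coeffs(theta, phi):
    """Expansion coefficients of a super-voxel's orientation
    distribution: the mean of each basis function over the measured
    directions (a sketch of the pliODF idea, not the authors' code)."""
    return sh_basis(theta, phi).mean(axis=1)

# toy super-voxel: 1000 fiber vectors tightly clustered around the z-axis
rng = np.random.default_rng(1)
theta = np.abs(rng.normal(0.0, 0.05, 1000))
phi = rng.uniform(0, 2 * np.pi, 1000)
coeffs = odf_coeffs(theta, phi)
```

For a z-aligned fiber population the Y_2,0 coefficient dominates, which is the low-order signature that makes the representation comparable with diffusion-MRI ODFs.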

  8. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging

    PubMed Central

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  10. Vegetation Height Estimation near Power Transmission Poles via Satellite Stereo Images Using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts, affecting electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time-consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images acquired by the Pleiades satellites. The 3D depth of vegetation near the power transmission lines is measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometres. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within the 100 km² area. We compare the results on Pleiades satellite stereo images using dynamic programming and graph-cut algorithms, thereby comparing both the imaging sensors and the depth-estimation algorithms. Our results show that the graph-cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
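Both compared methods assign each pixel a disparity, from which depth (and hence vegetation height) follows. A minimal stand-in is per-pixel sum-of-squared-differences block matching; the synthetic image pair below is assumed, and real pipelines use dynamic programming or graph cuts to regularize exactly this cost:

```python
import numpy as np

def disparity_ssd(left, right, max_disp, win=3):
    """Per-pixel disparity by exhaustive SSD block matching along
    scanlines -- the unregularized cost that DP and graph-cut stereo
    methods optimize (illustrative sketch only)."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y-pad:y+pad+1, x-pad:x+pad+1]
            costs = [((patch - right[y-pad:y+pad+1,
                                     x-d-pad:x-d+pad+1])**2).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# synthetic rectified pair: right image is the left shifted by 2 px,
# i.e. a constant true disparity of 2
rng = np.random.default_rng(3)
left = rng.random((10, 40))
right = np.roll(left, -2, axis=1)
disp = disparity_ssd(left, right, max_disp=4)
```

Disparity converts to metric height through the sensor baseline and focal geometry, which is where the two Pleiades viewing angles enter.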

  11. Estimating Hydraulic Conductivities in a Fractured Shale Formation from Pressure Pulse Testing and 3d Modeling

    NASA Astrophysics Data System (ADS)

    Courbet, C.; DICK, P.; Lefevre, M.; Wittebroodt, C.; Matray, J.; Barnichon, J.

    2013-12-01

    logging, porosity varies by a factor of 2.5 whilst hydraulic conductivity varies by 2 to 3 orders of magnitude. In addition, a 3D numerical reconstruction of the internal structure of the fault zone, inferred from borehole imagery, has been built to estimate the permeability tensor variations. First results indicate that hydraulic conductivity values calculated for this structure are 2 to 3 orders of magnitude above those measured in situ. Such high values are due to the imaging method, which only takes into account open fractures of simple geometry (sine waves). Even though improvements are needed to handle more complex geometries, the outcomes are promising, as the fault damage zone clearly appears as the highest-permeability zone, where stress analyses show that the actual stress state may favor tensile reopening of fractures. Using shale samples cored from the different internal structures of the fault zone, we now aim to characterize advection and diffusion using laboratory petrophysical tests combined with radial and through-diffusion experiments.

  12. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy, using a US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate a US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross-correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with the known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm), but simple regularisation methods could be used to improve RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating, with displacement thresholds of 2 mm and 5 mm exhibiting an RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach, which also accounts for potential prostate rotations, could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
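The normalised cross-correlation block matching at the heart of speckle tracking can be sketched as an exhaustive search for one template; the toy data below are assumed and this is not the authors' implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def track(template, frame, search=5):
    """Best integer displacement of `template` inside `frame` by
    exhaustive NCC search -- the single-template analogue of the block
    matching used for US speckle tracking (illustrative sketch)."""
    th, tw = template.shape
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(search + 1):
        for dx in range(search + 1):
            score = ncc(template, frame[dy:dy + th, dx:dx + tw])
            if score > best:
                best, best_dxy = score, (dy, dx)
    return best_dxy, best

# toy speckle frame; the template is a patch cut from offset (3, 4)
rng = np.random.default_rng(4)
frame = rng.random((30, 30))
template = frame[3:13, 4:14].copy()
shift, score = track(template, frame)
```

Running this per block over a 3D volume, and interpolating around the integer peak, yields the sub-millimetre displacement estimates reported above.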

  13. Estimating elastic moduli of rocks from thin sections: Digital rock study of 3D properties from 2D images

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Mavko, Gary

    2016-03-01

    Estimation of elastic rock moduli using 2D plane-strain computations on thin sections has several numerical and practical advantages over using 3D rock images, including faster computation, smaller memory requirements, and the ready availability of inexpensive thin sections. These advantages, however, must be weighed against the accuracy with which 3D rock properties can be estimated from thin sections. We present a new method for predicting the elastic properties of natural rocks using thin sections. Our method is based on a simple power-law transform that correlates computed 2D thin-section moduli with the corresponding 3D rock moduli. The validity of this transform is established using a dataset comprising FEM-computed elastic moduli of rock samples from various geologic formations, including Fontainebleau sandstone, Berea sandstone, bituminous sand, and Grossmont carbonate. We note that a power-law coefficient between 0.4 and 0.6 captures the 2D-to-3D moduli transformation for all rocks considered in this study. We also find that reliable estimates of P-wave (Vp) and S-wave (Vs) velocity trends can be obtained using 2D thin sections.
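A power-law transform of this kind can be sketched as follows. The mineral-normalised form and the choice b = 0.5 are assumptions made purely for illustration: the abstract only states that the coefficient lies roughly between 0.4 and 0.6, not the exact functional form:

```python
def modulus_3d(m2d, m_mineral, b=0.5):
    """Map a plane-strain 2-D modulus to a 3-D estimate via a power law
    on the mineral-normalised modulus.  The normalisation by the solid
    mineral modulus and b = 0.5 are illustrative assumptions; the paper
    constrains b to roughly 0.4-0.6 but this is not its exact formula."""
    return m_mineral * (m2d / m_mineral) ** b

# toy numbers (GPa): a 2-D computed modulus of 9 with a 36 GPa mineral
est = modulus_3d(9.0, 36.0)
```

Because 0 < b < 1, the transform raises moduli toward the mineral value, consistent with 2D plane-strain sections underestimating the stiffness of the 3D frame.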

  14. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover an area of 400 m². Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g., tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase-shift measuring principle, which provides an accurate geometric basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. 
The goal

  15. A Bayesian model to estimate the true 3-D shadowing correction in sonic anemometers

    NASA Astrophysics Data System (ADS)

    Frank, J. M.; Massman, W. J.; Ewers, B. E.

    2015-12-01

    Sonic anemometers are the principal instruments used in micrometeorological studies of turbulence and ecosystem fluxes. Recent studies have shown that the most common designs underestimate vertical wind measurements because they lack a correction for transducer and structural shadowing; there is no consensus describing a true correction. We introduce a novel Bayesian analysis with the potential to resolve the three-dimensional (3-D) correction by optimizing differences between anemometers mounted simultaneously in vertical and horizontal orientations. The analysis creates a geodesic grid around the sonic anemometer, defines a state variable for the 3-D correction at each grid point, and assigns each a prior distribution based on the literature with ±10% uncertainty. We use the Markov chain Monte Carlo (MCMC) method to update and apply the 3-D correction to a dataset of 20-Hz sonic anemometer measurements, calculate five-minute standard deviations of the Cartesian wind components, and compare these statistics between the vertical and horizontal anemometers. We present a preliminary analysis of the CSAT3 anemometer using 642 grid points (±4.5° resolution) from 423 five-minute periods (8,964,000 samples) collected during field experiments in 2011 and 2013. The 20-Hz data were not equally distributed around the grid; half of the samples occurred in just 8% of the grid points. For populous grid points (weighted by the abundance of samples) the average correction increased from prior to posterior (+5.4±10.0% to +9.1±9.5%), while for sparsely sampled grid points there was minimal change (+6.4±10.0% versus +6.6±9.8%), demonstrating that, with a sufficient number of samples, the model can determine that the true correction is ~67% higher than proposed in recent literature. Future adaptations will increase the grid resolution and sample size to reduce the uncertainty in the posterior distributions and more precisely quantify the 3-D correction.
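The prior-to-posterior updating can be sketched with a one-parameter random-walk Metropolis sampler. The prior below mirrors the quoted ±10% uncertainty around a literature correction, but the likelihood and target value are toy stand-ins for the real vertical-versus-horizontal comparison over a geodesic grid:

```python
import math
import random

def metropolis(logpost, x0, steps=6000, scale=0.02, seed=0):
    """Random-walk Metropolis sampler: a one-parameter stand-in for the
    study's MCMC over a geodesic grid of corrections (the grid and the
    wind data are not reproduced here)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = logpost(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp          # accept the proposal
        chain.append(x)
    return chain

def logpost(c):
    # prior: correction factor c ~ N(1.06, 0.10), i.e. +6% with +/-10%
    prior = -0.5 * ((c - 1.06) / 0.10) ** 2
    # toy likelihood: the vertical/horizontal comparison pins c near 1.09
    like = -0.5 * ((c - 1.09) / 0.01) ** 2
    return prior + like

chain = metropolis(logpost, 1.0)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

With an informative likelihood the posterior concentrates near the data-implied value, which is the mechanism behind the reported shift from +5.4% to +9.1% at well-sampled grid points.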

  16. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system from two silhouettes. Firstly, we synthesize a deformable body model from a 3D human shape database consisting of a hundred whole-body mesh models. The mesh models are homologous: all share the same topology and the same number of vertices. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes are achieved by reconstructing the skeleton structure from the model's implanted joints. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model's silhouettes with the input silhouettes, using only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using a stochastic, derivative-free non-linear optimization method, CMA-ES.
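The ASM construction, PCA over homologous vertex coordinates, can be sketched directly: stack each mesh as a flat vector, take the mean, and keep the top singular vectors as deformation modes. The random meshes below are toy stand-ins for a scanned whole-body database:

```python
import numpy as np

def build_asm(meshes, n_modes=2):
    """PCA over homologous vertex coordinates: returns the mean shape,
    the top deformation modes and their standard deviations (an
    Active-Shape-Model sketch on toy data)."""
    X = meshes.reshape(len(meshes), -1)        # (n_bodies, 3 * n_vertices)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes], S[:n_modes] / np.sqrt(len(meshes) - 1)

def synthesize(mean, modes, sigmas, params):
    """New body shape from mode parameters given in standard deviations."""
    return (mean + np.asarray(params) @ (modes * sigmas[:, None])).reshape(-1, 3)

rng = np.random.default_rng(2)
bodies = rng.normal(size=(100, 50, 3))         # 100 toy meshes, 50 vertices
mean, modes, sig = build_asm(bodies)
shape = synthesize(mean, modes, sig, [1.0, -0.5])
```

Silhouette fitting then reduces to searching this low-dimensional parameter space, which is what makes a derivative-free optimizer such as CMA-ES practical.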

  17. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    NASA Astrophysics Data System (ADS)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of the fibers, which is a consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to the fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here measured directly on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters such as porosity, pore and fiber size distributions, and the local fiber orientation distribution are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest for relating structural parameters, such as the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.
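The simplest 1D analytical models of the kind discussed above are the Wiener series and parallel bounds, which bracket the effective conductivity of any two-phase arrangement. A sketch follows; the hemp-fiber conductivity used is an assumed, indicative value, not a measurement from the paper:

```python
def k_parallel(phi, k_fluid, k_solid):
    """Wiener upper bound: conduction along the layering
    (porosity-weighted arithmetic mean of the phase conductivities)."""
    return phi * k_fluid + (1.0 - phi) * k_solid

def k_series(phi, k_fluid, k_solid):
    """Wiener lower bound: conduction across the layering
    (porosity-weighted harmonic mean of the phase conductivities)."""
    return 1.0 / (phi / k_fluid + (1.0 - phi) / k_solid)

# indicative values: porosity 0.9, air 0.026 W/m/K, hemp fiber ~0.45 W/m/K
# (the fiber conductivity is an assumption for illustration)
lo = k_series(0.9, 0.026, 0.45)
hi = k_parallel(0.9, 0.026, 0.45)
```

The wide gap between the two bounds at high porosity illustrates why isotropic 1D mixing laws cannot capture the direction-dependent conductivity that the fiber orientation distribution controls.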

  18. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing.

  19. Atmospheric Nitrogen Trifluoride: Optimized emission estimates using 2-D and 3-D Chemical Transport Models from 1973-2008

    NASA Astrophysics Data System (ADS)

    Ivy, D. J.; Rigby, M. L.; Prinn, R. G.; Muhle, J.; Weiss, R. F.

    2009-12-01

    We present optimized annual global emissions from 1973-2008 of nitrogen trifluoride (NF3), a powerful greenhouse gas which is not currently regulated by the Kyoto Protocol. In the past few decades, NF3 production has dramatically increased due to its usage in the semiconductor industry. Emissions were estimated through the 'pulse-method' discrete Kalman filter using both a simple, flexible 2-D 12-box model used in the Advanced Global Atmospheric Gases Experiment (AGAGE) network and the Model for Ozone and Related Tracers (MOZART v4.5), a full 3-D atmospheric chemistry model. No official audited reports of industrial NF3 emissions are available, and with limited information on production, a priori emissions were estimated using both a bottom-up and top-down approach with two different spatial patterns based on semiconductor perfluorocarbon (PFC) emissions from the Emission Database for Global Atmospheric Research (EDGAR v3.2) and Semiconductor Industry Association sales information. Both spatial patterns used in the models gave consistent results, showing the robustness of the estimated global emissions. Differences between estimates using the 2-D and 3-D models can be attributed to transport rates and resolution differences. Additionally, new NF3 industry production and market information is presented. Emission estimates from both the 2-D and 3-D models suggest that either the assumed industry release rate of NF3 or industry production information is still underestimated.
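The discrete Kalman filter idea can be sketched for a scalar state and a one-box atmosphere with first-order loss. The box model, lifetime, unit conversion and noise levels below are illustrative simplifications of the AGAGE 12-box and MOZART configurations, not the study's actual inversion:

```python
import numpy as np

def kalman_emissions(obs, lifetime=500.0, conv=0.1, sigma_obs=0.05, sigma_e=0.5):
    """Scalar Kalman filter recovering annual emissions from yearly
    mixing-ratio observations of a one-box atmosphere (first-order loss,
    lifetime in years).  `conv` converts emission units to mixing ratio
    per year; all numbers are toy values for illustration."""
    decay = np.exp(-1.0 / lifetime)
    e, p = 0.0, 100.0            # emission estimate and its variance
    c = 0.0                      # modelled mixing ratio
    history = []
    for cobs in obs:
        p += sigma_e**2                          # random-walk forecast
        c_pred = c * decay + e * conv            # box model, one year ahead
        gain = p * conv / (conv * p * conv + sigma_obs**2)
        e += gain * (cobs - c_pred)              # update emission estimate
        p *= 1.0 - gain * conv
        c = c * decay + e * conv                 # advance with updated state
        history.append(e)
    return history

# synthetic truth: constant emissions of 5 units/yr, noise-free observations
true_e, conv, decay = 5.0, 0.1, np.exp(-1.0 / 500.0)
c, obs = 0.0, []
for _ in range(30):
    c = c * decay + true_e * conv
    obs.append(c)
est = kalman_emissions(obs)
```

The filter converges to the true emission rate within a few annual updates, which is the behaviour the pulse method exploits when recovering year-by-year NF3 emissions from the measured atmospheric trend.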

  20. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System.

    PubMed

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small; the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem was highest with a 28-mm lens at the first and third camera positions, and these configurations also yielded the largest number of reconstructed fine-scale 3D surface models of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  1. Volume estimation of cerebral aneurysms from biplane DSA: a comparison with measurements on 3D rotational angiography data

    NASA Astrophysics Data System (ADS)

    Olivan Bescos, Javier; Slob, Marian; Sluzewski, Menno; van Rooij, Willem J.; Slump, Cornelis H.

    2003-05-01

    A cerebral aneurysm is a persistent localized dilatation of the wall of a cerebral vessel. One of the techniques applied to treat cerebral aneurysms is Guglielmi detachable coil (GDC) embolization. The goal of this technique is to embolize the aneurysm with a mesh of platinum coils to reduce the risk of aneurysm rupture. However, due to the blood pressure, it is possible for the platinum wire to deform; in this case, re-embolization of the aneurysm is necessary. The aim of this project is to develop a computer program to estimate the volume of cerebral aneurysms from archived laser hard copies of biplane digital subtraction angiography (DSA) images. Our goal is to determine the influence of the packing percentage, i.e., the ratio between the volume of the coil mesh and the volume of the aneurysm, on the stability of the coil mesh over time. The method we apply to estimate the volume of the cerebral aneurysms is based on the generation of a 3-D geometrical model of the aneurysm from two biplane DSA images. This 3-D model can be seen as a stack of 2-D ellipses, and the volume of the aneurysm is obtained by numerical integration of this stack. The program was validated using balloons filled with contrast agent. The availability of 3-D data for some of the aneurysms enabled a comparison of the results of this method with techniques based on 3-D data.
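The stack-of-ellipses model integrates the elliptical cross-sectional areas over the slice direction, and it validates nicely against a sphere. This is a sketch of the geometry described above, not the original program:

```python
import numpy as np

def stack_volume(a, b, dz):
    """Volume of an aneurysm modelled as a stack of 2-D ellipses: the
    two biplane DSA projections give each slice's semi-axes a[i] and
    b[i]; the slice areas pi*a*b are integrated over z with the
    trapezoidal rule (illustrative sketch of the described model)."""
    areas = np.pi * np.asarray(a) * np.asarray(b)
    return 0.5 * (areas[:-1] + areas[1:]).sum() * dz

# sanity check against a unit sphere: semi-axes sqrt(1 - z^2) in both
# projections, expected volume 4*pi/3
z = np.linspace(-1.0, 1.0, 2001)
r = np.sqrt(1.0 - z**2)
vol = stack_volume(r, r, z[1] - z[0])
```

With the aneurysm and coil volumes in hand, the packing percentage follows as their ratio, the quantity whose influence on coil-mesh stability the project investigates.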

  2. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System.

    PubMed

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-06-14

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small; the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem was highest with a 28-mm lens at the first and third camera positions, and these configurations also yielded the largest number of reconstructed fine-scale 3D surface models of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency.

  3. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small; the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R² = 0.98) and 0.57 mm (R² = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem was highest with a 28-mm lens at the first and third camera positions, and these configurations also yielded the largest number of reconstructed fine-scale 3D surface models of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  4. Transient Hydraulic Tomography in the Field: 3-D K Estimation and Validation in a Highly Heterogeneous Unconfined Aquifer

    NASA Astrophysics Data System (ADS)

    Hochstetler, D. L.; Barrash, W.; Kitanidis, P. K.

    2014-12-01

    Characterizing subsurface hydraulic properties is essential for predicting flow and transport, and thus, for making informed decisions, such as selection and execution of a groundwater remediation strategy; however, obtaining accurate estimates at the necessary resolution with quantified uncertainty is an ongoing challenge. For over a decade, the development of hydraulic tomography (HT) - i.e., conducting a series of discrete interval hydraulic tests, observing distributed pressure signals, and analyzing the data through inversion of all tests together - has shown promise as a subsurface imaging method. Numerical and laboratory 3-D HT studies have enhanced and validated such methodologies, but there have been far fewer 3-D field characterization studies. We use 3-D transient hydraulic tomography (3-D THT) to characterize a highly heterogeneous unconfined alluvial aquifer at an active industrial site near Assemini, Italy. With 26 pumping tests conducted from 13 isolated vertical locations, and pressure responses measured at 63 spatial locations through five clusters of continuous multichannel tubing, we recorded over 800 drawdown curves during the field testing. Selected measurements from each curve were inverted in order to obtain an estimate of the distributed hydraulic conductivity field K(x) as well as uniform ("effective") values of specific storage Ss and specific yield Sy. The estimated K values varied across seven orders of magnitude, suggesting that this is one of the most heterogeneous sites at which HT has ever been conducted. Furthermore, these results are validated using drawdown observations from seven independent tests with pumping performed at multiple locations other than the main pumping well. The validation results are encouraging, especially given the uncertain nature of the problem. 
Overall, this research demonstrates the ability of 3-D THT to provide high-resolution estimates of structure and local K at a non-research site at the scale of a contaminant

  5. Analysis of passive cardiac constitutive laws for parameter estimation using 3D tagged MRI.

    PubMed

    Hadjicharalambous, Myrianthi; Chabiniok, Radomir; Asner, Liya; Sammut, Eva; Wong, James; Carr-White, Gerald; Lee, Jack; Razavi, Reza; Smith, Nicolas; Nordsletten, David

    2015-08-01

    An unresolved issue in patient-specific models of cardiac mechanics is the choice of an appropriate constitutive law, able to accurately capture the passive behavior of the myocardium, while still having uniquely identifiable parameters tunable from available clinical data. In this paper, we aim to facilitate this choice by examining the practical identifiability and model fidelity of constitutive laws often used in cardiac mechanics. Our analysis focuses on the use of novel 3D tagged MRI, providing detailed displacement information in three dimensions. The practical identifiability of each law is examined by generating synthetic 3D tags from in silico simulations, allowing mapping of the objective function landscape over parameter space and comparison of minimizing parameter values with original ground truth values. Model fidelity was tested by comparing these laws with the more complex transversely isotropic Guccione law, by characterizing their passive end-diastolic pressure-volume relation behavior, as well as by considering the in vivo case of a healthy volunteer. These results show that a reduced form of the Holzapfel-Ogden law provides the best balance between identifiability and model fidelity across the tests considered. PMID:25510227

  6. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.
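
    The hypothesised linear relation between the number of LIDAR beam impacts and tree leaf area can be calibrated by ordinary least squares; a minimal sketch, where the calibration pairs are hypothetical values, not measurements from the study:

```python
import numpy as np

def fit_leaf_area_model(impacts, leaf_area):
    """Least-squares fit of the hypothesised linear relation
    leaf_area ~ a * impacts + b between LIDAR beam impacts and leaf area."""
    a, b = np.polyfit(np.asarray(impacts, float), np.asarray(leaf_area, float), 1)
    return a, b

# hypothetical calibration pairs: (impact counts, leaf area in m^2)
impacts = [1200, 2300, 3100, 4000, 5200]
area = [0.8, 1.5, 2.1, 2.6, 3.4]
a, b = fit_leaf_area_model(impacts, area)
estimated_area = a * 2800 + b   # predict leaf area for a newly scanned tree
```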

  7. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing. PMID:22163926

  8. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
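
    The convex-hulling step described above can be sketched as follows, assuming a uniform tissue density; the ~1050 kg/m³ value and the function name are illustrative choices, not details from the paper:

```python
import numpy as np
from scipy.spatial import ConvexHull

def segment_parameters(points, density=1050.0):
    """Estimate mass and centre of mass of a body segment from its point cloud.

    The segment is approximated by the convex hull of its points, assuming
    uniform tissue density (kg/m^3); ~1050 is a common literature value."""
    hull = ConvexHull(points)
    ref = points[hull.vertices].mean(axis=0)   # interior reference point
    volume, centroid = 0.0, np.zeros(3)
    for simplex in hull.simplices:             # decompose hull into tetrahedra
        a, b, c = points[simplex]
        v = abs(np.dot(a - ref, np.cross(b - ref, c - ref))) / 6.0
        volume += v
        centroid += v * (a + b + c + ref) / 4.0
    centroid /= volume
    return density * volume, centroid

# unit cube sampled at its corners (hypothetical stand-in for a scanned segment)
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
mass, com = segment_parameters(cube)
```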

  9. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2001-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts to characterize disease from ultrasonic backscatterer measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing spherical scatterers randomly located: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. 3DZM results were compared to estimates obtained independently against ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.

  10. Effect of GIA models with 3D composite mantle viscosity on GRACE mass balance estimates for Antarctica

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Whitehouse, Pippa L.; Schrama, Ernst J. O.

    2015-03-01

    Seismic data indicate that there are large viscosity variations in the mantle beneath Antarctica. Consideration of such variations would affect predictions of models of Glacial Isostatic Adjustment (GIA), which are used to correct satellite measurements of ice mass change. However, most GIA models used for that purpose have assumed the mantle to be uniformly stratified in terms of viscosity. The goal of this study is to estimate the effect of lateral variations in viscosity on Antarctic mass balance estimates derived from the Gravity Recovery and Climate Experiment (GRACE) data. To this end, recently-developed global GIA models based on lateral variations in mantle temperature are tuned to fit constraints in the northern hemisphere and then compared to GPS-derived uplift rates in Antarctica. We find that these models can provide a better fit to GPS uplift rates in Antarctica than existing GIA models with a radially-varying (1D) rheology. When 3D viscosity models in combination with specific ice loading histories are used to correct GRACE measurements, mass loss in Antarctica is smaller than previously found for the same ice loading histories and their preferred 1D viscosity profiles. The variation in mass balance estimates arising from using different plausible realizations of 3D viscosity amounts to 20 Gt/yr for the ICE-5G ice model and 16 Gt/yr for the W12a ice model; these values are larger than the GRACE measurement error, but smaller than the variation arising from unknown ice history. While there exist 1D Earth models that can reproduce the total mass balance estimates derived using 3D Earth models, the spatial pattern of gravity rates can be significantly affected by 3D viscosity in a way that cannot be reproduced by GIA models with 1D viscosity. 
As an example, models with 1D viscosity always predict maximum gravity rates in the Ross Sea for the ICE-5G ice model; however, for one of the three preferred 3D models the maximum (for the same ice model) is found

  11. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on these synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution, and the displacement information is preserved even when halving the
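
    The component-wise RMS comparison between the high-resolution and downsampled displacement fields can be sketched as follows; the array shapes and names are assumptions for illustration, not details from the abstract:

```python
import numpy as np

def componentwise_rms(dvf_a, dvf_b):
    """Component-wise RMS difference between two displacement vector fields.

    dvf_a, dvf_b: arrays of shape (nx, ny, nz, 3) holding displacements in mm.
    Returns one RMS value per displacement component (x, y, z)."""
    diff = np.asarray(dvf_a, float) - np.asarray(dvf_b, float)
    return np.sqrt(np.mean(diff ** 2, axis=(0, 1, 2)))

rng = np.random.default_rng(0)
full = rng.normal(0.0, 2.0, size=(16, 16, 8, 3))      # "high-resolution" DVF
down = full + rng.normal(0.0, 0.1, size=full.shape)   # downsampled DVF + small error
print(componentwise_rms(full, down))
```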

  12. Intensity of joints associated with an extensional fault zone: an estimation by Poly3D.

    NASA Astrophysics Data System (ADS)

    Minelli, G.

    2003-04-01

    The presence and frequency of joints in sedimentary rocks strongly affect the mechanical and fluid flow properties of the host layers. Joint intensity is evaluated by spacing, S, the distance between neighbouring fractures, or by density, D = 1/S. Joint spacing in layered rocks is often linearly related to layer thickness T, with typical values of 0.5 T < S < 2.0 T. On the other hand, some field cases display very tight joints with S << T and nonlinear relations between spacing and thickness; most of these cases involve joint systems "genetically" related to a nearby fault zone. The present study, using the code Poly3D (Rock Fracture Project at Stanford), numerically explores the effect of the stress distribution in the neighbourhood of an extensional fault zone with respect to the mapped intensity of joints, both in the hanging wall and in the footwall (Willemse, 1997; Martel and Boger, 1998). Poly3D is a C-language computer program that calculates the displacements, strains and stresses induced in an elastic whole- or half-space by planar, polygonal-shaped elements of displacement discontinuity (Willemse and Pollard, 2000). Dislocations of varying shapes may be combined to yield complex three-dimensional surfaces well suited for modeling fractures, faults, and cavities in the earth's crust. The algebraic expressions for the elastic fields around a polygonal element are derived by superposing the solution for an angular dislocation in an elastic half-space. The field data were collected in a quarry located close to the town of Noci (Puglia) using the scan-line methodology. In this quarry, a platform limestone with regular bedding and very few shale or marly intercalations, displaced by a normal fault, is exposed. The comparison between the mapped joint intensity and the calculated stress around the fault displays good agreement. Nevertheless the intrinsic limitations (isotropic medium and elastic behaviour
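
    The spacing/density relation quoted above (D = 1/S, with 0.5 T < S < 2.0 T for layer-controlled jointing) amounts to the following trivial numeric sketch; the function names are illustrative:

```python
def joint_density(spacing_m):
    """Joint density D = 1/S (fractures per metre) from mean spacing S."""
    return 1.0 / spacing_m

def expected_spacing_range(thickness_m):
    """Typical spacing bounds for layer-controlled jointing: 0.5*T <= S <= 2*T."""
    return 0.5 * thickness_m, 2.0 * thickness_m

lo, hi = expected_spacing_range(0.4)   # e.g. a 0.4 m thick limestone bed
# fault-related joints with S << T fall well below this range, e.g. S = 0.05 m
```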

  13. Anechoic Sphere Phantoms for Estimating 3-D Resolution of Very High Frequency Ultrasound Scanners

    PubMed Central

    Madsen, Ernest L.; Frank, Gary R.; McCormick, Matthew M.; Deaner, Meagan E.; Stiles, Timothy A.

    2013-01-01

    Two phantoms have been constructed for assessing the performance of high-frequency ultrasound imagers. They also allow for periodic quality assurance tests. The phantoms contain eight blocks of tissue-mimicking material, where each block contains a spatially random distribution of suitably small anechoic spheres having a small distribution of diameters. The eight mean sphere diameters range from 0.10 to 1.09 mm. The two phantoms differ primarily in the backscatter coefficient of the background material in which the spheres are suspended. The mean scatterer diameter for one phantom is larger than that for the other, resulting in a lesser increase in backscatter coefficient for the second phantom; however, the backscatter curves cross at about 35 MHz. Since spheres have no preferred orientation, all three (spatial) dimensions of resolution contribute to sphere detection on an equal basis; thus, the resolution is termed 3-D. Two high-frequency scanners are compared. One employs single-element (fixed-focus) transducers, and the other employs variable-focus linear arrays. The nominal frequencies for the single-element transducers were 25 and 55 MHz, and for the linear-array transducers 20, 30 and 40 MHz. The depth range for detection of spheres of each size is determined, corresponding to determination of 3-D resolution as a function of depth. As expected, the single-element transducers are severely limited in useful imaging depth range compared with the linear arrays. Note that these phantoms could also be useful for training technicians in the use of higher-frequency scanners. PMID:20889416

  14. Application of optical 3D measurement on thin film buckling to estimate interfacial toughness

    NASA Astrophysics Data System (ADS)

    Jia, H. K.; Wang, S. B.; Li, L. A.; Wang, Z. Y.; Goudeau, P.

    2014-03-01

    The shape-from-focus (SFF) method has been widely studied as a passive depth recovery and 3D reconstruction method for digital images. An important step in SFF is the calculation of the focus level for different points in an image by using a focus measure. In this work, an image entropy-based focus measure is introduced into the SFF method to measure the 3D buckling morphology of an aluminum film on a polymethylmethacrylate (PMMA) substrate at a micro scale. Spontaneous film wrinkles and telephone-cord wrinkles are investigated after the deposition of a 300 nm thick aluminum film onto the PMMA substrate. Spontaneous buckling is driven by the highly compressive stress generated in the Al film during the deposition process. The interfacial toughness between metal films and substrates is an important parameter for the reliability of the film/substrate system. The height profiles of different sections across the telephone-cord wrinkle can be considered a straight-sided model with uniform width and height or a pinned circular model that has a delamination region characterized by a sequence of connected sectors. Furthermore, the telephone-cord geometry of the thin film can be used to calculate interfacial toughness. The instability of the finite element model is introduced to fit the buckling morphology obtained by SFF. The interfacial toughness is determined to be 0.203 J/m² at a 70.4° phase angle from the straight-sided model and 0.105 J/m² at 76.9° from the pinned circular model.
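
    An entropy-based focus measure of the kind described, together with the per-window depth selection at the core of SFF, might be sketched as follows; the window size, bin count and function names are assumptions, not the authors' implementation:

```python
import numpy as np

def image_entropy(patch, bins=32):
    """Shannon entropy of a grayscale patch, used as a focus measure:
    well-focused patches have richer intensity content, hence higher entropy."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def depth_from_focus(stack, win=8):
    """For each window, return the index of the focus-stack slice with maximal
    entropy. stack: (n_slices, H, W) focus stack with intensities in [0, 1]."""
    n, h, w = stack.shape
    depth = np.zeros((h // win, w // win), dtype=int)
    for i in range(h // win):
        for j in range(w // win):
            patches = stack[:, i * win:(i + 1) * win, j * win:(j + 1) * win]
            scores = [image_entropy(p) for p in patches]
            depth[i, j] = int(np.argmax(scores))
    return depth
```

    Mapping the winning slice index through the known focus step then yields the height profile at each window.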

  15. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false-positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  16. CO2 mass estimation visible in time-lapse 3D seismic data from a saline aquifer and uncertainties

    NASA Astrophysics Data System (ADS)

    Ivanova, A.; Lueth, S.; Bergmann, P.; Ivandic, M.

    2014-12-01

    At Ketzin (Germany), the first European onshore pilot-scale project for geological storage of CO2 was initiated in 2004. This multidisciplinary project includes 3D time-lapse seismic monitoring. A 3D pre-injection seismic survey was acquired in 2005. CO2 injection into a sandstone saline aquifer then started at a depth of 650 m in 2008. A first 3D seismic repeat survey was acquired in 2009 after 22 kilotons had been injected; the imaged CO2 signature was concentrated around the injection well (200-300 m). A second 3D seismic repeat survey was acquired in 2012 after 61 kilotons had been injected; the imaged CO2 signature extended further (100-200 m). The injection was terminated in 2013, after a total of 67 kilotons of CO2 had been injected. Time-lapse seismic processing, petrophysical data and geophysical logging of CO2 saturation have allowed for an estimate of the amount of CO2 visible in the seismic data. This estimate depends on the choice of a number of parameters and contains a number of uncertainties, chiefly the following. The constant reservoir porosity and CO2 density used for the estimation are probably an over-simplification, since the reservoir is quite heterogeneous. Velocity dispersion may be present in the Ketzin reservoir rocks, but we do not consider it large enough to affect the estimated mass of CO2. There are only a small number of direct petrophysical observations, providing a weak statistical basis for the determination of seismic velocities as a function of CO2 saturation, and we have assumed that the petrophysical experiments were carried out on samples representative of the average properties of the whole reservoir. Finally, most of the time delay values in both 3D seismic repeat surveys within the amplitude anomaly are near the noise level of 1-2 ms; however, a change of 1 ms in the time delay significantly affects the mass estimate, so the choice of the time-delay cutoff is crucial. 
In spite

  17. Building continental-scale 3D subsurface layers in the Digital Crust project: constrained interpolation and uncertainty estimation.

    NASA Astrophysics Data System (ADS)

    Yulaeva, E.; Fan, Y.; Moosdorf, N.; Richard, S. M.; Bristol, S.; Peters, S. E.; Zaslavsky, I.; Ingebritsen, S.

    2015-12-01

    The Digital Crust EarthCube building block creates a framework for integrating disparate 3D/4D information from multiple sources into a comprehensive model of the structure and composition of the Earth's upper crust, and demonstrates the utility of this model in several research scenarios. One such scenario is the estimation of various crustal properties related to fluid dynamics (e.g. permeability and porosity) at each node of an arbitrary unstructured 3D grid, to support continental-scale numerical models of fluid flow and transport. Starting from Macrostrat, an existing 4D database of 33,903 chronostratigraphic units, and employing GeoDeepDive, a software system for extracting structured information from unstructured documents, we construct 3D gridded fields of sediment/rock porosity, permeability and geochemistry for large sedimentary basins of North America, which will be used to improve our understanding of large-scale fluid flow, chemical weathering rates, and geochemical fluxes into the ocean. In this talk, we discuss the methods, data gaps (particularly in geologically complex terrain), and various physical and geological constraints on interpolation and uncertainty estimation.

  18. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between 4DCBCT phases, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated from the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization error and the intensity differences in the ten patients were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models; 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT
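
    The PCA step of such a motion model, i.e. extracting a mean displacement field plus leading components from phase-to-reference displacement vector fields, can be sketched as follows; the shapes and names are assumptions, not the authors' implementation:

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """Build a PCA motion model from phase-to-reference displacement fields.

    dvfs: (n_phases, n_voxels * 3) flattened displacement vector fields.
    Returns the mean field and the leading principal components, so any
    breathing state is approximated as mean + sum_k w_k * component_k."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    # SVD of the centered data: rows of vt are the principal components
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct(mean, components, weights):
    """Displacement field for a given set of PCA coefficients."""
    return mean + np.asarray(weights) @ components
```

    In the method described above, the weights would be optimized so that projections of the deformed reference image match the measured cone-beam projections.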

  19. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe⁰ globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe⁰ globules in 9 lunar soils shows that the average 2D/3D ratio of means is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to an estimate of their actual 3D size. © 2005 Geological Society of America.
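
    The theoretical π/4 factor can be checked with a quick Monte Carlo sketch: a random plane cutting a sphere of radius R at offset z exposes a circle of radius √(R² − z²), and the mean circle radius over uniformly distributed offsets tends to (π/4)·R:

```python
import math
import random

def mean_section_radius(sphere_radius=1.0, n=200_000, seed=42):
    """Monte Carlo estimate of the mean 2D cross-section radius of a sphere
    cut by random planes (offset z ~ U(0, R) by symmetry); tends to (pi/4)*R."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.uniform(0.0, sphere_radius)
        total += math.sqrt(sphere_radius ** 2 - z ** 2)
    return total / n

ratio = mean_section_radius()   # approaches pi/4, i.e. about 0.785
factor = 1.0 / ratio            # approaches 1.273, the recommended 2D-to-3D factor
```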

  20. Novel methods for estimating 3D distributions of radioactive isotopes in materials

    NASA Astrophysics Data System (ADS)

    Iwamoto, Y.; Kataoka, J.; Kishimoto, A.; Nishiyama, T.; Taya, T.; Okochi, H.; Ogata, H.; Yamamoto, S.

    2016-09-01

    In recent years, various gamma-ray visualization techniques, or gamma cameras, have been proposed. These techniques are extremely effective for identifying "hot spots", or regions where radioactive isotopes accumulate. Examples would be nuclear-disaster-affected areas such as Fukushima or the vicinity of nuclear reactors. However, the images acquired with a gamma camera do not include distance information between the radioactive isotopes and the camera, and hence are "degenerated" in the direction of the isotopes. Moreover, depth information in the images is lost when the isotopes are embedded in materials such as water, sand, and concrete. Here, we propose two methods of obtaining depth information of radioactive isotopes embedded in materials by comparing (1) their spectra and (2) images of incident gamma rays scattered by the materials and direct gamma rays. In the first method, the spectra of radioactive isotopes and the ratios of scattered to direct gamma rays are obtained. We verify experimentally that the ratio increases with increasing depth, as predicted by simulations. Although the method using energy spectra has been studied for a long time, an advantage of our method is the use of low-energy (50-150 keV) photons as scattered gamma rays. In the second method, the spatial extent of images obtained for direct and scattered gamma rays is compared. By performing detailed Monte Carlo simulations using Geant4, we verify that the spatial extent of the position where gamma rays are scattered increases with increasing depth. To demonstrate this, we are developing various gamma cameras to compare low-energy (scattered) gamma-ray images with fully photo-absorbed gamma-ray images. We also demonstrate that 3D reconstruction of isotopes/hotspots is possible with our proposed methods. These methods have potential applications in medicine and in severe environments such as the nuclear-disaster-affected areas in Fukushima.

  1. Estimation of 3-D Cloud Effects on TOMS Satellite Retrieval of Surface UV Irradiance

    NASA Technical Reports Server (NTRS)

    Krotkov, Nickolay A.; Geogdzhayev, I.; Herman, J. R.

    1998-01-01

    To improve surface UV irradiance retrieval from the Total Ozone Mapping Spectrometer (TOMS), we simulate errors of the TOMS cloud correction algorithm for summertime broken cloud conditions. Cloud scenes (50 km by 50 km) are modeled by a normal random (Gaussian) field with a fixed lower boundary and conservative scattering. The model relates stochastic field characteristics with the cloud amount, mean cloud diameter and aspect ratio. Clouds are embedded into a Rayleigh atmosphere with a standard ozone profile. Radiative transfer calculations of the radiance at the top of the atmosphere and irradiance at the surface were performed using a 3-D Monte Carlo (MC) code. The results are averaged over the satellite field of view on the surface (50 km by 50 km) and compared with TOMS predicted surface irradiance for the same scene reflectance. The TOMS algorithm assumes a horizontally homogeneous C1-type cloud between 3 km and 5.5 km. The effective optical depth is determined by fitting observed (MC) radiance at 380 nm. For the same radiance at the satellite, the homogeneous and broken cloud models predict different average irradiances at the surface. This is due to the differences in the Bidirectional Reflection Distribution Function (BRDF) for homogeneous and broken cloud scenes with the same hemispherical albedo. For typical TOMS observational geometry at mid-latitudes, the simulated single-pixel errors may be as large as +/- 20%. Qualitatively, these errors are due to the dominance of the non-horizontal cloud surfaces, which are not accounted for in the homogeneous cloud model. However, due to the high variability of real cloud shapes and types, it is unclear how these single-pixel errors would affect TOMS time-integrated UV exposure over extended periods (weeks to months) for different regions.

  2. Suspect Height Estimation Using the Faro Focus(3D) Laser Scanner.

    PubMed

    Johnson, Monique; Liscio, Eugene

    2015-11-01

    At present, very little research has been devoted to investigating the ability of laser scanning technology to accurately measure height from surveillance video. The goal of this study was to test the accuracy of one particular laser scanner to estimate suspect height from video footage. The known heights of 10 individuals were measured using an anthropometer. The individuals were then recorded on video walking along a predetermined path in a simulated crime scene environment both with and without headwear. The difference between the known heights and the estimated heights obtained from the laser scanner software was compared using a one-way t-test. The height estimates obtained from the software were not significantly different from the known heights whether individuals were wearing headwear (p = 0.186) or not (p = 0.707). Thus, laser scanning is one technique that could potentially be used by investigators to determine suspect height from video footage.
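
    The statistical comparison described above can be sketched in a few lines. The height data below are hypothetical (the study's actual measurements are not reproduced here), and a one-sample t-test on the known-vs-estimated differences stands in for the reported test.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical data: known heights (anthropometer) and heights estimated
    # from the laser-scanner software, in cm. Values are illustrative only.
    known = np.array([168.2, 175.4, 181.0, 162.8, 170.5,
                      177.9, 165.3, 172.1, 179.6, 169.0])
    estimated = known + np.random.default_rng(0).normal(0.0, 0.8, size=known.size)

    # Test whether the mean per-subject difference departs from zero,
    # i.e. whether the scanner estimates are biased relative to ground truth.
    t_stat, p_value = stats.ttest_1samp(estimated - known, popmean=0.0)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 would indicate no significant bias
    ```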

  3. Guided wave-based J-integral estimation for dynamic stress intensity factors using 3D scanning laser Doppler vibrometry

    NASA Astrophysics Data System (ADS)

    Ayers, J.; Owens, C. T.; Liu, K. C.; Swenson, E.; Ghoshal, A.; Weiss, V.

    2013-01-01

    The application of guided waves to interrogate remote areas of structural components has been researched extensively for characterizing damage. However, there is a sparsity of work on using piezoelectric transducer-generated guided waves as a method of assessing stress intensity factors (SIF). This quantitative information enables accurate estimation of the remaining life of metallic structures exhibiting cracks, such as military and commercial transport vehicles. The proposed full wavefield approach, based on 3D laser vibrometry and piezoelectric transducer-generated guided waves, provides a practical means for estimation of dynamic stress intensity factors (DSIF) through local strain energy mapping via the J-integral. Strain energies and traction vectors can be conveniently estimated from wavefield data recorded using 3D laser vibrometry, through interpolation and subsequent spatial differentiation of the response field. Upon estimation of the J-integral, it is possible to obtain the corresponding DSIF terms. For this study, the experimental test matrix consists of aluminum plates with manufactured defects representing canonical elliptical crack geometries under uniaxial tension that are excited by surface mounted piezoelectric actuators. The defects' major to minor axes ratios vary from unity to approximately 133. Finite element simulations are compared to experimental results and the relative magnitudes of the J-integrals are examined.

  4. Leaf Area Index Estimation in Vineyards from Uav Hyperspectral Data, 2d Image Mosaics and 3d Canopy Surface Models

    NASA Astrophysics Data System (ADS)

    Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.

    2015-08-01

    The indirect estimation of leaf area index (LAI) in large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels from the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, the accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, unhealthy plants and canopy.

  5. Estimation of vocal fold plane in 3D CT images for diagnosis of vocal fold abnormalities.

    PubMed

    Hewavitharanage, Sajini; Gubbi, Jayavardhana; Thyagarajan, Dominic; Lau, Ken; Palaniswami, Marimuthu

    2015-01-01

    Vocal folds are the key body structures that are responsible for phonation and for regulating air movement into and out of the lungs. Various vocal fold disorders may seriously impact the quality of life. When diagnosing vocal fold disorders, CT of the neck is the commonly used imaging method. However, the vocal folds do not align with the normal axial plane of the neck, and the plane containing the vocal cords and arytenoids varies during phonation. It is therefore important to develop an algorithm for detecting the actual plane containing the vocal folds. In this paper, we propose a method to automatically estimate the vocal fold plane using vertebral column and anterior commissure localization. Gray-level thresholding, connected component analysis, rule-based segmentation and unsupervised k-means clustering were used in the proposed algorithm. The anterior commissure segmentation method achieved an accuracy of 85%, in good agreement with the expert assessment. PMID:26736949

  6. Estimating the 3D pore size distribution of biopolymer networks from directionally biased data.

    PubMed

    Lang, Nadine R; Münster, Stefan; Metzner, Claus; Krauss, Patrick; Schürmann, Sebastian; Lange, Janina; Aifantis, Katerina E; Friedrich, Oliver; Fabry, Ben

    2013-11-01

    The pore size of biopolymer networks governs their mechanical properties and strongly impacts the behavior of embedded cells. Confocal reflection microscopy and second harmonic generation microscopy are widely used to image biopolymer networks; however, both techniques fail to resolve vertically oriented fibers. Here, we describe how such directionally biased data can be used to estimate the network pore size. We first determine the distribution of distances from random points in the fluid phase to the nearest fiber. This distribution follows a Rayleigh distribution, regardless of isotropy and data bias, and is fully described by a single parameter: the characteristic pore size of the network. The bias of the pore size estimate due to the missing fibers can be corrected by multiplication with the square root of the visible network fraction. We experimentally verify the validity of this approach by comparing our estimates with data obtained using confocal fluorescence microscopy, which represents the full structure of the network. As an important application, we investigate the pore size dependence of collagen and fibrin networks on protein concentration. We find that the pore size decreases with the square root of the concentration, consistent with a total fiber length that scales linearly with concentration. PMID:24209841
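
    The estimation procedure lends itself to a short sketch: fit the Rayleigh scale to nearest-fiber distances by maximum likelihood, then apply the square-root correction from the abstract. The distances and the visible-fraction value below are synthetic, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic nearest-fiber distances (micrometers) drawn from a Rayleigh
    # distribution with true characteristic pore size sigma = 2.0.
    sigma_true = 2.0
    distances = rng.rayleigh(scale=sigma_true, size=5000)

    # Maximum-likelihood estimate of the Rayleigh scale parameter:
    # sigma_hat = sqrt( mean(d^2) / 2 )
    sigma_hat = np.sqrt(np.mean(distances**2) / 2.0)

    # Had the distances been measured on directionally biased data in which
    # only a fraction of the fibers is visible, the (inflated) estimate would
    # be corrected by multiplying with sqrt(visible fraction):
    visible_fraction = 0.8  # illustrative value
    sigma_corrected = sigma_hat * np.sqrt(visible_fraction)
    print(sigma_hat, sigma_corrected)
    ```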

  7. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  8. Estimation of regeneration coverage in a temperate forest by 3D segmentation using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Amiri, Nina; Yao, Wei; Heurich, Marco; Krzystek, Peter; Skidmore, Andrew K.

    2016-10-01

    Forest understory and regeneration are important factors in sustainable forest management. However, understanding their spatial distribution in multilayered forests requires accurate and continuously updated field data, which are difficult and time-consuming to obtain. Therefore, cost-efficient inventory methods are required, and airborne laser scanning (ALS) is a promising tool for obtaining such information. In this study, we examine a clustering-based 3D segmentation in combination with ALS data for regeneration coverage estimation in a multilayered temperate forest. The core of our method is a two-tiered segmentation of the 3D point clouds into segments associated with regeneration trees. First, small parts of trees (super-voxels) are constructed through mean shift clustering, a nonparametric procedure for finding the local maxima of a density function. In the second step, we form a graph based on the mean shift clusters and merge them into larger segments using the normalized cut algorithm. These segments are used to obtain regeneration coverage of the target plot. Results show that, based on validation data from field inventory and terrestrial laser scanning (TLS), our approach correctly estimates up to 70% of regeneration coverage across the plots with different properties, such as tree height and tree species. The proposed method is negatively impacted by the density of the overstory because of decreasing ground point density. In addition, the estimated coverage has a strong relationship with the overstory tree species composition.
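
    The first tier of the segmentation, forming super-voxels by mean shift, can be sketched with a naive implementation; the normalized-cut merging step and real ALS data are omitted, and the point cloud below is synthetic.

    ```python
    import numpy as np

    def mean_shift(points, bandwidth=1.0, n_iter=30):
        """Naive mean shift: move every query point toward the local density
        maximum of the fixed point set, using a flat kernel of given bandwidth."""
        shifted = points.copy()
        for _ in range(n_iter):
            for i, p in enumerate(shifted):
                dists = np.linalg.norm(points - p, axis=1)
                shifted[i] = points[dists < bandwidth].mean(axis=0)
        return shifted

    # Illustrative ALS-like point cloud: two small "regeneration tree" clusters.
    rng = np.random.default_rng(1)
    cloud = np.vstack([
        rng.normal([0, 0, 1.0], 0.2, size=(50, 3)),   # tree 1
        rng.normal([5, 5, 1.5], 0.2, size=(50, 3)),   # tree 2
    ])
    modes = mean_shift(cloud, bandwidth=1.0)

    # Points from each cluster converge toward that cluster's density mode.
    n_modes = len(np.unique(np.round(modes, 1), axis=0))
    print(n_modes)
    ```

    In the paper these mean-shift clusters are then treated as graph nodes and merged by the normalized cut algorithm, which is not reproduced here.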

  9. Estimation and 3-D modeling of seismic parameters for fluvial systems

    SciTech Connect

    Brown, R.L.; Levey, R.A.

    1994-12-31

    Borehole measurements of parameters related to seismic propagation (Vp, Vs, Qp and Qs) are seldom available at all the wells within an area of study. Well logs and other available data can be used along with certain results from laboratory measurements to predict seismic parameters at wells where these measurements are not available. Next, three dimensional interpolation techniques based upon geological constraints can then be used to estimate the spatial distribution of geophysical parameters within a given environment. The net product is a more realistic model of the distribution of geophysical parameters which can be used in the design of surface and borehole seismic methods for probing the reservoir.

  10. Automated voxelization of 3D atom probe data through kernel density estimation.

    PubMed

    Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna

    2015-12-01

    Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of voxel size as an input parameter, through visual supervision, making the final outcome user-dependent, reliant on heuristic knowledge and potentially prone to error. This work utilizes kernel density estimation to select an optimal voxel size in an unsupervised manner to perform feature selection, in particular targeting resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ / γ' interface in a Ni-Al-Cr superalloy. PMID:25825028
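
    The idea of letting a density estimator choose the smoothing scale, rather than a user-tuned voxel size, can be illustrated with SciPy's `gaussian_kde`, whose bandwidth is selected automatically (Scott's rule by default). This is a 1-D analogy, not the paper's APT-specific algorithm, and the atom positions are synthetic.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(7)

    # Illustrative 1-D atom positions across an interface: solute atoms are
    # three times denser on one side than the other.
    positions = np.concatenate([rng.normal(-2.0, 0.5, 300),
                                rng.normal(2.0, 0.5, 100)])

    # gaussian_kde picks its bandwidth automatically, playing the role of the
    # unsupervised voxel-size selection described above.
    kde = gaussian_kde(positions)
    x = np.linspace(-4, 4, 200)
    density = kde(x)

    # The estimated density peaks where the atoms concentrate.
    print(x[np.argmax(density)])
    ```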

  11. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component of robotic navigation and mapping applications.

  12. ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation

    NASA Technical Reports Server (NTRS)

    Richardson, A. O.

    1996-01-01

    This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one-to-three keywords), and were made up of rare, unambiguous words. In such cases, as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated, comprising the six deviations and their time rates of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
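
    The filter described in the second part can be sketched as a standard linear Kalman filter with a 12-dimensional state (six deviations plus their rates) and six measured deviations. The matrices and noise levels below are illustrative assumptions, not the values determined in the report.

    ```python
    import numpy as np

    dt = 0.1          # time step between 2-D image measurements (assumed)
    n, m = 12, 6      # state: 6 deviations + their rates; measurement: 6 deviations

    # State transition: each deviation integrates its own rate.
    F = np.eye(n)
    F[:m, m:] = dt * np.eye(m)

    H = np.hstack([np.eye(m), np.zeros((m, m))])  # observe deviations only
    Q = 1e-4 * np.eye(n)                          # system noise covariance
    R = 1e-2 * np.eye(m)                          # observation noise covariance

    x = np.zeros(n)   # state estimate
    P = np.eye(n)     # simple initialization of the error covariance

    def kalman_step(x, P, z):
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(n) - K @ H) @ P_pred
        return x_new, P_new

    # Feed noisy measurements of deviations drifting at constant rates.
    rng = np.random.default_rng(3)
    true_rate = np.array([0.5, -0.2, 0.1, 0.05, -0.05, 0.02])
    for k in range(1, 201):
        z = true_rate * k * dt + rng.normal(0, 0.1, m)
        x, P = kalman_step(x, P, z)

    print(x[m:])  # estimated rates approach true_rate
    ```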

  13. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428

  14. Estimation of passive and active properties in the human heart using 3D tagged MRI.

    PubMed

    Asner, Liya; Hadjicharalambous, Myrianthi; Chabiniok, Radomir; Peresutti, Devis; Sammut, Eva; Wong, James; Carr-White, Gerald; Chowienczyk, Philip; Lee, Jack; King, Andrew; Smith, Nicolas; Razavi, Reza; Nordsletten, David

    2016-10-01

    Advances in medical imaging and image processing are paving the way for personalised cardiac biomechanical modelling. Models provide the capacity to relate kinematics to dynamics and, through patient-specific modelling, derived material parameters to underlying cardiac muscle pathologies. However, for clinical utility to be achieved, model-based analyses mandate robust model selection and parameterisation. In this paper, we introduce a patient-specific biomechanical model for the left ventricle aiming to balance model fidelity with parameter identifiability. Using non-invasive data and common clinical surrogates, we illustrate unique identifiability of passive and active parameters over the full cardiac cycle. Identifiability and accuracy of the estimates in the presence of controlled noise are verified with a number of in silico datasets. Unique parametrisation is then obtained for three datasets acquired in vivo. The model predictions show good agreement with the data extracted from the images, providing a pipeline for personalised biomechanical analysis.

  15. A computational model for estimating tumor margins in complementary tactile and 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Shamsil, Arefin; Escoto, Abelardo; Naish, Michael D.; Patel, Rajni V.

    2016-03-01

    Conventional surgical methods are effective for treating lung tumors; however, they impose high trauma and pain to patients. Minimally invasive surgery is a safer alternative as smaller incisions are required to reach the lung; however, it is challenging due to inadequate intraoperative tumor localization. To address this issue, a mechatronic palpation device was developed that incorporates tactile and ultrasound sensors capable of acquiring surface and cross-sectional images of palpated tissue. Initial work focused on tactile image segmentation and fusion of position-tracked tactile images, resulting in a reconstruction of the palpated surface to compute the spatial locations of underlying tumors. This paper presents a computational model capable of analyzing orthogonally-paired tactile and ultrasound images to compute the surface circumference and depth margins of a tumor. The framework also integrates an error compensation technique and an algebraic model to align all of the image pairs and to estimate the tumor depths within the tracked thickness of a palpated tissue. For validation, an ex vivo experimental study was conducted involving the complete palpation of 11 porcine liver tissues injected with iodine-agar tumors of varying sizes and shapes. The resulting tactile and ultrasound images were then processed using the proposed model to compute the tumor margins and compare them to fluoroscopy-based physical measurements. The results show a good negative correlation (r = -0.783, p = 0.004) for the tumor surface margins and a good positive correlation (r = 0.743, p = 0.009) for the tumor depth margins relative to the physical measurements.
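
    The reported validation statistic is a Pearson correlation between model-derived margins and fluoroscopy-based measurements; with SciPy it reduces to a single call. The margin values below are invented for illustration, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical tumor margins (mm) from the tactile/ultrasound model and
    # corresponding fluoroscopy-based physical measurements (11 samples).
    model_margin = np.array([4.1, 5.0, 3.2, 6.4, 5.8, 4.7, 3.9, 6.0, 5.1, 4.4, 5.5])
    physical     = np.array([4.0, 5.2, 3.0, 6.1, 6.0, 4.5, 4.2, 5.8, 5.3, 4.1, 5.7])

    # Pearson correlation coefficient and two-sided p-value.
    r, p = stats.pearsonr(model_margin, physical)
    print(f"r = {r:.3f}, p = {p:.4f}")
    ```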

  16. Comparison of parallel and spiral tagged MRI geometries in estimation of 3-D myocardial strains

    NASA Astrophysics Data System (ADS)

    Tustison, Nicholas J.; Amini, Amir A.

    2005-04-01

    Research involving the quantification of left ventricular myocardial strain from cardiac tagged magnetic resonance imaging (MRI) is extensive. Two different imaging geometries are commonly employed by these methodologies to extract longitudinal deformation. We refer to these imaging geometries as either parallel or spiral. In the spiral configuration, four long-axis tagged image slices which intersect along the long-axis of the left ventricle are collected, and in the parallel configuration, contiguous tagged long-axis images spanning the width of the left ventricle between the lateral wall and the septum are collected. Despite the number of methodologies using either or both imaging configurations, to date, no comparison has been made to determine which geometry results in more accurate estimation of strains. Using previously published work in which left ventricular myocardial strain is calculated from 4-D anatomical NURBS models, we compare the strain calculated from these two imaging geometries in both simulated tagged MR images, for which ground-truth strain is available, and in in vivo data. It is shown that strains calculated using the parallel imaging protocol are more accurate than those calculated using the spiral protocol.

  17. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  18. Landscape scale estimation of soil carbon stock using 3D modelling.

    PubMed

    Veronesi, F; Corstanje, R; Mayr, T

    2014-07-15

    Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This, in part, is due to the fact that soil C is not uniformly distributed through the soil depth profile, and most current landscape-level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil specific depth functions to map the soil C stock in three dimensions at landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km(2). We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross validation and an independent validation. The cross-validation resulted in an R(2) of 36% for soil C and 44% for BULKD. These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are between ± 5% of soil C. This indicates a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models.

  19. Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial

    NASA Astrophysics Data System (ADS)

    Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.

    2011-03-01

    Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.

  20. Estimating the detectability of faults in 3D-seismic data - A valuable input to Induced Seismic Hazard Assessment (ISHA)

    NASA Astrophysics Data System (ADS)

    Goertz, A.; Kraft, T.; Wiemer, S.; Spada, M.

    2012-12-01

    In the past several years, some geotechnical operations that inject fluid into the deep subsurface, such as oil and gas development, waste disposal, and geothermal energy development, have been found or suspected to cause small to moderate sized earthquakes. In several cases the largest events occurred on previously unmapped faults, within or in close vicinity to the operated reservoirs. The obvious conclusion drawn from this finding, also expressed in most recently published best practice guidelines and recommendations, is to avoid injecting into faults. Yet, how certain can we be that all faults relevant to induced seismic hazard have been identified, even around well studied sites? Here we present a probabilistic approach to assess the capability of detecting faults by means of 3D seismic imaging. First, we populate a model reservoir with seed faults of random orientation and slip direction. Drawing random samples from a Gutenberg-Richter distribution, each seed fault is assigned a magnitude and corresponding size using standard scaling relations based on a circular rupture model. We then compute the minimum resolution of a 3D seismic survey for given acquisition parameters and frequency bandwidth. Assuming a random distribution of medium properties and distribution of image frequencies, we obtain a probability that a fault of a given size is detected, or respectively overlooked, by the 3D seismic. Weighting the initial Gutenberg-Richter fault size distribution with the probability of imaging a fault, we obtain a modified fault size distribution in the imaged volume from which we can constrain the maximum magnitude to be considered in the seismic hazard assessment of the operation. We can further quantify the value of information associated with the seismic image by comparing the expected insured value loss between the image-weighted and the unweighted hazard estimates.
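
    The sampling step of the proposed approach, drawing magnitudes from a Gutenberg-Richter distribution and converting each to a rupture size via a circular (Eshelby) crack model, can be sketched as follows. The b-value, stress drop, and resolution threshold are illustrative assumptions, not the study's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Sample magnitudes from a Gutenberg-Richter distribution (b = 1) truncated
    # at Mw 0.5; GR implies exponentially distributed magnitudes above the cutoff.
    b, m_min, n = 1.0, 0.5, 10000
    mags = m_min + rng.exponential(scale=1.0 / (b * np.log(10)), size=n)

    # Convert moment magnitude to seismic moment (N*m, Hanks-Kanamori) and then
    # to the radius of a circular rupture with a 3 MPa stress drop.
    stress_drop = 3e6
    M0 = 10.0 ** (1.5 * mags + 9.1)
    radius = (7.0 * M0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

    # Faults smaller than the seismic image resolution (say 25 m) would be
    # overlooked; the detected fraction reweights the fault size distribution.
    resolution = 25.0
    p_detect = np.mean(2 * radius >= resolution)
    print(f"fraction of sampled faults larger than the resolution: {p_detect:.3f}")
    ```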

  1. Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.

    PubMed

    Nosrati, Masoud S; Abugharbieh, Rafeef; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Hamarneh, Ghassan

    2016-01-01

    In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness. PMID:26151933

  2. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in the calculation of effective dose from these examinations. In the study, the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.
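    The trade-off described above can be sketched numerically. The following is a minimal illustration with hypothetical tube voltages, DAP values, and a made-up linear DAP-to-effective-dose conversion factor (not PCXMC's actual coefficients), comparing per-projection summation (method 1) with the total-DAP-at-mean-voltage shortcut (method 2):

```python
# Hypothetical numbers and an assumed linear conversion factor, for
# illustration only (real coefficients depend on geometry, filtration,
# field size, etc.).

def conversion_factor(kv):
    """Assumed DAP-to-effective-dose factor (mSv per Gy*cm^2), rising with kV."""
    return 0.10 + 0.002 * (kv - 70)

projections = [  # (tube voltage in kV, DAP in Gy*cm^2) per projection image
    (70, 0.8), (75, 0.9), (80, 1.1), (85, 1.0), (90, 0.7),
]

# Method 1: individual exposure values for each projection image
dose1 = sum(dap * conversion_factor(kv) for kv, dap in projections)

# Method 2: mean tube voltage applied to the total DAP
mean_kv = sum(kv for kv, _ in projections) / len(projections)
total_dap = sum(dap for _, dap in projections)
dose2 = total_dap * conversion_factor(mean_kv)

diff_pct = 100.0 * abs(dose2 - dose1) / dose1
print(round(dose1, 3), round(dose2, 3), round(diff_pct, 2))
```

    With these made-up inputs the two methods differ by well under 5%, mirroring the study's finding for methods 1 and 2; the gap grows when the per-projection factors vary more strongly.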

  3. Estimation of Precambrian basement topography in Central and Southeastern Wisconsin from 3D modeling of gravity and aeromagnetic data

    NASA Astrophysics Data System (ADS)

    Skalbeck, John D.; Koski, Adrian J.; Peterson, Matthew T.

    2014-07-01

    Increased concerns about groundwater resources in Wisconsin have brought about the need for a better understanding of the subsurface geologic structure, which leads to developing conceptual hydrogeologic models for numerical simulation of groundwater flow. Models are often based on sparse data from well logs that are usually located large distances apart and limited in depth. Model assumptions based on limited spatial data typically require simplifications that may add uncertainty to the simulation results and the accuracy of a groundwater model. Three-dimensional (3D) modeling of gravity and aeromagnetic data provides another tool for the groundwater modeler to better constrain the conceptual model of a hydrogeologic system. The area near the Waukesha Fault in southeastern Wisconsin provides an excellent research opportunity for our proposed approach because of the strong gravity and aeromagnetic anomalies associated with the fault, the apparent complexity in fault geometry, and uncertainty in Precambrian basement depth and structure. Fond du Lac County provides another opportunity to apply this approach because the Precambrian basement topography throughout the area is known to be highly undulating, and this uneven basement surface controls water well yields and creates zones of stagnant water. The results of the 3D modeling of gravity and aeromagnetic data provide a detailed estimation of the Precambrian basement topography in Fond du Lac County and southeastern Wisconsin that may be useful in determining groundwater flow and quality in this region.

  4. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is then possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures lie in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from MRE data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM's sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: different biases on the identified parameters are induced by the spatial resolution and experimental noise. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416

  5. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    NASA Astrophysics Data System (ADS)

    McCurdy, B. M. C.

    2013-06-01

    An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters, since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possesses many characteristics that make it much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, real-time capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well suited to dosimetric applications. This review discusses clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations, along with methods for using EPIDs in dosimetry applications. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment are overviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (i.e., hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  6. Volume of myocardium perfused by coronary artery branches as estimated from 3D micro-CT images of rat hearts

    NASA Astrophysics Data System (ADS)

    Lund, Patricia E.; Naessens, Lauren C.; Seaman, Catherine A.; Reyes, Denise A.; Ritman, Erik L.

    2000-04-01

    Average myocardial perfusion is remarkably consistent throughout the heart wall under resting conditions, and the velocity of blood flow is fairly reproducible from artery to artery. Based on these observations, and the fact that flow through an artery is the product of arterial cross-sectional area and blood flow velocity, we would expect the volume of myocardium perfused to be proportional to the cross-sectional area of the coronary artery perfusing that volume of myocardium. This relationship has been confirmed by others in pigs, dogs, and humans. To test the body-size dependence of this relationship, we used hearts from rats 3 through 25 weeks of age. The coronary arteries were infused with radiopaque Microfil polymer and the hearts scanned in a micro-CT scanner. Using these 3D images, we measured the volume of myocardium and the cross-sectional area of the artery that perfused that volume of myocardium. The average constant of proportionality was found to be 0.15 ± 0.08 cm3/mm2. Our data showed no statistically different estimates of the constant of proportionality among rat hearts of different ages or between the left and right coronary arteries. This constant is smaller than that observed in large animals and humans, but the difference is consistent with the body-mass dependence of metabolic rate.
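    The reported proportionality can be applied directly. In this sketch only the constant k = 0.15 cm3/mm2 comes from the abstract; the example artery is hypothetical:

```python
# Only the constant below is taken from the abstract; the artery is hypothetical.
K_RAT = 0.15  # cm^3 of perfused myocardium per mm^2 of arterial cross-section

def perfused_volume_cm3(artery_area_mm2, k=K_RAT):
    """Myocardial volume perfused by an artery of the given cross-sectional area."""
    return k * artery_area_mm2

# e.g. a hypothetical 2.0 mm^2 proximal coronary artery
print(round(perfused_volume_cm3(2.0), 3))  # -> 0.3 (cm^3)
```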

  7. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e., works of art produced using computers, have been published for hobby and entertainment purposes. It is said that activation of the brain, improvement of visual acuity, decrease of mental stress, a healing effect, etc. can be expected when a CGS is properly appreciated as a stereoscopic view. There is a great deal of information on internet web sites concerning all aspects of stereograms: their history, science, social organization, various types, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewed directly with two eyes, which requires training and a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, also called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.

  8. Estimation of water saturated permeability of soils, using 3D soil tomographic images and pore-level transport phenomena modelling

    NASA Astrophysics Data System (ADS)

    Lamorski, Krzysztof; Sławiński, Cezary; Barna, Gyöngyi

    2014-05-01

    There are some important macroscopic properties of soil porous media, such as saturated permeability and water retention characteristics. These soil characteristics are very important, as they determine soil transport processes and are commonly used as parameters of general models of soil transport processes, which are used extensively in scientific development and engineering practice. These characteristics are usually measured or estimated using statistical or phenomenological modelling, i.e., pedotransfer functions. Physically, saturated soil permeability arises from transport processes occurring at the pore level. Current progress in modelling techniques, computational methods and X-ray micro-tomographic technology gives the opportunity to use direct methods of physical modelling for pore-level transport processes. A physically valid description of transport processes at the micro-scale, based on a Navier-Stokes type modelling approach, makes it possible to recover macroscopic porous medium characteristics from micro-flow modelling. Water micro-flow transport processes occurring at the pore level depend on the microstructure of the porous body and the interactions between the fluid and the medium. In soils there exist relatively big pores in which water can move easily, but finer pores are also present in which water transport processes are dominated by strong interactions between the medium and the fluid; a full physical description of these phenomena remains a challenge. Ten samples of different soils were scanned using an X-ray computational microtomograph. The diameter of the samples was 5 mm. The voxel resolution of the CT scan was 2.5 µm. The resulting 3D soil sample images were used for reconstruction of the pore space for further modelling. 3D image thresholding was performed to determine the soil grain surface. This surface was triangulated and used for computational mesh construction for the pore space. Numerical modelling of water flow through the

  9. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys, particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor-intensive with single-channel systems, and therefore such surveys are often performed at only a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process these data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall
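    The CMP velocity-to-water-content chain mentioned above is commonly implemented with the low-loss velocity-permittivity relation followed by the empirical Topp et al. (1980) equation. The abstract does not name a specific petrophysical model, so the version below is an assumed, standard one, with a hypothetical velocity value:

```python
# Assumed standard relations: low-loss GPR velocity -> relative permittivity,
# then Topp et al. (1980) -> volumetric water content. Velocity is hypothetical.
C = 0.2998  # speed of light in vacuum, m/ns

def permittivity_from_velocity(v_m_per_ns):
    """Relative dielectric permittivity from GPR wave velocity (low-loss soil)."""
    return (C / v_m_per_ns) ** 2

def topp_water_content(eps):
    """Volumetric water content from permittivity (Topp et al., 1980)."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

v = 0.06  # m/ns, a typical wet-soil GPR velocity
eps = permittivity_from_velocity(v)
theta = topp_water_content(eps)
print(round(eps, 1), round(theta, 3))
```

    For this assumed velocity the permittivity comes out near 25 and the volumetric water content around 0.4, i.e., a wet soil; a multi-offset profile simply repeats this computation at every CMP location along the transect.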

  10. Estimation of bisphenol A-Human toxicity by 3D cell culture arrays, high throughput alternatives to animal tests.

    PubMed

    Lee, Dong Woo; Oh, Woo-Yeon; Yi, Sang Hyun; Ku, Bosung; Lee, Moo-Yeal; Cho, Yoon Hee; Yang, Mihi

    2016-09-30

    Bisphenol A (BPA) has been widely used for manufacturing polycarbonate plastics and epoxy resins and has been extensively tested in animals to predict human toxicity. In order to reduce the use of animals for toxicity assessment and provide more accurate information on BPA toxicity in humans, we encapsulated Hep3B human hepatoma cells in alginate and cultured them in three dimensions (3D) on a micropillar chip coupled to a panel of metabolic enzymes on a microwell chip. As a result, we were able to assess the toxicity of BPA under various metabolic enzyme conditions using a high-throughput micro-assay; sample volumes were nearly 2,000 times smaller than those required for a 96-well plate. We applied a total of 28 different enzymes to each chip, including 10 cytochrome P450s (CYP450s), 10 UDP-glycosyltransferases (UGTs), 3 sulfotransferases (SULTs), alcohol dehydrogenase (ADH), and aldehyde dehydrogenase 2 (ALDH2). Phase I enzyme mixtures, phase II enzyme mixtures, and a combination of phase I and phase II enzymes were also applied to the chip. BPA toxicity was higher in samples containing CYP2E1 than in enzyme-free controls (IC50, 184 ± 16 μM and 270 ± 25.8 μM, respectively, p < 0.01). However, BPA-induced toxicity was alleviated in the presence of ADH (IC50, 337 ± 17.9 μM), ALDH2 (335 ± 13.9 μM), and SULT1E1 (318 ± 17.7 μM) (p < 0.05). CYP2E1-mediated cytotoxicity was confirmed by quantifying unmetabolized BPA using HPLC/FD. We therefore suggest the present micropillar/microwell chip platform as an effective alternative to animal testing for estimating BPA toxicity via human metabolic systems. PMID:27491884

  12. Corrections to traditional methods of verifying tangential-breast 3D monitor-unit calculations: use of an equivalent triangle to estimate effective fields.

    PubMed

    Prado, Karl L; Kirsner, Steven M; Erice, Rolly C

    2003-01-01

    This paper describes an innovative method for correctly estimating the effective field size of tangential-breast fields. The method uses an "equivalent triangle" to verify intact breast tangential field monitor-unit settings calculated by a 3D planning system to within 2%. The effects on verification calculations of loss of full scatter due to beam oblique incidence, proximity to field boundaries, and reduced scattering volumes are handled properly. The methodology is validated by comparing calculations performed by the 3D planning system with the respective verification estimates. The accuracy of this technique is established for dose calculations both with and without heterogeneity corrections.
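    The abstract does not reproduce the equivalent-triangle formula itself, so as a related illustration here is the standard equivalent-square (Sterling) approximation for a rectangular field, the usual building block of effective-field estimates in monitor-unit checks; the paper's triangle construction refines this idea for tangential breast geometries:

```python
# Illustrative only: Sterling's equivalent-square rule for a rectangular
# field. This is NOT the paper's equivalent-triangle method, which the
# abstract does not spell out.

def equivalent_square_side(length_cm, width_cm):
    """Side of the square field that scatters approximately like L x W."""
    return 2.0 * length_cm * width_cm / (length_cm + width_cm)

print(round(equivalent_square_side(20.0, 10.0), 2))  # a 20 x 10 cm field
```

    A square field is its own equivalent square, and elongated fields map to squares noticeably smaller than their long side, which is what captures the reduced scatter of narrow tangential fields.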

  13. Estimating 3D variation in active-layer thickness beneath arctic streams using ground-penetrating radar

    USGS Publications Warehouse

    Brosten, T.R.; Bradford, J.H.; McNamara, J.P.; Gooseff, M.N.; Zarnetske, J.P.; Bowden, W.B.; Johnston, M.E.

    2009-01-01

    We acquired three-dimensional (3D) ground-penetrating radar (GPR) data across three stream sites on the North Slope, AK, in August 2005, to investigate the dependence of thaw depth on channel morphology. Data were migrated with mean velocities derived from multi-offset GPR profiles collected across a stream section within each of the 3D survey areas. GPR data interpretations from the alluvial-lined stream site illustrate greater thaw depths beneath riffle and gravel bar features relative to neighboring pool features. The peat-lined stream sites indicate the opposite; greater thaw depths beneath pools and shallower thaw beneath the connecting runs. Results provide detailed 3D geometry of active-layer thaw depths that can support hydrological studies seeking to quantify transport and biogeochemical processes that occur within the hyporheic zone.
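    Interpreting migrated GPR reflections as thaw depths rests on the basic two-way travel time conversion; a minimal sketch, with hypothetical values rather than the survey's:

```python
# Hypothetical values, not the survey's: reflector depth from two-way
# travel time (TWT) and the migration velocity.

def depth_from_twt(twt_ns, v_m_per_ns):
    """Depth = v * t / 2, since the wave travels down and back."""
    return v_m_per_ns * twt_ns / 2.0

# 20 ns TWT at 0.06 m/ns (a typical saturated-sediment velocity)
print(round(depth_from_twt(20.0, 0.06), 3))  # -> 0.6 (metres)
```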

  14. Phase-Accuracy Comparisons and Improved Far-Field Estimates for 3-D Edge Elements on Tetrahedral Meshes

    NASA Astrophysics Data System (ADS)

    Monk, Peter; Parrott, Kevin

    2001-07-01

    Edge-element methods have proved very effective for 3-D electromagnetic computations and are widely used on unstructured meshes. However, the accuracy of standard edge elements can be criticised because of their low order. This paper analyses discrete dispersion relations together with numerical propagation accuracy to determine the effect of tetrahedral shape on the phase accuracy of standard 3-D edge-element approximations in comparison to other methods. Scattering computations for the sphere obtained with edge elements are compared with results obtained with vertex elements, and a new formulation of the far-field integral approximations for use with edge elements is shown to give improved cross sections over conventional formulations.

  15. Mechanistic and quantitative studies of bystander response in 3D tissues for low-dose radiation risk estimations

    SciTech Connect

    Amundson, Sally A.

    2013-06-12

    We have used the MatTek 3-dimensional human skin model to study the gene expression response of a 3D model to low- and high-dose low-LET radiation, and to study the radiation bystander effect as a function of distance from the site of irradiation with either alpha particles or low-LET protons. We have found response pathways that appear to be specific to low-dose exposures and that could not have been predicted from high-dose studies. We also report the time- and distance-dependent expression of a large number of genes in bystander tissue. The bystander response in 3D tissues showed many similarities to that described previously in 2D cultured cells, but also showed some differences.

  16. Simultaneous image segmentation and medial structure estimation: application to 2D and 3D vessel tree extraction

    NASA Astrophysics Data System (ADS)

    Makram-Ebeid, Sherif; Stawiaski, Jean; Pizaine, Guillaume

    2011-03-01

    We propose a variational approach which combines automatic segmentation and medial structure extraction in a single computationally efficient algorithm. In this paper, we apply our approach to the analysis of vessels in 2D X-ray angiography and 3D X-ray rotational angiography of the brain. Other variational methods proposed in the literature encode the medial structure of vessel trees as a skeleton with associated vessel radii. In contrast, our method provides a dense smooth level set map whose sign provides the segmentation. The ridges of this map define the skeleton of the segmented regions. The differential structure of the smooth map (in particular the Hessian) allows discrimination between tubular and other structures. In 3D, both circular and non-circular tubular cross-sections and tubular branchings can be handled conveniently. This algorithm allows accurate segmentation of complex vessel structures. It also provides key tools for extracting anatomically labeled vessel tree graphs and for dealing with challenging issues like kissing-vessel discrimination and separation of entangled 3D vessel trees.

  17. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the summer of 2011. As part of the campaign, three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: the Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to southern Florida and thereby acquired data over forests ranging from boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  18. Automatic 3D motion estimation of left ventricle from C-arm rotational angiocardiography using a prior motion model and learning based boundary detector.

    PubMed

    Chen, Mingqing; Zheng, Yefeng; Wang, Yang; Mueller, Kerstin; Lauritsch, Guenter

    2013-01-01

    Compared to pre-operative imaging modalities, it is more convenient to estimate the current cardiac physiological status from C-arm angiocardiography, since the C-arm is a widely used intra-operative imaging modality to guide many cardiac interventions. The 3D shape and motion of the left ventricle (LV) estimated from rotational angiocardiography provide important cardiac function measurements, e.g., ejection fraction and myocardium motion dyssynchrony. However, automatic estimation of the 3D LV motion is difficult since all anatomical structures overlap on the 2D X-ray projections and the nearby confounding strong image boundaries (e.g., pericardium) often cause ambiguities in LV endocardium boundary detection. In this paper, a new framework is proposed to overcome the aforementioned difficulties: (1) A new learning-based boundary detector is developed by training a boosting boundary classifier combined with the principal component analysis of a local image patch; (2) The prior LV motion model is learned from a set of dynamic cardiac computed tomography (CT) sequences to provide a good initial estimate of the 3D LV shape at different cardiac phases; (3) The 3D motion trajectory is learned for each mesh point; (4) All these components are integrated into a multi-surface graph optimization method to extract the globally coherent motion. The method is tested on seven patient scans, showing significant improvement on the ambiguous boundary cases with a detection accuracy of 2.87 ± 1.00 mm on LV endocardium boundary delineation in the 2D projections. PMID:24505748

  20. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01

    It is well known that local seismic site effects may contribute significantly to the intensity of damage and destruction (e.g., Hough et al., 1990; Regnier et al., 2000; Bonnefoy-Claudet et al., 2006; Haase et al., 2010). The thicknesses of sediments, which play a large role in amplification, are usually derived from seismic velocities. At the same time, the thickness of sediments may be determined on the basis of 3D combined gravity-magnetic modeling joined with available geological materials, seismic data and borehole section examination. The final result of such an investigation is a 3D physical-geological model (PGM) reflecting the main geological peculiarities of the area under study. Such a combined study requires a reliable 3D mathematical computation algorithm together with an advanced 3D modeling methodology. For this analysis the GSFC software was selected. The GSFC (Geological Space Field Calculation) program was developed for solving the direct 3-D gravity and magnetic prospecting problem under complex geological conditions (Khesin et al., 1996; Eppelbaum and Khesin, 2004). This program has been designed for computing the field of Δg (Bouguer, free-air or observed value anomalies), ΔZ, ΔX, ΔY, ΔT, as well as second derivatives of the gravitational potential under conditions of rugged relief and inclined magnetization. The geological space can be approximated by (1) three-dimensional bodies, (2) semi-infinite bodies, and (3) bodies infinite along the strike (closed, L.H. non-closed, R.H. non-closed and open). Geological bodies are approximated by horizontal polygonal prisms. The program has the following main advantages (besides the abovementioned ones): (1) Simultaneous computing of gravity and magnetic fields; (2) Description of the terrain relief by irregularly placed characteristic points; (3) Computation of the effect of the earth-air boundary by the method of selection directly in the process of interpretation; (4

  1. The 2D versus 3D imaging trade-off: The impact of over- or under-estimating small throats for simulating permeability in porous media

    NASA Astrophysics Data System (ADS)

    Peters, C. A.; Crandell, L. E.; Um, W.; Jones, K. W.; Lindquist, W. B.

    2011-12-01

    Geochemical reactions in the subsurface can alter the porosity and permeability of a porous medium through mineral precipitation and dissolution. While effects on porosity are relatively well understood, changes in permeability are more difficult to estimate. In this work, pore-network modeling is used to estimate the permeability of a porous medium from pore and throat size distributions. These distributions can be determined from 2D Scanning Electron Microscopy (SEM) images of thin sections or from 3D X-ray Computed Tomography (CT) images of small cores. Each method has unique advantages as well as unique sources of error. 3D CT imaging has the advantage of reconstructing a 3D pore network without the inherent geometry-based biases of 2D images, but is limited to resolutions around 1 μm. 2D SEM imaging has the advantage of higher resolution, and the ability to examine sub-grain scale variations in porosity and mineralogy, but is limited by the small size of the sample of pores that are quantified. A pore-network model was created to estimate flow permeability in a sand-packed experimental column used to investigate the reaction of sediments with caustic radioactive tank wastes in the context of the Hanford, WA site. Before, periodically during, and after reaction, 3D images of the porous medium in the column were produced using the X2B beam line facility at the National Synchrotron Light Source (NSLS) at Brookhaven National Lab. These images were interpreted using 3DMA-Rock to characterize the pore and throat size distributions. After completion of the experiment, the column was sectioned and imaged using 2D SEM in backscattered electron mode. The 2D images were interpreted using erosion-dilation to estimate the pore and throat size distributions. A bias correction was determined by comparison with the 3D image data. A special image processing method was developed to infer the pore space before reaction by digitally removing the precipitate. The different sets of pore
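    A hedged sketch of how throat-size distributions feed a permeability estimate: pore-network models solve flow on the full connected network, but the simpler bundle-of-capillary-tubes approximation below (hypothetical radii and porosity) already shows the strong sensitivity of permeability to throat radius that makes over- or under-estimating small throats matter:

```python
# Simplified bundle-of-capillary-tubes estimate (hypothetical radii and
# porosity); real pore-network models solve flow on the connected network.

def bundle_permeability_m2(throat_radii_m, porosity):
    """k = porosity * <r^2> / 8 for parallel capillary tubes (SI units)."""
    mean_r2 = sum(r * r for r in throat_radii_m) / len(throat_radii_m)
    return porosity * mean_r2 / 8.0

radii = [5e-6, 8e-6, 12e-6, 20e-6]   # throat radii in metres (5-20 um)
k = bundle_permeability_m2(radii, porosity=0.35)
print(f"{k:.2e} m^2")
```

    Because permeability scales with the square of throat radius here (and with the fourth power in a Hagen-Poiseuille network conductance), resolution-induced biases in the smallest throats propagate strongly into the permeability estimate.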

  2. Simultaneous estimation of size, radial and angular locations of a malignant tumor in a 3-D human breast - A numerical study.

    PubMed

    Das, Koushik; Mishra, Subhash C

    2015-08-01

    This article reports a numerical study pertaining to the simultaneous estimation of the size, radial location and angular location of a malignant tumor in a 3-D human breast. The breast skin surface temperature profile is specific to a tumor of a given size and location. The temperature profiles are always Gaussian, though their peak magnitudes and areas differ according to the size and location of the tumor. The temperature profiles are obtained by solving the Pennes bioheat equation using the finite element method based solver COMSOL 4.3a. With the temperature profiles known, the size, radial location and angular location of the tumor are estimated simultaneously using the curve fitting method. The effect of measurement errors is also included in the study. The estimates are accurate, and because the curve fitting method does not require solution of the governing bioheat equation in the inverse analysis, the estimation is very fast. PMID:26267509
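
    Because the reported skin-temperature profiles are Gaussian, their defining parameters can be recovered by simple curve fitting. The sketch below is not the authors' implementation (the profile values are made up); it exploits the fact that the logarithm of a Gaussian is a parabola, so `np.polyfit` suffices:

```python
import numpy as np

def fit_gaussian_profile(x, T, T_ambient):
    """Fit T(x) = T_ambient + A*exp(-(x-mu)^2/(2*sigma^2)) by
    fitting a parabola to log(T - T_ambient)."""
    y = np.log(T - T_ambient)          # log of a Gaussian is a parabola
    c2, c1, c0 = np.polyfit(x, y, 2)   # y = c2*x^2 + c1*x + c0
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = c1 * sigma**2
    A = np.exp(c0 + mu**2 / (2.0 * sigma**2))
    return A, mu, sigma

# Synthetic profile: 0.8 K peak rise centered at x = 5 mm, sigma = 3 mm
x = np.linspace(-5.0, 15.0, 41)
T = 33.0 + 0.8 * np.exp(-(x - 5.0)**2 / (2.0 * 3.0**2))
A, mu, sigma = fit_gaussian_profile(x, T, 33.0)
```

    The fitted amplitude, center and width would then map to tumor size and location through the forward-model relationships established by the bioheat simulations.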

  3. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited when the user is wearing glasses because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors because the gaze position is estimated from the location of the pupil center and the positions of the corneal SRs. To overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: first, we construct a new control device for the illuminator, which includes four illuminators positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses in the initial stage by counting the number of white pixels in an image captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image with a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated from the parallelogram shape defined by the three SR positions, and the gaze position is then calculated. Experimental results showed that the average gaze detection error across 20 participants was about 0.70°, and the processing time was 63.72 ms per frame. PMID:24473283
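
    The parallelogram completion used in the fourth step is a one-line vector identity: the missing corner equals the sum of its two neighbors minus the opposite corner. A minimal sketch (the pixel coordinates are hypothetical, and which detected SR sits opposite the missing one is an assumption):

```python
import numpy as np

def estimate_missing_sr(sr_a, sr_b, sr_c):
    """Given three corneal specular reflections (image coordinates),
    return the fourth corner of the parallelogram, assuming sr_b is
    the corner adjacent to both sr_a and sr_c (i.e. opposite the
    missing SR)."""
    return np.asarray(sr_a) + np.asarray(sr_c) - np.asarray(sr_b)

# Hypothetical pixel coordinates of the three detected SRs
sr4 = estimate_missing_sr([100, 120], [140, 118], [142, 158])
# -> array([102, 160])
```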

  4. Factors contributing to accuracy in the estimation of the woody canopy leaf area density profile using 3D portable lidar imaging.

    PubMed

    Hosoi, Fumiki; Omasa, Kenji

    2007-01-01

    Factors that contribute to the accuracy of estimating a woody canopy's leaf area density (LAD) profile using 3D portable lidar imaging were investigated. The 3D point cloud data for a Japanese zelkova canopy [Zelkova serrata (Thunberg) Makino] were collected using a portable scanning lidar from several points established on the ground and at 10 m above the ground. The LAD profiles were computed using voxel-based canopy profiling (VCP). The best LAD results [a root-mean-square error (RMSE) of 0.21 m(2) m(-3)] for the measurement plot (corresponding to an absolute LAI error of 9.5%) were obtained by compositing the ground-level and 10 m measurements. The factors that most strongly affected estimation accuracy included the presence of non-photosynthetic tissues, the distribution of leaf inclination angles, the number (N) of incident laser beams in each region within the canopy, and G(theta(m)) (the mean projection of a unit leaf area on a plane perpendicular to the direction of the laser beam at the measurement zenith angle theta(m)). The influences of non-photosynthetic tissues and leaf inclination angle on the estimates amounted to 4.2-32.7% and 7.2-94.2%, respectively. The RMSE of the LAD estimations was expressed as a function of N and G(theta(m)). PMID:17977852
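
    In voxel-based canopy profiling, the LAD of each horizontal layer is derived from the laser "contact frequency" (the fraction of incident beams intercepted), corrected by the leaf projection function G(theta). The sketch below is a rough illustration under a spherical leaf-angle assumption (G = 0.5); the exact correction factor and voxel bookkeeping in the paper may differ:

```python
import numpy as np

def lad_profile(n_intercepted, n_passed, dh, theta_deg, G=0.5):
    """Per-layer LAD sketch: contact frequency N_I/(N_I+N_P) in each
    horizontal layer, corrected by cos(theta)/G(theta) and divided by
    the layer thickness dh (m). G = 0.5 assumes a spherical leaf angle
    distribution; the abstract notes this term drives estimation error."""
    n_i = np.asarray(n_intercepted, dtype=float)
    n_p = np.asarray(n_passed, dtype=float)
    contact = n_i / (n_i + n_p)
    return (np.cos(np.radians(theta_deg)) / G) * contact / dh

# One layer, 0.5 m thick, 10 of 100 beams intercepted at nadir
lad = lad_profile([10], [90], dh=0.5, theta_deg=0.0)
```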

  5. ESTIMATION OF 3D ALIGNMENT OF EXPRESSWAYS USING CAD DRAWINGS AND GEOMETRY DATA AND VERIFICATION OF THE RESULTS

    NASA Astrophysics Data System (ADS)

    Yamada, Harutoshi; Sekimoto, Yoshihide; Matsubayashi, Yutaka

    Detailed road alignment data are now required to realize lane departure/curve speed warning services and other advanced ITS services, and to reduce greenhouse gas emissions from automobiles. However, the provision of detailed road alignment data is slow in Japan, partly because of the considerable cost of collecting and updating these data. In this paper, three-dimensional road alignments, especially lane data, were estimated using CAD drawings and the centerline alignment of expressways. Vertical curve length (VCL) data did not exist, so VCLs were determined to meet the specifications of the Road Alignment Ordinance. The estimated three-dimensional alignment data were compared with the five-meter-mesh DEM data of the GSI. The difference was found to be less than 1 m in most cases, except in specific vertical curve sections where the freedom in determining the VCL is greater.

  6. 3d morphometric analysis of lunar impact craters: a tool for degradation estimates and interpretation of maria stratigraphy

    NASA Astrophysics Data System (ADS)

    Vivaldi, Valerio; Massironi, Matteo; Ninfo, Andrea; Cremonese, Gabriele

    2015-04-01

    In this study we have applied 3D morphometric analysis to impact craters on the Moon by means of high resolution DTMs derived from LROC (Lunar Reconnaissance Orbiter Camera) NAC (Narrow Angle Camera) images (0.5 to 1.5 m/pixel). The objective is twofold: i) evaluating crater degradation and ii) exploring the potential of this approach for Maria stratigraphic interpretation. For the first objective we have considered several craters with different diameters representative of the four classes of degradation, with C1 being the freshest and C4 the most degraded (Arthur et al., 1963; Wilhelms, 1987). DTMs of these craters were elaborated according to a multiscalar approach (Wood, 1996) by testing different ranges of kernel sizes (e.g. 15-35-50-75-100) in order to retrieve morphometric variables such as slope, curvatures and openness. In particular, curvatures were calculated along different planes (e.g. profile curvature and plan curvature) and used to characterize the different sectors of a crater (rim crest, floor, internal slope and related boundaries), enabling us to evaluate its degradation. The gradient of the internal slope of craters representative of the four classes shows a decrease of the mean slope value from C1 to C4 in relation to crater age and diameter. Indeed, degradation is influenced by gravitational processes (landslides, dry flows) as well as space weathering, which induces both smoothing of the morphologies and infilling within the crater, with the main results being lowering and enlargement of the rim crest and shallowing of the crater depth. As far as the stratigraphic application is concerned, morphometric analysis was applied to recognize morphologic features within some simple craters, in order to understand the stratigraphic relationships among different lava layers within Mare Serenitatis. A clear-cut rheological boundary at a depth of 200 m within the small fresh Linnè crater (diameter: 2.22 km), firstly hypothesized

  7. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

    Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.
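
    Ignoring equilibrium details, the link between filament spacing, pitch angle and n can be sketched with simple circular-torus geometry: the n filaments of an n-mode are separated toroidally by 2πR/n, and the field-line pitch maps that separation to a poloidal spacing. This is only a geometric caricature of the paper's equilibrium-based technique, with made-up numbers:

```python
import math

def toroidal_mode_number(R_m, delta_pol_m, alpha_star_deg):
    """Circular cross-section sketch: adjacent filaments are 2*pi*R/n
    apart toroidally; the field-line pitch angle alpha* maps this to a
    poloidal spacing delta_pol = (2*pi*R/n)*tan(alpha*), hence
    n = 2*pi*R*tan(alpha*)/delta_pol."""
    return (2.0 * math.pi * R_m
            * math.tan(math.radians(alpha_star_deg)) / delta_pol_m)

# Hypothetical KSTAR-like numbers: R = 1.8 m, 12 deg pitch, 0.30 m spacing
n = toroidal_mode_number(1.8, 0.30, 12.0)  # rounds to n = 8
```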

  8. Virtual forensic entomology: improving estimates of minimum post-mortem interval with 3D micro-computed tomography.

    PubMed

    Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina

    2012-07-10

    We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarters of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. To find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5M aqueous iodine for 1 day; batch two for 7 days; batch three was tagged with a radiopaque dye; batch four was left unstained (control). Staining in iodine for 7 days produced the best-contrast micro-CT scans. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than external morphological characters alone. Age-specific developmental characters are described. The technique could be used to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stage of insect evidence collected.

  9. A matrix projection method for on line stable estimation of 1D and 3D shear building models

    NASA Astrophysics Data System (ADS)

    Angel García-Illescas, Miguel; Alvarez-Icaza, Luis

    2016-12-01

    An estimation method is presented that combines recursive least squares, a matrix parameterized model, Gershgorin circles and properties of tridiagonal matrices to allow the identification of stable shear building models in the presence of low excitation or low damping. The resultant scheme yields a significant reduction in the number of calculations involved when compared with standard vector parameterization based schemes. As real buildings are always open loop stable, the use of a stable shear building model for vibration control purposes allows the design of more robust control laws. Extensive simulation results are presented for cases of low excitation, comparing the results of using or not using this matrix projection method with different sets of initial conditions. Results indicate that the use of this projection method does not influence the recovery of natural frequencies; however, it significantly improves the recovery of mode shapes.
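
    The matrix-parameterized scheme builds on the standard recursive least squares update. The sketch below shows only the generic vector-form RLS step for y = phi^T theta (not the authors' matrix parameterization, Gershgorin-circle projection, or tridiagonal structure):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for y = phi^T theta + noise:
    gain, parameter update, covariance update (forgetting factor lam)."""
    phi = np.asarray(phi, dtype=float).reshape(-1, 1)
    k = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
    theta = theta + k.ravel() * (y - phi.ravel() @ theta)
    P = (P - k @ phi.T @ P) / lam                    # covariance update
    return theta, P

# Identify a 2-parameter model from noiseless synthetic data
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -0.5])
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, phi @ true_theta)
```

    With persistent excitation the estimate converges to the true parameters; the paper's contribution is projecting each such update onto the set of stable shear-building stiffness matrices.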

  10. A Satellite-Based Method for Estimating Global Oceanic DMS and Its Application in a 3-D Atmospheric GCM

    SciTech Connect

    Belviso, S.; Moulin, C.; Bopp, L.; Cosme, E.; Chapman, Elaine G.; Aranami, K.

    2003-01-01

    The flux of dimethylsulfide (DMS) from the world's oceans is the largest known source of biogenically-derived reduced sulfur compounds to the atmosphere. Its impact on atmospheric chemistry and radiative transfer is an active area of scientific research, and DMS is routinely included in three-dimensional global climate change and chemical transport models. In such models, DMS fluxes typically are based on global sea surface DMS concentrations and wind-speed-dependent parameterizations of the mass transfer coefficient. We show here how sea surface DMS concentrations can be estimated from satellite-based Sea-viewing Wide Field-of-View Sensor (SeaWiFS) observations of sea surface chlorophyll a. We compare SeaWiFS-derived DMS concentrations for the twelve month period November 1997 through October 1998 with shipboard measurements made in the Pacific and Indian Oceans. The SeaWiFS-derived DMS distributions demonstrate improved capture of DMS spatial variability in Southern Ocean surface waters relative to previous works, but underestimate the amplitude of seasonal DMS variations in this region. Using the three-dimensional Atmospheric General Circulation Model of the Laboratoire de Météorologie Dynamique, model-time-step wind speeds, an atmospheric-stability-dependent parameterization of the mass transfer coefficient, and our SeaWiFS-derived oceanic DMS distributions, we estimate an annual Southern Ocean DMS emission of 6.8 Tg S yr-1. This value represents approximately one-third of the annual global DMS marine emission, and underscores the importance of this region as a source of natural sulfur emissions.
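
    Flux parameterizations of this kind multiply the sea-surface concentration by a wind-speed-dependent transfer velocity. The sketch below uses the widely cited quadratic Wanninkhof (1992) form with an assumed DMS Schmidt number, which is not the atmospheric-stability-dependent parameterization used in this study:

```python
import math

def dms_flux_umol_m2_d(u10_m_s, dms_nM, schmidt=2674.0):
    """Sea-to-air DMS flux sketch using the quadratic wind-speed
    transfer velocity k = 0.31*u10^2*(Sc/660)^-0.5 (cm/h), ignoring
    the small atmospheric DMS concentration. The Schmidt number
    ~2674 (DMS near 20 C) is an assumption, not from the paper."""
    k_cm_h = 0.31 * u10_m_s**2 * math.sqrt(660.0 / schmidt)
    k_m_d = k_cm_h * 0.01 * 24.0              # cm/h -> m/day
    # nmol/L == umol/m^3, so k [m/d] * C gives umol m^-2 d^-1
    return k_m_d * dms_nM

flux = dms_flux_umol_m2_d(u10_m_s=8.0, dms_nM=2.5)
```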

  11. 3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study

    NASA Astrophysics Data System (ADS)

    Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.

    2015-03-01

    Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using 99mTechnetium (Tc)-macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and of the radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may allow for the ability to assess the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at a voxel level for a three-dimensional dose distribution. This method will allow for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The method provides a quantitative predictive method for SIRT treatment outcome and administered dose response for patients who undergo the treatment.
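
    The dose point kernel method computes each voxel's absorbed dose as a 3-D convolution of the cumulated activity map with a radially symmetric kernel. A brute-force toy sketch (the uniform cubic kernel is made up for illustration, not a real 90Y kernel; real images would use FFT-based convolution):

```python
import numpy as np

def dose_from_activity(activity, kernel):
    """Voxel-level absorbed dose as the 3-D convolution of the activity
    map with a dose point kernel (same-size output). Brute-force loops
    are fine for a toy grid."""
    kx, ky, kz = (s // 2 for s in kernel.shape)
    padded = np.pad(activity, ((kx, kx), (ky, ky), (kz, kz)))
    dose = np.zeros_like(activity, dtype=float)
    flipped = kernel[::-1, ::-1, ::-1]   # convolution = flipped correlation
    for i in np.ndindex(activity.shape):
        window = padded[i[0]:i[0] + kernel.shape[0],
                        i[1]:i[1] + kernel.shape[1],
                        i[2]:i[2] + kernel.shape[2]]
        dose[i] = np.sum(window * flipped)
    return dose

# Toy example: one hot voxel spreads dose according to the kernel shape
activity = np.zeros((5, 5, 5)); activity[2, 2, 2] = 1.0
kernel = np.ones((3, 3, 3)) / 27.0   # hypothetical uniform kernel
dose = dose_from_activity(activity, kernel)
```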

  12. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. PMID:26795123
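
    Estimating the GRF from full-body segment accelerations follows directly from Newton's second law: gravity and the GRF are the only external forces, so GRF = sum_i m_i (a_i - g). A minimal sketch with a hypothetical three-segment model (not the Xsens/MVN segment set):

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration, z up

def ground_reaction_force(masses, accelerations):
    """Whole-body Newton's second law: GRF = sum_i m_i * (a_i - g),
    summing over all body segments (masses in kg, accelerations in
    m/s^2 as N x 3 arrays)."""
    m = np.asarray(masses)[:, None]
    a = np.asarray(accelerations)
    return np.sum(m * (a - G), axis=0)

# Quiet standing (all segment accelerations zero) for a 75 kg body:
masses = [10.0, 30.0, 35.0]               # hypothetical 3-segment model
acc = np.zeros((3, 3))
grf = ground_reaction_force(masses, acc)  # -> [0, 0, 735.75] N
```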

  14. Influence of center of pressure estimation errors on 3D inverse dynamics solutions during gait at different velocities.

    PubMed

    Camargo-Junior, Franklin; Ackermann, Marko; Loss, Jefferson F; Sacco, Isabel C N

    2013-12-01

    The aim of this study was to investigate the effect of errors in the location of the center of pressure (5 and 10 mm) on lower limb joint moment uncertainties at different gait velocities (1.0, 1.5, and 2.0 m/s). Our hypotheses were that the absolute joint moment uncertainties would be gradually reduced from distal to proximal joints and from higher to lower velocities. Joint moments of five healthy young adults were calculated by inverse dynamics using the bottom-up approach, and the center of pressure errors were propagated through this calculation. Results indicated that there is a linear relationship between errors in the center of pressure and joint moment uncertainties. The absolute moment peak uncertainties expressed in the anatomical reference frames decreased from distal to proximal joints, confirming our first hypothesis, except for the abduction moments. There was an increase in moment uncertainty (up to 0.04 N m/kg for the 10 mm error in the center of pressure) from the lower to the higher gait velocity, confirming our second hypothesis, although, once again, not for hip or knee abduction. Finally, depending on the plane of movement and the joint, relative uncertainties varied between 5% and 31%, and the knee joint moments were the most affected.
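
    The linear relationship reported here follows from the cross-product form of the moment arm: a CoP shift delta_r perturbs the moment by delta_r x F, so the uncertainty scales linearly with both the CoP error and the GRF. A small sketch with made-up force numbers:

```python
import numpy as np

def moment_uncertainty(grf, cop_error):
    """Moment error induced by a CoP location error:
    delta_M = delta_r x F (N m), linear in both |delta_r| and |F|."""
    return np.cross(cop_error, grf)

grf = np.array([0.0, 0.0, 800.0])                     # N, vertical
err_5mm = moment_uncertainty(grf, [0.005, 0.0, 0.0])  # 5 mm CoP error
err_10mm = moment_uncertainty(grf, [0.010, 0.0, 0.0])
# doubling the CoP error exactly doubles the moment uncertainty
```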

  15. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution in a similar way that Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, in e.g. a PC in real time. In order to obtain high resolution and quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig or, in the case of a moving camera, the scene itself can be used for calibration of most of the parameters. After calibration, an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper presents results from high resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
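
    Passive stereo ranging rests on the rectified-pair relation Z = f*B/d. A one-function sketch with made-up rig parameters (not the system described in the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo pair: depth Z = f * B / d, with f in pixels,
    baseline B in meters, disparity d in pixels. Depth error grows
    quadratically with range, which is why careful camera calibration
    matters for quantitative 3D mapping."""
    return focal_px * baseline_m / disparity_px

# Hypothetical airborne rig: 5000 px focal length, 1.2 m baseline
z = depth_from_disparity(5000.0, 1.2, disparity_px=4.0)  # -> 1500.0 m
```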

  16. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated the processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Large differences between the three techniques were found for the estimated volumes of the liver findings. 3D ultrasound represents a valuable method for judging morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible.

  17. Eye Gaze Tracking using Correlation Filters

    SciTech Connect

    Karakaya, Mahmut; Boehnen, Chris Bensing; Bolme, David S

    2014-01-01

    In this paper, we study a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within an angle of approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.

  18. Eye gaze tracking using correlation filters

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Bolme, David; Boehnen, Chris

    2014-03-01

    In this paper, we study a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within an angle of approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.

  19. Three-dimensional (3D) coseismic deformation map produced by the 2014 South Napa Earthquake estimated and modeled by SAR and GPS data integration

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Albano, Matteo; Fernández, José; Palano, Mimmo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2016-04-01

    In this work we present a 3D map of the coseismic displacements due to the 2014 Mw 6.0 South Napa earthquake, California, obtained by integrating displacement information from SAR Interferometry (InSAR), Multiple Aperture Interferometry (MAI), Pixel Offset Tracking (POT) and GPS data acquired by both permanent stations and campaign sites. This seismic event produced significant surface deformation along all 3D components, causing damage to vineyards, roads and houses. The remote sensing results, i.e. InSAR, MAI and POT, were obtained from the pair of SAR images provided by the Sentinel-1 satellite, launched on April 3rd, 2014. They were acquired on August 7th and 31st along descending orbits with an incidence angle of about 23°. The GPS dataset includes measurements from 32 stations belonging to the Bay Area Regional Deformation Network (BARDN), 301 continuous stations available from the UNAVCO and CDDIS archives, and 13 additional campaign sites from Barnhart et al., 2014 [1]. These data constrain the horizontal and vertical displacement components and proved helpful for the adopted integration method. We exploit Bayesian theory to search for the 3D coseismic displacement components. In particular, for each point we construct an energy function and solve the problem of finding a global minimum. Experimental results are consistent with a strike-slip fault mechanism with an approximately NW-SE fault plane. Indeed, the 3D displacement map shows a strong North-South (NS) component, peaking at about 15 cm a few kilometers from the epicenter. The East-West (EW) displacement component reaches its maximum (~10 cm) south of the city of Napa, whereas the vertical one (UP) is smaller, although a subsidence on the order of 8 cm on the east side of the fault can be observed. Source modelling was performed by inverting the estimated displacement components. The best fitting model is given by a ~N330° E-oriented and ~70° dipping fault with a prevailing
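
    Integrating InSAR/MAI/POT/GPS observations amounts to solving, at each ground point, a small inverse problem: every measurement is the projection of the unknown 3-D displacement onto a known unit vector. The paper uses a Bayesian energy-minimization approach; the sketch below shows only the simpler unweighted least-squares version, with made-up viewing geometry:

```python
import numpy as np

def invert_3d_displacement(unit_vectors, observations):
    """Least-squares 3-D displacement from >= 3 projected measurements:
    each observation is d_i = u_i . s for a known unit vector u_i
    (e.g. an InSAR line of sight, a MAI along-track direction, or a
    GPS component)."""
    A = np.asarray(unit_vectors)
    d = np.asarray(observations)
    s, *_ = np.linalg.lstsq(A, d, rcond=None)
    return s

# Synthetic check: true displacement s = [0.10, -0.15, -0.08] m (E, N, U)
true_s = np.array([0.10, -0.15, -0.08])
U = np.array([[0.38, -0.09, 0.92],    # descending LOS (made-up)
              [-0.60, 0.78, 0.18],    # MAI azimuth direction (made-up)
              [0.0, 1.0, 0.0],        # GPS north component
              [0.0, 0.0, 1.0]])       # GPS up component
s_hat = invert_3d_displacement(U, U @ true_s)
```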

  20. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or a camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
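
    Once keypoints are matched, the roll and scale differences between the two frames can be estimated by fitting a 2-D similarity transform, which is linear in the parameters (a, b) = s*(cos theta, sin theta). A sketch with synthetic matches (this deliberately ignores pitch/yaw, which need the full epipolar model, and is not the paper's algorithm):

```python
import numpy as np

def estimate_roll_scale(left_pts, right_pts):
    """Fit right = s*R(theta)*left + t to matched keypoints by linear
    least squares over [a, b, tx, ty], where a = s*cos(theta) and
    b = s*sin(theta); returns (roll in radians, scale)."""
    L = np.asarray(left_pts); R = np.asarray(right_pts)
    n = len(L)
    A = np.zeros((2 * n, 4)); rhs = R.reshape(-1)
    A[0::2] = np.column_stack([L[:, 0], -L[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([L[:, 1],  L[:, 0], np.zeros(n), np.ones(n)])
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.arctan2(b, a), np.hypot(a, b)

# Synthetic check: rotate by 2 deg, scale by 1.01, shift by (3, -1) px
theta, s = np.radians(2.0), 1.01
Rm = s * np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
left = np.array([[0., 0.], [100., 0.], [0., 80.], [60., 50.]])
right = left @ Rm.T + np.array([3.0, -1.0])
roll, scale = estimate_roll_scale(left, right)
```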

  1. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  2. Gaze shifts and fixations dominate gaze behavior of walking cats.

    PubMed

    Rivers, T J; Sirota, M G; Guttentag, A I; Ogorodnikov, D A; Shah, N A; Beloozerova, I N

    2014-09-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required for successful walking, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5-m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body's speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats' gaze behavior during all locomotor tasks, jointly occupying 62-84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior "gaze stepping". Each gaze shift took gaze to a site approximately 75-80cm in front of the cat, which the cat reached in 0.7-1.2s and 1.1-1.6 strides. Constant gaze occupied only 5-21% of the time cats spent looking at the walking surface.

  3. Gaze shifts and fixations dominate gaze behavior of walking cats.

    PubMed

    Rivers, T J; Sirota, M G; Guttentag, A I; Ogorodnikov, D A; Shah, N A; Beloozerova, I N

    2014-09-01

Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required for successful walking, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5-m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on the speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body's speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats' gaze behavior during all locomotor tasks, jointly occupying 62-84% of the time when the gaze was directed at the surface. As the visual complexity of the surface and the demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to themselves, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of a forward gaze shift followed by fixation. We call this behavior "gaze stepping". Each gaze shift took gaze to a site approximately 75-80 cm in front of the cat, which the cat reached in 0.7-1.2 s and 1.1-1.6 strides. Constant gaze occupied only 5-21% of the time cats spent looking at the walking surface. PMID:24973656
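The four-way, velocity-based classification of gaze behavior described above can be sketched in a few lines. The speed thresholds and the tolerance around the body's speed below are illustrative assumptions, not values reported in the study:

```python
import numpy as np

def classify_gaze(gaze_pos, body_speed, dt,
                  fix_thresh=5.0, shift_thresh=100.0, body_tol=0.2):
    """Classify each gaze sample into one of four behaviors based on the
    speed of gaze movement along the walking surface (cm/s).
    Thresholds are illustrative, not the study's calibrated values."""
    speed = np.abs(np.diff(gaze_pos)) / dt          # gaze speed along the surface
    labels = []
    for s in speed:
        if s >= shift_thresh:
            labels.append("shift")                  # fast forward jump
        elif s <= fix_thresh:
            labels.append("fixation")               # essentially stationary
        elif abs(s - body_speed) <= body_tol * body_speed:
            labels.append("constant")               # moving at the body's speed
        else:
            labels.append("slow")                   # the remainder
    return labels
```

Feeding in a short synthetic trace with one sample of each speed regime returns one label per inter-sample interval.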

  4. Application of the H/V and SPAC Method to Estimate a 3D Shear Wave Velocity Model, in the City of Coatzacoalcos, Veracruz.

    NASA Astrophysics Data System (ADS)

    Morales, L. E. A. P.; Aguirre, J.; Vazquez Rosas, R.; Suarez, G.; Contreras Ruiz-Esparza, M. G.; Farraz, I.

    2014-12-01

Methods that use seismic noise or microtremors have become very useful tools worldwide due to their low cost, the relative simplicity of data collection, the fact that they are non-invasive (there is no need to alter or even perforate the study site), and their relatively simple analysis procedures. Nevertheless, the geological structures estimated by these methods are assumed to consist of parallel, isotropic, and homogeneous layers. Consequently, the precision of the estimated structure is lower than that of conventional seismic methods. In light of these facts, this study aimed to find a new way to interpret the results obtained from seismic noise methods. In this study, seven triangular SPAC (Aki, 1957) arrays were performed in the city of Coatzacoalcos, Veracruz, varying in size from 10 to 100 meters. From the autocorrelation between the stations of each array, a Rayleigh wave phase velocity dispersion curve was calculated. This dispersion curve was used to obtain a parallel-layered S-wave velocity (VS) structure for the study site. Subsequently, the horizontal-to-vertical spectral ratio of microtremors, H/V (Nogoshi and Igarashi, 1971; Nakamura, 1989, 2000), was calculated for each vertex of the SPAC triangular arrays, and from the H/V spectrum the fundamental frequency was estimated for each vertex. By using the H/V spectral ratio curves, interpreted as a proxy for the Rayleigh wave ellipticity curve, a series of VS structures were inverted for each vertex of the SPAC array. Lastly, each VS structure was employed to calculate a 3D velocity model, in which the exploration depth was approximately 100 meters, with velocities ranging from 206 m/s to 920 m/s. The 3D model revealed a thinning of the low velocity layers. This proved to be in good agreement with the variation of the fundamental frequencies observed at each vertex. With the previous kind of analysis a preliminary model can be obtained as a first
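The core of the SPAC step is inverting the azimuthally averaged coherency for phase velocity via Aki's (1957) relation rho(f, r) = J0(2*pi*f*r / c(f)). A minimal sketch, using a grid search over an assumed velocity range (the actual processing fits many frequencies and station pairs):

```python
import numpy as np
from scipy.special import j0

def phase_velocity(rho_obs, freq, radius_m, c_grid=None):
    """Grid-search the Rayleigh-wave phase velocity c(f) that best explains
    an azimuthally averaged SPAC coefficient rho_obs at one frequency and
    one interstation distance, via rho = J0(2*pi*f*r / c) (Aki, 1957).
    The search range (100-1500 m/s) is an assumption for illustration."""
    if c_grid is None:
        c_grid = np.linspace(100.0, 1500.0, 2801)   # m/s, 0.5 m/s steps
    misfit = (rho_obs - j0(2.0 * np.pi * freq * radius_m / c_grid)) ** 2
    return c_grid[np.argmin(misfit)]
```

A grid search is used instead of a local optimizer because J0 is oscillatory, so the misfit has multiple local minima.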

  5. Bayesian Estimation of 3D Non-planar Fault Geometry and Slip: An application to the 2011 Megathrust (Mw 9.1) Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón

    2016-04-01

    Earthquake faults are generally considered planar (or of other simple geometry) in earthquake source parameter estimations. However, simplistic fault geometries likely result in biases in estimated slip distributions and increased fault slip uncertainties. In case of large subduction zone earthquakes, these biases and uncertainties propagate into tsunami waveform modeling and other calculations related to postseismic studies, Coulomb failure stresses, etc. In this research, we parameterize 3D non-planar fault geometry for the 2011 Tohoku-Oki earthquake (Mw 9.1) and estimate these geometrical parameters along with fault slip parameters from onland and offshore GPS using Bayesian inference. This non-planar fault is formed using several 3rd degree polynomials in along-strike (X-Y plane) and along-dip (X-Z plane) directions that are tied together using a triangular mesh. The coefficients of these polynomials constitute the fault geometrical parameters. We use the trench and locations of past seismicity as a priori information to constrain these fault geometrical parameters and the Laplacian to characterize the fault slip smoothness. Hyper-parameters associated to these a priori constraints are estimated empirically and the posterior probability distribution of the model (fault geometry and slip) parameters is sampled using an adaptive Metropolis Hastings algorithm. The across-strike uncertainties in the fault geometry (effectively the local fault location) around high-slip patches increases from 6 km at 10km depth to about 35 km at 50km depth, whereas around low-slip patches the uncertainties are larger (from 7 km to 70 km). Uncertainties in reverse slip are found to be higher at high slip patches than at low slip patches. In addition, there appears to be high correlation between adjacent patches of high slip. Our results demonstrate that we can constrain complex non-planar fault geometry together with fault slip from GPS data using past seismicity as a priori
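The posterior over fault geometry and slip parameters is sampled with an adaptive Metropolis-Hastings algorithm. The real target (polynomial coefficients plus slip, with GPS likelihood and seismicity priors) is far richer, but the sampler mechanics can be illustrated with a minimal, non-adaptive random-walk version on a toy 1D posterior:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler. The study uses an adaptive
    Metropolis-Hastings variant; this fixed-step version is a simplified
    illustration of the accept/reject mechanics."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy target: a 1D standard normal posterior
chain = metropolis(lambda x: -0.5 * float(x @ x), x0=[0.0], n_steps=20000)
```

The chain's sample mean and standard deviation approach 0 and 1 as the number of steps grows.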

  6. Gaze as a biometric

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2014-01-01

Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.

  7. Gaze as a biometric

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dotpattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
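The identification pipeline (gaze trace to velocities to per-user profile to maximum-likelihood user) can be sketched as below. The study fits Hidden Markov Models to the velocity sequences; here a single Gaussian over log-velocities stands in as a much-simplified profile, and all function names are illustrative:

```python
import numpy as np

def gaze_velocities(x, y, dt):
    """Convert a gaze trace (screen coordinates) to scalar gaze velocities."""
    return np.hypot(np.diff(x), np.diff(y)) / dt

def fit_profile(velocities):
    """Per-user profile: mean/std of log-velocity. A single Gaussian is a
    much-simplified stand-in for the paper's HMM-based profiles."""
    v = np.log(np.asarray(velocities) + 1e-9)
    return v.mean(), v.std() + 1e-9

def log_likelihood(velocities, profile):
    mu, sigma = profile
    v = np.log(np.asarray(velocities) + 1e-9)
    return float(np.sum(-0.5 * ((v - mu) / sigma) ** 2 - np.log(sigma)))

def identify(velocities, profiles):
    """Return the enrolled user whose profile assigns the test trace the
    highest likelihood (the 1-of-12 decision rule in the study)."""
    return max(profiles, key=lambda u: log_likelihood(velocities, profiles[u]))
```

With two enrolled users whose typical gaze speeds differ, a fresh trace is attributed to the user whose profile it matches.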

  8. Estimation of preferential recharge and saltwater intrusion to a coastal groundwater system in central Vietnam by means of 3D stratigraphic modeling

    NASA Astrophysics Data System (ADS)

    Thanh Tam, Vu; Batelaan, Okke; Thanh Le, Tran

    2013-04-01

Saltwater intrusion is worldwide regarded as a major threat to groundwater resources. Mostly, saltwater intrusion problems are related to sea level rise or induced intrusion due to excessive groundwater extraction in coastal aquifers. However, the hydrogeological heterogeneity of the subsurface might play an important role in (non-)intrusion as well. We study local (hydro)geological conditions for preferential recharge as well as saltwater intrusion to a coastal groundwater system in Vietnam where geological formations exhibit highly heterogeneous lithologies. A cluster analysis technique combined with a chronographic marker is used to distinguish and map well-log intervals of similar lithological properties in different geological formations. The cluster analysis is carried out on the lithological composition, distribution depth, and thickness of each lithologically distinctive drilling interval of the well-logs of 43 groundwater investigation boreholes within the study area. The chronographic marker is a layer of clay originating from weathered basalt rocks, whose color and lithological properties can be distinguished from the other formations. Detailed to coarse 3D stratigraphic models, based on the above analysis, are constructed and used as a tool to estimate preferential recharge paths and saltwater intrusion to the groundwater system under study. Chemical analysis of groundwater samples is also used to support the estimation. The results of this research contribute to the interpretation of why the aquifer system of the study area is almost uninfluenced by saltwater intrusion, which is relatively common in coastal aquifers of Vietnam.
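The interval-clustering step can be sketched with a plain k-means over per-interval features. The feature set, the normalization, the naive first-k initialization, and the number of clusters are all assumptions for illustration; the paper does not specify its clustering configuration at this level of detail:

```python
import numpy as np

def cluster_intervals(features, k=3, n_iter=100):
    """Plain k-means over well-log interval features (e.g. lithological
    composition fraction, top depth, thickness), mirroring the cluster
    analysis used to correlate intervals across boreholes. Feature choice,
    k, and the naive 'first k rows' initialization are assumptions."""
    X = np.asarray(features, dtype=float)
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)    # normalize mixed units
    centers = X[:k].copy()                       # naive init: first k intervals
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)                     # assign to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)  # recompute centers
    return labels
```

On three well-separated groups of synthetic interval features, each group receives its own label.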

  9. Local heat transfer estimation in microchannels during convective boiling under microgravity conditions: 3D inverse heat conduction problem using BEM techniques

    NASA Astrophysics Data System (ADS)

    Luciani, S.; LeNiliot, C.

    2008-11-01

Two-phase and boiling flow instabilities are complex, due to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. The analysis is performed using an inverse method which allows us to estimate the local heat transfers while boiling occurs inside a microchannel. In our configuration, direct measurement would impair the accuracy of the sought heat transfer coefficient because thermocouples implanted on the minichannel surface would disturb the established flow. In this communication, we solve a 3D IHCP which consists in estimating, from experimental temperature measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (µg, 1g, 1.8g). The considered IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).
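The "IHCP as optimization" formulation reduces, after discretization, to a regularized least-squares problem. A minimal sketch with Tikhonov regularization, assuming a precomputed sensitivity matrix A (in the paper this operator comes from the boundary element discretization; here it is simply an input):

```python
import numpy as np

def solve_ihcp(A, t_meas, lam=1e-3):
    """Tikhonov-regularized least-squares step of a discretized inverse heat
    conduction problem: recover boundary heat fluxes q from interior
    temperature measurements t_meas, given a sensitivity matrix A mapping
    fluxes to temperatures. Solves min_q ||A q - t||^2 + lam ||q||^2
    via the normal equations; lam controls smoothing of the ill-posed problem."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ t_meas)
```

With noise-free synthetic data and a small regularization weight, the known flux vector is recovered.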

  10. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  11. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  12. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  13. 3D Transient Hydraulic Tomography (3DTHT): An Efficient Field and Modeling Method for High-Resolution Estimation of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.

    2012-12-01

The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software developed takes as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m3

  14. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. Thus the constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate a competent depth perception quality for the proposed system.

  15. Different scenarios for inverse estimation of soil hydraulic parameters from double-ring infiltrometer data using HYDRUS-2D/3D

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Parisa; Ghorbani-Dashtaki, Shoja; Mosaddeghi, Mohammad Reza; Shirani, Hossein; Nodoushan, Ali Reza Mohammadi

    2016-04-01

In this study, HYDRUS-2D/3D was used to simulate ponded infiltration through double-ring infiltrometers into a hypothetical loamy soil profile. Twelve scenarios of inverse modelling (divided into three groups) were considered for estimation of the Mualem-van Genuchten hydraulic parameters. In the first group, simulation was carried out solely using cumulative infiltration data. In the second group, cumulative infiltration data plus the water content at h = -330 cm (field capacity) were used as inputs. In the third group, cumulative infiltration data plus the water contents at h = -330 cm (field capacity) and h = -15 000 cm (permanent wilting point) were used simultaneously as predictors. The results showed that numerical inverse modelling of the double-ring infiltrometer data provided a reliable alternative method for determining soil hydraulic parameters. The results also indicated that by reducing the number of hydraulic parameters involved in the optimization process, the simulation error is reduced. The best scenario for infiltration simulation was the one in which the parameters α, n, and Ks were optimized using the infiltration data and field capacity as inputs. Including field capacity as additional data was important for better optimization/definition of the soil hydraulic functions, but using field capacity and permanent wilting point simultaneously as additional data increased the simulation error.
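The idea of optimizing a reduced set of Mualem-van Genuchten parameters against observations can be illustrated outside HYDRUS by fitting the retention function directly. Fixing θr and θs (mimicking the reduced-parameter scenarios) and fitting only α and n to synthetic data; all numerical values here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, alpha, n, theta_r=0.08, theta_s=0.43):
    """Mualem-van Genuchten water retention curve theta(h) with m = 1 - 1/n.
    theta_r and theta_s are fixed (illustrative loam-like values) to mimic
    reducing the number of optimized parameters, as in the study."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Synthetic "observations" generated from known parameters (cm of suction head)
h = np.array([-10.0, -33.0, -100.0, -330.0, -1000.0, -15000.0])
theta_obs = van_genuchten(h, alpha=0.036, n=1.56)

# Inverse step: recover alpha and n from the observations
(alpha_fit, n_fit), _ = curve_fit(van_genuchten, h, theta_obs,
                                  p0=[0.01, 1.3],
                                  bounds=([1e-4, 1.01], [1.0, 5.0]))
```

With noise-free synthetic data the optimizer recovers the generating parameters; the study performs the analogous inversion against simulated infiltration curves rather than retention points.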

  16. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  17. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods.

    PubMed

    He, Bin; Frey, Eric C

    2010-06-21

Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed (111)In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
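The misregistration experiment (sub-voxel shifts of the reconstructed image against a fixed VOI) can be sketched as follows; the phantom geometry and the use of linear interpolation are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def activity_error(image, voi_mask, dx):
    """Relative error (%) in the VOI activity estimate when the image is
    misregistered by dx voxels along one axis. Order-1 (linear)
    interpolation mimics the sub-voxel shifts of -1..1 voxels in the study."""
    shifted = nd_shift(image, shift=(dx, 0.0, 0.0), order=1)
    a0 = image[voi_mask].sum()                       # true organ activity
    return 100.0 * (shifted[voi_mask].sum() - a0) / a0
```

On a toy uniform "organ" cube, a zero shift gives zero error, while a half-voxel shift loses part of an edge slice of activity from the fixed VOI.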

  18. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of combining commercially available anthropomorphic phantoms with irregular molds generated using 3D-printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
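The combination of a background estimate from k-means clustering with an adaptive threshold can be sketched as below. The 2-cluster choice, the in-line k-means, and the 42% threshold fraction are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def segment_mtv(pet, frac=0.42, n_iter=50):
    """Toy threshold-based MTV segmentation: a 2-cluster k-means on voxel
    intensities estimates the background level, then the lesion mask is
    every voxel above background + frac * (max - background). The threshold
    fraction is an assumption, not the NEMA-calibrated value of the paper."""
    v = pet.ravel().astype(float)
    lo, hi = v.min(), v.max()                 # init the two cluster centers
    for _ in range(n_iter):
        to_hi = np.abs(v - hi) < np.abs(v - lo)
        if to_hi.all() or not to_hi.any():    # degenerate split: stop early
            break
        lo, hi = v[~to_hi].mean(), v[to_hi].mean()
    background = lo
    thr = background + frac * (v.max() - background)
    return pet > thr
```

On a toy volume with background 1.0 and a hot 3x3x3 lesion at 10.0, the returned boolean mask covers exactly the 27 lesion voxels.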

  19. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to that of the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and non-seeing interfere? What else has to be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  20. Evaluation of patient DVH-based QA metrics for prostate VMAT: correlation between accuracy of estimated 3D patient dose and magnitude of MLC misalignment.

    PubMed

    Kadoya, Noriyuki; Saito, Masahide; Ogasawara, Makoto; Fujita, Yukio; Ito, Kengo; Sato, Kiyokazu; Kishi, Kazuma; Dobashi, Suguru; Takeda, Ken; Jingu, Keiichi

    2015-05-08

The purpose of this study was to evaluate the accuracy of commercially available software, using patient DVH-based QA metrics, by investigating the correlation between estimated 3D patient dose and the magnitude of MLC misalignments. We tested the 3DVH software with an ArcCHECK. Two different calculation modes of ArcCHECK Planned Dose Perturbation (ACPDP) were used: "Normal Sensitivity" and "High Sensitivity". Ten prostate cancer patients treated with hypofractionated VMAT (67.6 Gy/26 Fr) in our hospital were studied. For the baseline plan, we induced MLC errors (-0.75, -0.5, -0.25, 0.25, 0.5, and 0.75 mm for each single bank). We calculated the dose differences between the ACPDP dose with error and the TPS dose with error using gamma passing rates and DVH-based QA metrics. The correlations between dose estimation error and MLC position error varied with each structure and metric. A comparison using the 1%/1 mm gamma index showed that the larger the induced MLC error, the worse the gamma passing rates. Slopes of a linear fit of dose estimation error versus MLC position error for the mean dose and D95 to the PTV were 1.76 and 1.40% mm-1, respectively, for "Normal Sensitivity", and -0.53 and -0.88% mm-1, respectively, for "High Sensitivity", showing better accuracy for "High Sensitivity" than "Normal Sensitivity". On the other hand, the slopes for the mean dose to the rectum and bladder, V35 to the rectum and bladder, and V55 to the rectum and bladder were -1.00, -0.55, -2.56, -1.25, -3.53, and -1.85% mm-1, respectively, for "Normal Sensitivity", and -2.89, -2.39, -4.54, -3.12, -6.24, and -4.11% mm-1, respectively, for "High Sensitivity", showing significantly better accuracy for "Normal Sensitivity" than "High Sensitivity". Our results showed that 3DVH had some residual error for both sensitivities. Furthermore, we found that "Normal Sensitivity" might have better accuracy for the DVH metric for the PTV and that "High Sensitivity" might have better accuracy for DVH metrics for
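The reported slopes come from a linear fit of dose estimation error against the induced MLC position error. A minimal sketch (the dose-error values here are synthetic, not the study's measurements):

```python
import numpy as np

def error_slope(mlc_errors_mm, dose_errors_pct):
    """Slope (% per mm) of a linear fit of DVH-metric dose estimation error
    against induced MLC position error, the quantity used to compare the
    two ACPDP sensitivity modes."""
    slope, _intercept = np.polyfit(mlc_errors_mm, dose_errors_pct, 1)
    return slope

# Induced MLC errors from the study design (mm, per single bank)
mlc = np.array([-0.75, -0.5, -0.25, 0.25, 0.5, 0.75])
```

For synthetic errors lying exactly on a 1.76% mm-1 line, the fit recovers that slope.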

  1. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis

    PubMed Central

    Menéndez-González, Manuel; Salas-Pacheco, José M.; Arias-Carrión, Oscar

    2014-01-01

Despite a strong correlation to outcome, the measurement of gray matter (GM) atrophy is not being used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meanings of raw results from volumetric studies on regions of interest are not always easy to understand. Thus, there is a huge need for a methodology suitable for daily clinical practice to estimate GM atrophy in a convenient and comprehensive way. Given that the thalamus is the brain structure found to be most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamic atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the “yearly rate of Relative Thalamic Atrophy” (yrRTA). In this report we aim to describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches, and to explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study to prove the concept of yrRTA. However, we do not seek to describe here the validity of this parameter, since that research is currently being conducted and results will be addressed in future publications. PMID:25206331

  4. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  5. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
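
    The four gaze behaviors above are defined by the speed of gaze travel along the walking surface relative to the body's speed. A minimal classifier might look like the sketch below; the speed thresholds are our own illustrative assumptions, not values from the paper.

    ```python
    def classify_gaze(gaze_speed, body_speed, eps=0.05):
        """Label a gaze sample by its speed along the walking surface (m/s).

        Categories follow the paper's definitions; the tolerance `eps` used to
        decide "no movement" / "moves with the body" is our own assumption.
        """
        if gaze_speed < eps:
            return "fixation"          # gaze not moving along the surface
        if abs(gaze_speed - body_speed) < eps:
            return "constant gaze"     # gaze travels at the body's speed
        if gaze_speed > 2.0 * body_speed:
            return "gaze shift"        # fast forward jump of the gaze point
        return "slow gaze"             # everything in between

    samples = [(0.01, 0.8), (0.82, 0.8), (3.5, 0.8), (0.4, 0.8)]
    print([classify_gaze(g, b) for g, b in samples])
    # ['fixation', 'constant gaze', 'gaze shift', 'slow gaze']
    ```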

  6. Mobile gaze tracking system for outdoor walking behavioral studies.

    PubMed

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments.
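
    The head-relative-to-trunk measurement described above amounts to subtracting the two IMU orientations, so that interference common to both sensors cancels. A simplified yaw-only sketch follows; the function name and the drift-free, single-axis assumption are ours.

    ```python
    def gaze_relative_to_heading(eye_in_head_yaw, head_imu_yaw, trunk_imu_yaw):
        """Gaze yaw relative to the walking (heading) direction, in degrees.

        head_imu_yaw - trunk_imu_yaw gives the head-on-trunk rotation; common
        environmental interference affecting both IMUs cancels in the difference.
        """
        head_on_trunk = head_imu_yaw - trunk_imu_yaw
        gaze = eye_in_head_yaw + head_on_trunk
        # Wrap to (-180, 180] for readability.
        return (gaze + 180.0) % 360.0 - 180.0

    # Eyes 10 deg right in the head, head turned 25 deg left of the trunk:
    print(gaze_relative_to_heading(10.0, 340.0, 5.0))  # -15.0
    ```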

  9. Estimation of regional myocardial mass at risk based on distal arterial lumen volume and length using 3D micro-CT images

    PubMed Central

    Le, Huy; Wong, Jerry T.; Molloi, Sabee

    2008-01-01

    The determination of regional myocardial mass at risk distal to a coronary occlusion provides valuable prognostic information for a patient with coronary artery disease. The coronary arterial system follows a design rule which allows for the use of arterial branch length and lumen volume to estimate regional myocardial mass at risk. Image processing techniques, such as segmentation, skeletonization, and arterial network tracking, are presented for extracting anatomical details of the coronary arterial system using micro-computed tomography (CT). Moreover, a method of assigning tissue voxels to their corresponding arterial branches is presented to determine the dependent myocardial region. The proposed micro-CT technique was utilized to investigate the relationship between the sum of the distal coronary arterial branch lengths and volumes to the dependent regional myocardial mass using a polymer cast of a porcine heart. The correlations of the logarithm of the total distal arterial length (L) to the logarithm of the regional myocardial mass (M) for the left anterior descending (LAD), left circumflex (LCX) and right coronary (RCA) arteries were log(L) = 0.73 log(M) + 0.09 (R = 0.78), log(L) = 0.82 log(M) + 0.05 (R = 0.77), and log(L) = 0.85 log(M) + 0.05 (R = 0.87), respectively. The correlations of the logarithm of the total distal arterial lumen volume (V) to the logarithm of the regional myocardial mass for the LAD, LCX and RCA were log(V) = 0.93 log(M) − 1.65 (R = 0.81), log(V) = 1.02 log(M) − 1.79 (R = 0.78), and log(V) = 1.17 log(M) − 2.10 (R = 0.82), respectively. These morphological relations did not change appreciably for diameter truncations of 600 to 1400 µm. The results indicate that the image processing procedures successfully extracted information from a large 3D dataset of the coronary arterial tree to provide prognostic indications in the form of arterial tree parameters and anatomical area at risk. PMID:18595659
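
    The fitted power-law relations above can be inverted to estimate the mass at risk from a measured total distal arterial length. A sketch using the LAD coefficients reported in the abstract; the function and variable names are ours, and units follow whatever units the fit was performed in.

    ```python
    import math

    def mass_from_length(total_length, a=0.73, b=0.09):
        """Invert log10(L) = a*log10(M) + b to estimate regional mass M.

        Coefficients default to the LAD fit quoted in the abstract.
        """
        return 10.0 ** ((math.log10(total_length) - b) / a)

    # Round-trip check: a mass of 50 predicts a length, which recovers the mass.
    L = 10.0 ** (0.73 * math.log10(50.0) + 0.09)
    print(round(mass_from_length(L), 6))  # 50.0
    ```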

  10. Saliency-based gaze prediction based on head direction.

    PubMed

    Nakashima, Ryoichi; Fang, Yu; Hatori, Yasuhiro; Hiratani, Akinori; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2015-12-01

    Despite decades of attempts to create a model for predicting gaze locations by using saliency maps, a highly accurate gaze prediction model for general conditions has yet to be devised. In this study, we propose a gaze prediction method based on head direction that can improve the accuracy of any model. We used a probability distribution of eye position based on head direction (static eye-head coordination) and added this information to a model of saliency-based visual attention. Using empirical data on eye and head directions while observers were viewing natural scenes, we estimated a probability distribution of eye position. We then combined the relationship between eye position and head direction with visual saliency to predict gaze locations. The model showed that information on head direction improved the prediction accuracy. Further, there was no difference in the gaze prediction accuracy between the two models using information on head direction with and without eye-head coordination. Therefore, information on head direction is useful for predicting gaze location when it is available. Furthermore, this gaze prediction model can be applied relatively easily to many daily situations such as during walking.
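
    The combination described above can be sketched as weighting a bottom-up saliency map by an eye-position prior conditioned on head direction. The toy 1-D version below uses a Gaussian prior as an illustrative stand-in; the paper uses an empirically estimated probability distribution, so the prior's form and width here are our assumptions.

    ```python
    import math

    def combine(saliency, head_center, sigma=2.0):
        """Weight each location's saliency by a head-direction prior.

        saliency: list of non-negative values over discrete gaze locations.
        head_center: index the head is pointing at; the prior is a Gaussian
        around it (a stand-in for the empirical eye-head distribution).
        Returns a normalized gaze-probability distribution.
        """
        scored = [s * math.exp(-((i - head_center) ** 2) / (2 * sigma ** 2))
                  for i, s in enumerate(saliency)]
        total = sum(scored)
        return [v / total for v in scored]

    saliency = [0.1, 0.9, 0.2, 0.8, 0.1]   # two salient peaks
    probs = combine(saliency, head_center=3)
    print(max(range(len(probs)), key=probs.__getitem__))  # 3: head biases the choice
    ```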

  11. Why does gaze enhance mimicry? Placing gaze-mimicry effects in relation to other gaze phenomena.

    PubMed

    Wang, Yin; Hamilton, Antonia F de C

    2014-01-01

    Eye gaze is a powerful signal, which exerts a mixture of arousal, attentional, and social effects on the observer. We recently found a behavioural interaction between eye contact and mimicry where direct gaze rapidly enhanced mimicry of hand movements. Here, we report two detailed investigations of this effect. In Experiment 1, we compared the effects of "direct gaze", "averted gaze", and "gaze to the acting hand" on mimicry and manipulated the sequence of gaze events within a trial. Only direct gaze immediately before the hand action enhanced mimicry. In Experiment 2, we examined the enhancement of mimicry when direct gaze is followed by a "blink", by "shut eyes", or by "occluded eyes". Enhanced mimicry relative to baseline was seen only in the blink condition. Together, these results suggest that ongoing social engagement is necessary for enhanced mimicry. These findings allow us to place the gaze-enhancement effect in the context of other reported gaze phenomena. We suggest that this effect is similar to previously reported audience effects, but is less similar to ostensive cueing effects. This has important implications for our theories of the relationships between social cues and imitation.

  12. Estimating a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head from a commercial OCT device

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.

    2016-03-01

    The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information captured with a Topcon OCT 2000 device for detection of the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness and measurement of this distance is interesting for follow-up of glaucoma.
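
    Finding the shortest distance between the two segmented boundaries reduces to a nearest-pair search over their 3D points. A brute-force sketch follows; the point sets, coordinates, and names are hypothetical illustrations, not data from the study.

    ```python
    import math

    def shortest_distance(boundary_a, boundary_b):
        """Minimum Euclidean distance between two 3-D point sets (brute force).

        boundary_a / boundary_b: iterables of (x, y, z) points, e.g. samples of
        the inner limit of the retina and the central limit of the pigment
        epithelium around the optic nerve head.
        """
        return min(math.dist(p, q) for p in boundary_a for q in boundary_b)

    inner_retina = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
    pigment_edge = [(0, 3, 4), (1, 1, 0)]
    print(shortest_distance(inner_retina, pigment_edge))  # 1.0
    ```

    For the dense voxel grids of a real OCT volume, a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the quadratic scan.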

  13. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  14. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  15. Spatial distribution of Hydrocarbon Reservoirs in the West Korea Bay Basin in the northern part of the Yellow Sea, estimated by 3D gravity forward modeling

    NASA Astrophysics Data System (ADS)

    Choi, Sungchan; Ryu, In-Chang; Götze, H.-J.; Chae, Y.

    2016-10-01

    Although hydrocarbon deposits have been discovered in the West Korea Bay Basin (WKBB), located in the North Korean offshore area, geophysical investigations associated with these hydrocarbon reservoirs are not permitted because of the current geopolitical situation. Interpretation of satellite-derived potential field data can alternatively be used to image the three-dimensional (3D) density distribution in the sedimentary basin associated with hydrocarbon deposits. We interpreted the TRIDENT satellite-derived gravity field data to provide detailed insights into the spatial distribution of sedimentary density structures in the WKBB. We used 3D forward density modeling for the interpretation, incorporating constraints from existing geological and geophysical information. The gravity data interpretation and the 3D forward modeling showed that there are two modeled areas in the central subbasin that are characterized by very low density structures, with a maximum density of about 2000 kg/m3, indicating some type of hydrocarbon reservoir. One of the anticipated hydrocarbon reservoirs is located in the southern part of the central subbasin, with a volume of about 250 km3 at a depth of about 3000 m in the Cretaceous/Jurassic layer. The other hydrocarbon reservoir should exist in the northern part of the central subbasin, with an average volume of about 300 km3 at a depth of about 2500 m.
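
    3D forward density modeling of the kind described above sums the gravitational attraction of discretized density cells at each observation point. A minimal point-mass sketch under stated assumptions: real modeling uses prism formulas rather than point masses, and the geometry and density contrast below are illustrative, loosely based on the numbers in the abstract.

    ```python
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gz_point_masses(cells, station):
        """Vertical gravity anomaly (m/s^2) at `station` from density cells.

        cells: list of ((x, y, z), delta_rho, volume) with z positive downward,
        delta_rho in kg/m^3, volume in m^3. Each cell is approximated as a
        point mass delta_rho * volume.
        """
        sx, sy, sz = station
        gz = 0.0
        for (x, y, z), drho, vol in cells:
            dx, dy, dz = x - sx, y - sy, z - sz
            r = (dx * dx + dy * dy + dz * dz) ** 0.5
            gz += G * drho * vol * dz / r ** 3   # vertical component only
        return gz

    # A single low-density body (density contrast -670 kg/m^3) of 250 km^3
    # at 3000 m depth, observed from directly above at the surface:
    cells = [((0.0, 0.0, 3000.0), -670.0, 250e9)]
    print(gz_point_masses(cells, (0.0, 0.0, 0.0)))  # negative: a gravity low
    ```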

  16. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  17. Complementary effects of gaze direction and early saliency in guiding fixations during free viewing.

    PubMed

    Borji, Ali; Parks, Daniel; Itti, Laurent

    2014-01-01

    Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. Median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to face region. Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both face and eyes of actors in the scene. Thus gaze direction should be considered

  18. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  19. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  20. "Beloved" as an Oppositional Gaze

    ERIC Educational Resources Information Center

    Mao, Weiqiang; Zhang, Mingquan

    2009-01-01

    This paper studies the strategy Morrison adopts in "Beloved" to give voice to black Americans long silenced by the dominant white American culture. Instead of being objects passively accepting their aphasia, black Americans become speaking subjects that are able to cast an oppositional gaze to avert the objectifying gaze of white…

  1. Gaze behaviour in hereditary prosopagnosia.

    PubMed

    Schwarzer, Gudrun; Huber, Susanne; Grüter, Martina; Grüter, Thomas; Gross, Cornelia; Hipfel, Melanie; Kennerknecht, Ingo

    2007-09-01

    Prosopagnosia is the inability to recognize someone by the face alone in the absence of sensory or intellectual impairment. In contrast to the acquired form of prosopagnosia, we studied the congenital form. Since we recently showed that this form is inherited as a simple monogenic trait, we call it the hereditary form. To determine whether not only face recognition and neuronal processing but also the perceptual acquisition of facial information is specific to prosopagnosia, we studied the gaze behaviour of four hereditary prosopagnosics in comparison to matched control subjects. This rarely studied form of prosopagnosia ensures that deficits are limited to face recognition. Whereas the control participants focused their gaze on the central facial features, the hereditary prosopagnosics showed a significantly different gaze behaviour. They had a more dispersed gaze and also fixated external facial features. Thus, the face recognition impairment of the hereditary prosopagnosics is reflected in their gaze behaviour.

  2. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information on three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides a dynamic but only two-dimensional projected image. On the other hand, three-dimensional CT provides a three-dimensional but static image. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane, dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. The basic idea is 2D/3D registration using digitally reconstructed radiographs (DRRs), i.e., virtual projections of the CT data. The idea itself is not new, but the application of bi-plane fluoroscopy to the natural bones of the knee is reported here for the first time. The technique was applied to two volunteers and successful results were obtained. Accuracy evaluations through computer simulation and a phantom experiment with the knee joint of a pig were also conducted.

  3. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,C), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,C) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  4. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  5. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  6. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner, eye-center, and iso-center cues to improve pupil detection. Optical flow information is used for eye tracking. We develop a robust eye tracking system that integrates eye detection and optical-flow-based image tracking. In addition, we further incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on some public video sequences as well as videos acquired directly from a mobile phone.

  7. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)Tc-hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation, and by 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results.
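
    The comparison statistics quoted above are simple relative measures. One common convention is sketched below; the abstract does not state the exact formula used, so this is an assumption for illustration.

    ```python
    def percent_difference(gate_dose, mirdose_dose):
        """Relative difference of a GATE dose vs. MIRDOSE, in percent of MIRDOSE."""
        return 100.0 * (gate_dose - mirdose_dose) / mirdose_dose

    # E.g. a GATE organ dose 7% above MIRDOSE (matching the average ratio 1.07):
    print(round(percent_difference(1.07, 1.00), 6))  # 7.0
    ```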

  9. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mTc-hynic-Tyr3-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed within 4.3% on average for self-irradiation, and within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562

  10. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and understanding it is therefore important for the creation of 3D stereoscopic content. Most studies of visual attention have focused on still images or 2D video. Only a few studies have investigated eye movement patterns in 3D stereoscopic moving sequences, and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment that we conducted using an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that the extent of areas of interest is not necessarily wider in 3D. We found a very strong content dependency in the differences in density and location of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and fixation durations overall shorter when observers viewed the 3D stereoscopic version.
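
    The 2D-vs-3D comparison above rests on standard oculomotor metrics. A minimal sketch of extracting them from raw gaze samples with a simple velocity-threshold (I-VT) classifier; the 100 Hz rate, the synthetic one-saccade trace, and the 30 deg/s threshold are illustrative assumptions, not the study's protocol:

```python
import numpy as np

t = np.arange(0, 1.0, 0.01)              # 100 Hz timestamps (s)
gaze = np.zeros((len(t), 2))             # gaze position in degrees of visual angle
gaze[50:] = [10.0, 0.0]                  # one abrupt 10-degree shift (a saccade)

# Inter-sample angular velocity (deg/s), then velocity-threshold classification.
dt = np.diff(t)
vel = np.linalg.norm(np.diff(gaze, axis=0), axis=1) / dt
is_saccade = vel > 30.0

# Everything below threshold counts toward fixation time in this toy trace.
fix_duration = (len(vel) - is_saccade.sum()) * 0.01
print(f"saccade samples: {is_saccade.sum()}, fixation time: {fix_duration:.2f} s")
```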

  11. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models; the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the
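
    The quantities such a solid model recomputes as the shape changes (segment mass, CM position, moments of inertia) can be illustrated by direct summation over a voxelised solid; here a uniform-density cylinder stands in for a fitted body segment, and the density and dimensions are made up for illustration:

```python
import numpy as np

density = 1000.0                          # kg/m^3, uniform (embedded lungs would vary this)
dx = 0.01                                 # voxel edge length (m)
xs = np.arange(-0.05, 0.051, dx)
zs = np.arange(0.0, 0.4, dx)
X, Y, Z = np.meshgrid(xs, xs, zs, indexing="ij")

# Cylindrical "segment": radius 5 cm, length 40 cm (tiny margin avoids
# float jitter at the boundary).
inside = X**2 + Y**2 <= 0.05**2 + 1e-12

dm = density * dx**3 * inside             # per-voxel masses
mass = dm.sum()
cm = np.array([(X * dm).sum(), (Y * dm).sum(), (Z * dm).sum()]) / mass

# Moment of inertia about the vertical axis through the CM
# (the transverse moments follow the same pattern).
Izz = (dm * ((X - cm[0])**2 + (Y - cm[1])**2)).sum()
print(mass, cm, Izz)
```

    Expanding the boundary or embedding a low-density volume just changes `inside` and `dm`; mass, CM, and inertia are then re-summed immediately, which is the interactivity the abstract describes.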

  12. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  17. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  18. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  19. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Therefore, simulation studies using spherical targets are conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare, by volume rendering, the segmented shape and area of the cerebral cortex obtained by the K-Means method and by the proposed method. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method. PMID:22948355
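
    A minimal sketch of the GMM half of the hybrid method on synthetic 1D intensities (the kernel-density refinement is omitted), evaluated with Tanimoto's similarity as in the abstract; the two intensity distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "PET" intensities: cold background vs hot uptake region
# (an illustrative stand-in for voxel values in a microPET volume).
truth = np.zeros(1000, dtype=bool); truth[:200] = True
x = np.where(truth, rng.normal(5.0, 0.5, 1000), rng.normal(1.0, 0.5, 1000))

# Two-component 1D Gaussian mixture fitted by plain EM.
mu, sd, w = np.array([0.0, 4.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):
    pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)         # E-step: responsibilities
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n            # M-step: means
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    w = n / len(x)

hot = r[:, np.argmax(mu)] > 0.5                      # segment the hot class

# Tanimoto similarity (intersection over union) against the known truth.
tanimoto = (hot & truth).sum() / (hot | truth).sum()
print(f"Tanimoto similarity: {tanimoto:.3f}")
```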

  20. An automated image-based method of 3D subject-specific body segment parameter estimation for kinetic analyses of rapid movements.

    PubMed

    Sheets, Alison L; Corazza, Stefano; Andriacchi, Thomas P

    2010-01-01

    Accurate subject-specific body segment parameters (BSPs) are necessary to perform kinetic analyses of human movements with large accelerations, or no external contact forces or moments. A new automated topographical image-based method of estimating segment mass, center of mass (CM) position, and moments of inertia is presented. Body geometry and volume were measured using a laser scanner, then an automated pose and shape registration algorithm segmented the scanned body surface, and identified joint center (JC) positions. Assuming the constant segment densities of Dempster, thigh and shank masses, CM locations, and moments of inertia were estimated for four male subjects with body mass indexes (BMIs) of 19.7-38.2. The subject-specific BSPs were compared with those determined using Dempster and Clauser regression equations. The influence of BSP and BMI differences on knee and hip net forces and moments during a running swing phase were quantified for the subjects with the smallest and largest BMIs. Subject-specific BSPs for 15 body segments were quickly calculated using the image-based method, and total subject masses were overestimated by 1.7-2.9%. When compared with the Dempster and Clauser methods, image-based and regression-estimated thigh BSPs varied more than the shank parameters. Thigh masses and hip JC to thigh CM distances were consistently larger, and each transverse moment of inertia was smaller using the image-based method. Because the shank had larger linear and angular accelerations than the thigh during the running swing phase, shank BSP differences had a larger effect on calculated intersegmental forces and moments at the knee joint than thigh BSP differences did at the hip. It was the net knee kinetic differences caused by the shank BSP differences that were the largest contributors to the hip variations. Finally, BSP differences produced larger kinetic differences for the subject with larger segment masses, suggesting that parameter accuracy is more

  1. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
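
    The PCA motion model described above can be sketched in a few lines of numpy: DVFs from registering the reference phase to the other 4DCT phases are flattened into columns, PCA yields spatial eigenvectors and per-phase eigen-coefficients, and updated coefficients rebuild a DVF. The data here are random placeholders, not real displacement fields:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_phases = 3000, 10            # e.g. 1000 voxels x 3 displacement components
dvfs = rng.normal(size=(n_voxels, n_phases))

# PCA of the mean-subtracted DVFs via SVD.
mean_dvf = dvfs.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)

k = 3                                    # keep the leading motion modes
eigvecs = U[:, :k]                       # spatial eigenvectors
coeffs = s[:k, None] * Vt[:k, :]         # eigen-coefficients, one column per phase

# "Tuning" step: given updated coefficients (here phase 0's, for illustration),
# synthesise the corresponding DVF.
new_dvf = mean_dvf.ravel() + eigvecs @ coeffs[:, 0]
print(new_dvf.shape)
```

    In the full method the coefficients would be optimised against the EPID projection rather than taken from a training phase; the reconstruction step is the same multiply-and-add shown on the last line.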

  3. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared with ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind velocity along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement the acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
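
    The RBF parameterisation above can be sketched as a least-squares fit of Gaussian basis weights to point observations; the actual method instead constrains the weights with acoustic travel times along UAV-to-microphone ray paths. The basis centres, width, and synthetic temperature field below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# 3x3 grid of Gaussian RBF centres over a unit-square slice of atmosphere.
centers = np.array([[x, y] for x in (0.0, 0.5, 1.0) for y in (0.0, 0.5, 1.0)])
width = 0.4

def rbf_matrix(points):
    """Evaluate every Gaussian basis function at every point."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

# Synthetic "measurements" of a smooth temperature field at scattered sensors.
sensors = rng.uniform(0, 1, size=(40, 2))
temp_obs = 15.0 + 2.0 * sensors[:, 0] - 1.0 * sensors[:, 1]

# Fit the RBF weights by least squares, then evaluate at an unobserved point.
weights, *_ = np.linalg.lstsq(rbf_matrix(sensors), temp_obs, rcond=None)
print(rbf_matrix(np.array([[0.25, 0.75]])) @ weights)
```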

  4. Estimation of the maximum allowable loading amount of COD in Luoyuan Bay by a 3-D COD transport and transformation model

    NASA Astrophysics Data System (ADS)

    Wu, Jialin; Li, Keqiang; Shi, Xiaoyong; Liang, Shengkang; Han, Xiurong; Ma, Qimin; Wang, Xiulin

    2014-08-01

    The rapid economic and social developments in the Luoyuan and Lianjiang counties of Fujian Province, China, raise certain environmental and ecosystem issues. Unusual phytoplankton blooms and eutrophication, for example, have increased in severity in Luoyuan Bay (LB). The constant increase of nutrient loads has largely caused the environmental degradation in LB. Several countermeasures have been implemented to solve these environmental problems. The most effective of these strategies is the reduction of pollutant loadings into the sea in accordance with total pollutant load control (TPLC) plans. A combined three-dimensional hydrodynamic transport-transformation model was constructed to estimate the marine environmental capacity of chemical oxygen demand (COD). The maximum allowable loadings for each discharge unit in LB were calculated with applicable simulation results. The simulation results indicated that the environmental capacity of COD is approximately 11×10^4 t year^-1 when the water quality complies with the marine functional zoning standards for LB. A pollutant reduction scheme to diminish the present levels of mariculture- and domestic-based COD loadings is based on the estimated marine COD environmental capacity. The obtained values imply that the LB waters could comply with the targeted water quality criteria. To meet the revised marine functional zoning standards, discharge loadings from discharge units 1 and 11 should be reduced to 996 and 3236 t year^-1, respectively.

  5. Using strain parameters from 3D restoration modelling to estimate distant off-fault gold potentials, Mount Pleasant Area, Western Australia

    NASA Astrophysics Data System (ADS)

    Kakurina, M.; Mejia-Herrera, P.; Royer, J. J.

    2015-12-01

    Gold deposits are usually related to fault systems that control metal transport and accumulation through relatively highly permeable discontinuous structures. However, some coeval gold deposits occur at locations far from the main faults. In this case, the permeability of the rock mass is caused by internal damage developed during a deformation event. It is possible to model such development using restoration tools and, consequently, to estimate the strain tensor that measures the deformation. This contribution may provide an explanation of such off-fault gold deposits, and certain deformation parameters may be used for new targeting in exploration surveys. In the present research, two SKUA-GOCAD restoration methods were applied to the gold-rich Mount Pleasant area in Western Australia and then compared. One of the restoration methods, provided by the RestorationLab plugin, is based on the Finite Element method and requires a geomechanical model of the area. The other method, GeoChron, is based on the transformation of present-day coordinates to a new curvilinear coordinate system of the depositional time. The resulting strain tensors of both methods were used to calculate deformation attributes, which together with dilation were studied to estimate their correlation with known gold occurrences using a logistic regression function. Some of the attributes obtained by the RestorationLab approach show a higher probability of gold occurrence; however, the highest correlation with the gold occurrences was achieved with the gradient of the deformation attribute obtained by GeoChron.
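
    The final correlation step above can be sketched as a logistic regression of gold occurrence on deformation attributes. The attribute names, the synthetic data, and the plain gradient-ascent fit are all illustrative assumptions, not the study's actual workflow:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for restoration-derived attributes at n sample locations.
n = 500
dilation = rng.normal(size=n)
strain_grad = rng.normal(size=n)        # "gradient of the deformation" attribute

# Simulate gold occurrences driven mostly by the strain-gradient attribute.
logit_true = -1.0 + 2.5 * strain_grad + 0.3 * dilation
gold = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_true))

# Logistic regression (intercept + two attributes) via gradient ascent
# on the average log-likelihood.
X = np.column_stack([np.ones(n), dilation, strain_grad])
beta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (gold - p) / n

# The attribute with the larger fitted weight correlates best with occurrences.
print({"dilation": beta[1], "strain_gradient": beta[2]})
```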

  6. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  7. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  8. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  9. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display was a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.
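
    Once a phone's pose in the room frame has been estimated from the markers, drawing reduces to rigidly mapping points from the phone frame into the shared space and appending them as polyline vertices. A toy sketch with an invented pen-tip offset and made-up poses (a yaw-only rotation stands in for a full camera pose):

```python
import numpy as np

def pose_matrix(yaw, t):
    """Toy phone pose: rotation about the vertical axis plus a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R, np.asarray(t, dtype=float)

tip_phone = np.array([0.0, 0.07, 0.0])   # hypothetical pen tip in the phone frame (m)
stroke = []                              # shared 3D polyline, room coordinates

# As the phone moves, each sampled pose contributes one stroke vertex.
for i in range(5):
    R, t = pose_matrix(yaw=0.1 * i, t=[0.1 * i, 0.0, 1.2])
    stroke.append(R @ tip_phone + t)     # phone-frame point expressed in room frame

stroke = np.array(stroke)
print(stroke.shape)                      # five vertices of the drawn 3D line
```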

  10. A joint data assimilation system (Tan-Tracker) to simultaneously estimate surface CO2 fluxes and 3-D atmospheric CO2 concentrations from observations

    NASA Astrophysics Data System (ADS)

    Tian, X.; Xie, Z.; Liu, Y.; Cai, Z.; Fu, Y.; Zhang, H.; Feng, L.

    2014-12-01

    We have developed a novel framework ("Tan-Tracker") for assimilating observations of atmospheric CO2 concentrations, based on the POD-based (proper orthogonal decomposition) ensemble four-dimensional variational data assimilation method (PODEn4DVar). The high flexibility and high computational efficiency of the PODEn4DVar approach allow us to include both the atmospheric CO2 concentrations and the surface CO2 fluxes in the large state vector to be simultaneously estimated from assimilation of atmospheric CO2 observations. Compared to most modern top-down flux inversion approaches, in which only surface fluxes are considered as control variables, one major advantage of our joint data assimilation system is that, in principle, no assumption of perfect transport models is needed. In addition, the ability of Tan-Tracker to use a complete dynamic model to consistently describe the time evolution of CO2 surface fluxes (CFs) and atmospheric CO2 concentrations makes better use of observational information, recycling the analyses at each assimilation step to improve the forecasts for the following assimilations. An experimental Tan-Tracker system has been built on a complete augmented dynamical model, in which (1) the surface-atmosphere CO2 exchanges are prescribed by using a persistent forecasting model for the scaling factors of the first-guess net CO2 surface fluxes and (2) the atmospheric CO2 transport is simulated by using the GEOS-Chem three-dimensional global chemistry transport model. Observing system simulation experiments (OSSEs) for assimilating synthetic in situ observations of surface CO2 concentrations are carefully designed to evaluate the effectiveness of the Tan-Tracker system. In particular, detailed comparisons are made with its simplified version (referred to as TT-S) with only CFs taken as the prognostic variables. It is found that our Tan-Tracker system is capable of outperforming TT-S with higher assimilation
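    The core idea of the augmented state vector can be illustrated with a toy scalar example. Everything below is a deliberate simplification, not the Tan-Tracker algorithm: a one-cell "transport model", a persistence forecast for a single flux scaling factor, and an ensemble Kalman-style analysis standing in for PODEn4DVar. The point it demonstrates is that observing only the concentration still updates the unobserved flux scaling factor, via the ensemble cross-covariance between the two components of the augmented state.

```python
import random
import statistics as st

def forecast(state, f0=1.0, dt=1.0):
    """Toy augmented dynamical model: persistence for the flux scaling
    factor; a one-cell 'transport' adds scale * f0 * dt to the CO2
    concentration."""
    scale, conc = state
    return (scale, conc + scale * f0 * dt)

def analysis_mean(ens, y_obs, r_var):
    """Kalman-style update of the ensemble mean of the augmented state
    [scale, conc], observing only the concentration. The cross-covariance
    between scale and concentration lets the observation correct the
    unobserved flux scaling factor."""
    scales = [s for s, _ in ens]
    concs = [c for _, c in ens]
    s_mean, c_mean = st.fmean(scales), st.fmean(concs)
    var_c = st.pvariance(concs)
    cov_sc = st.fmean([(s - s_mean) * (c - c_mean)
                       for s, c in zip(scales, concs)])
    innov = y_obs - c_mean
    gain_s = cov_sc / (var_c + r_var)   # gain for the flux scaling factor
    gain_c = var_c / (var_c + r_var)    # gain for the concentration
    return (s_mean + gain_s * innov, c_mean + gain_c * innov)

rng = random.Random(0)
truth = (1.5, 400.0)            # true scaling factor and initial CO2 (ppm)
y_obs = forecast(truth)[1]      # noiseless synthetic observation
# Ensemble of augmented states with an uncertain prior scaling factor.
ens = [forecast((rng.gauss(1.0, 0.3), 400.0)) for _ in range(200)]
scale_a, conc_a = analysis_mean(ens, y_obs, r_var=0.01)
```

    The analysis pulls the flux scaling factor from its prior mean of about 1.0 toward the truth of 1.5, even though only the concentration was observed.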

  11. Piecewise-rigid 2D-3D registration for pose estimation of snake-like manipulator using an intraoperative x-ray projection

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Kutzer, M. D.; Taylor, R. H.; Armand, M.

    2014-03-01

    Background: Snake-like dexterous manipulators may offer significant advantages in minimally invasive surgery in areas not reachable with conventional tools. Precise control of a wire-driven manipulator is challenging due to factors such as cable deformation and unknown internal (cable friction) and external forces, thus requiring intraoperative correction of the calibration by determining the actual pose of the manipulator. Method: A method is presented for simultaneously estimating the pose and kinematic configuration of a piecewise-rigid object such as a snake-like manipulator from a single x-ray projection. The method parameterizes the kinematics using a small number of variables (e.g., 5) and optimizes them simultaneously with the 6 degree-of-freedom pose parameters of the base link, using an image similarity measure between digitally reconstructed radiographs (DRRs) of the manipulator's attenuation model and the real x-ray projection. Result: Simulation studies assumed various geometric magnifications (1.2-2.6) and out-of-plane angulations (0°-90°) in a scenario of hip osteolysis treatment, and demonstrated a median joint angle error of 0.04° (for 2.0 magnification, +/-10° out-of-plane rotation). Average computation time was 57.6 sec with 82,953 function evaluations on a mid-range GPU. The joint angle error remained lower than 0.07° while the out-of-plane rotation was 0°-60°. An experiment using video images of a real manipulator demonstrated a similar trend to the simulation study, except for slightly larger error around the tip, attributed to accumulation of errors induced by deformation around each joint not modeled with a simple pin joint. Conclusions: The proposed approach enables high-precision tracking of a piecewise-rigid object (i.e., a series of connected rigid structures) using a single projection image by incorporating prior knowledge about the shape and kinematic behavior of the object (e.g., each rigid structure connected by a pin joint parameterized by a
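    The simultaneous pose-plus-kinematics optimization can be caricatured with a planar two-link chain: a 1D "projection" of the chain plays the role of the DRR, and an exhaustive search over a base angle and a joint angle maximizes an image-similarity score (negative sum of squared differences). All names, the grid search, and the similarity measure are illustrative assumptions; the paper optimizes a full 6-DoF base pose plus several kinematic variables against a real DRR-based similarity metric.

```python
import math
from itertools import product

def project_chain(base, joint, link=1.0):
    """Project a planar 2-link chain to x-coordinates (a stand-in for a
    DRR): returns the x positions of the two link endpoints."""
    x1 = link * math.cos(base)
    x2 = x1 + link * math.cos(base + joint)
    return (x1, x2)

def register(observed, grid):
    """Exhaustive search over (base, joint) maximizing similarity, here
    the negative SSD between projected and observed endpoint positions."""
    def score(params):
        pred = project_chain(*params)
        return -sum((p - o) ** 2 for p, o in zip(pred, observed))
    return max(product(grid, grid), key=score)

truth = (0.3, 0.5)                       # base and joint angles (rad)
observed = project_chain(*truth)         # the 'x-ray projection'
grid = [i * 0.1 for i in range(11)]      # candidate angles 0.0 .. 1.0 rad
base_hat, joint_hat = register(observed, grid)
```

    A practical registration would replace the grid with a continuous optimizer (the paper reports tens of thousands of similarity evaluations on a GPU), but the structure of the objective is the same.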

  12. Estimation of pulmonary arterial volume changes in the normal and hypertensive fawn-hooded rat from 3D micro-CT data

    NASA Astrophysics Data System (ADS)

    Molthen, Robert C.; Wietholt, Christian; Haworth, Steven T.; Dawson, Christopher A.

    2002-04-01

    In the study of pulmonary vascular remodeling, much can be learned from observing the morphological changes undergone in the pulmonary arteries of the rat lung when exposed to chronic hypoxia or other challenges which elicit a remodeling response. Remodeling effects include thickening of vessel walls, and loss of wall compliance. Morphometric data can be used to localize the hemodynamic and functional consequences. We developed a CT imaging method for measuring the pulmonary arterial tree over a range of pressures in rat lungs. X-ray micro-focal isotropic volumetric imaging of the arterial tree in the intact rat lung provides detailed information on the size, shape and mechanical properties of the arterial network. In this study, we investigate the changes in arterial volume with step changes in pressure for both normoxic and hypoxic Fawn-Hooded (FH) rats. We show that FH rats exposed to hypoxia tend to have reduced arterial volume changes for the same preload when compared to FH controls. A secondary objective of this work is to quantify various phenotypes to better understand the genetic contribution of vascular remodeling in the lungs. This volume estimation method shows promise in high throughput phenotyping, distinguishing differences in the pulmonary hypertensive rat model.

  13. Precambrian Basement Surface Estimation using Coupled 3D Modeling of Gravity and Aeromagnetic Data in Southeastern Wisconsin and Fond Du Lac County

    NASA Astrophysics Data System (ADS)

    Skalbeck, J.; Koski, A. J.

    2011-12-01

    Increased concerns about groundwater resources in Wisconsin have brought about the need for better understanding of the subsurface geologic structure, which leads to the development of conceptual hydrogeologic models for numerical simulation of groundwater flow. Models are often based on sparse data from well logs that are usually located large distances apart and limited in depth. Model assumptions based on limited spatial data typically require simplifications that may add uncertainty to the simulation results and the accuracy of a groundwater model. This research provides another tool for the groundwater modeler to better constrain the conceptual model of a hydrogeologic system. The area near the Waukesha Fault in southeastern Wisconsin provides an excellent research opportunity for our proposed approach because of the strong gravity and aeromagnetic anomalies associated with the fault, the apparent complexity in fault geometry, and the uncertainty in Precambrian basement depth and structure. The Precambrian basement surface throughout Fond du Lac County is known to be highly undulating, and this uneven basement topography controls water well yields and zones of stagnant water. Therefore, an accurate estimate of the basement topography in Fond du Lac County is vital to determining groundwater flow and groundwater quality in this region.

  14. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  16. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known to mitigate the urban heat island effect and heat-related health issues by reducing air and surface temperatures. Beyond the amount of canopy area, however, little is known about what kinds of spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high-spatial-resolution aerial imagery. Using regression analysis, we apply an empirical approach to find the relationship between the land surface temperature and different sets of variables that describe the spatial patterns and structures of various urban features, including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results of the study suggest that urban tree planting is an effective and viable solution for mitigating urban heat, both by increasing the variance of the urban surface and through the evaporative cooling effect.

  17. A joint data assimilation system (Tan-Tracker) to simultaneously estimate surface CO2 fluxes and 3-D atmospheric CO2 concentrations from observations

    NASA Astrophysics Data System (ADS)

    Tian, X.; Xie, Z.; Liu, Y.; Cai, Z.; Fu, Y.; Zhang, H.; Feng, L.

    2013-09-01

    To quantitatively estimate CO2 surface fluxes (CFs) from atmospheric observations, a joint data assimilation system ("Tan-Tracker") is developed by incorporating a joint data assimilation framework into the GEOS-Chem atmospheric transport model. In Tan-Tracker, we choose an identity operator as the CF dynamical model to describe the CFs' evolution, which constitutes an augmented dynamical model together with the GEOS-Chem atmospheric transport model. In this case, the large state vector made up of CFs and CO2 concentrations is taken as the prognostic variable for the augmented dynamical model, and thus both CO2 concentrations and CFs are jointly assimilated by using the atmospheric observations (e.g., in situ observations or satellite measurements). In contrast, in traditional joint data assimilation frameworks, CFs are usually treated as model parameters and form a state-parameter augmented vector jointly with CO2 concentrations. The absence of a CF dynamical model results in a large waste of observed information, since any information useful for improving the CFs that is obtained in the current data assimilation procedure cannot be used in the next assimilation cycle. Observing system simulation experiments (OSSEs) are carefully designed to evaluate the Tan-Tracker system in comparison to its simplified version (referred to as TT-S), in which only CFs are taken as the prognostic variables. It is found that our Tan-Tracker system is capable of outperforming TT-S, with higher assimilation precision for both CO2 concentrations and CO2 fluxes, mainly due to the simultaneous assimilation of CO2 concentrations and CFs in our Tan-Tracker data assimilation system.

  18. To Gaze or Not to Gaze: Visual Communication in Eastern Zaire. Sociolinguistic Working Paper Number 87.

    ERIC Educational Resources Information Center

    Blakely, Thomas D.

    The nature of gazing at someone or something, as a form of communication among the Bahemba people in eastern Zaire, is analyzed across a range of situations. Variations of steady gazing, a common eye contact routine, are outlined, including: (1) negative non-gazing or glance routines, especially in situations in which gazing would ordinarily…

  19. Eye Gaze in Creative Sign Language

    ERIC Educational Resources Information Center

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  20. Teachers' Responses to Children's Eye Gaze

    ERIC Educational Resources Information Center

    Doherty-Sneddon, Gwyneth; Phelps, Fiona G.

    2007-01-01

    When asked questions, children often avert their gaze. Furthermore, the frequency of such gaze aversion (GA) is related to the difficulty of cognitive processing, suggesting that GA is a good indicator of children's thinking and comprehension. However, little is known about how teachers detect and interpret such gaze signals. In Study 1 teaching…

  1. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation makes high dynamic range 3D imaging effective. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for performing high-quality 3D imaging of both highly and lowly reflective surfaces. PMID:27607639

  2. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  3. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001)) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  4. SB3D User Manual, Santa Barbara 3D Radiative Transfer Model

    SciTech Connect

    O'Hirok, William

    1999-01-01

    SB3D is a three-dimensional atmospheric and oceanic radiative transfer model for the solar spectrum. The microphysics employed in the model are the same as those used in the model SBDART. It is assumed that the user of SB3D is familiar with SBDART and IDL. SB3D differs from SBDART in that computations are conducted on media in three dimensions rather than a single column (i.e., plane-parallel), and a stochastic method (Monte Carlo) is employed instead of a numerical approach (discrete ordinates) for estimating a solution to the radiative transfer equation. Because of these two differences between SB3D and SBDART, the input and running of SB3D are more unwieldy and require compromises between model performance and computational expense. Hence, there is no single correct method for running the model, and the user must develop a sense of the proper input and configuration of the model.
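    The stochastic approach can be illustrated with the simplest possible Monte Carlo radiative transfer estimate: direct-beam transmittance through a homogeneous column of optical depth tau, obtained by sampling exponentially distributed extinction path lengths. This is a toy sketch of the method class, not SB3D's algorithm, which traces scattered photon paths through a full 3D medium.

```python
import math
import random

def mc_transmittance(tau, n=100_000, seed=42):
    """Monte Carlo estimate of direct-beam transmittance exp(-tau):
    a photon traverses the column if its sampled extinction optical
    path (exponentially distributed, mean 1) exceeds the column's
    optical depth."""
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n) if rng.expovariate(1.0) > tau)
    return escaped / n

t = mc_transmittance(1.0)  # should approach exp(-1) ≈ 0.368
```

    The statistical error shrinks only as 1/sqrt(n), which is why Monte Carlo radiative transfer trades computational expense for the geometric generality that plane-parallel solvers lack.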

  5. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  7. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates the camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  8. Auto-focusing method for remote gaze tracking camera

    NASA Astrophysics Data System (ADS)

    Lee, Won Oh; Lee, Hyeon Chang; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2012-06-01

    Gaze tracking determines what a user is looking at; the key challenge is to obtain well-focused eye images. This is not easy because the human eye is very small, whereas the required resolution of the image should be large enough for accurate detection of the pupil center. In addition, capturing a user's eye image with a remote gaze tracking system over a large working volume at a long Z distance requires a panning/tilting mechanism with a zoom lens, which makes it more difficult to acquire focused eye images. To solve this problem, a new auto-focusing method for remote gaze tracking is proposed. The proposed approach is novel in the following four ways. First, it is the first research on an auto-focusing method for a remote gaze tracking system. Second, by using user-dependent calibration at the initial stage, it overcomes the weakness of previous methods that estimate the Z distance between the user and the camera from the facial width in the captured image, which varies from person to person. Third, the parameters of the modeled formula for estimating the Z distance are adaptively updated using the least squares regression method, so the focus becomes more accurate over time. Fourth, the relationship between the parameters and the face width is fitted locally according to the Z distance instead of by global fitting, which enhances the accuracy of Z distance estimation. The results of an experiment with 10,000 images of 10 persons showed that the mean absolute error between the ground-truth Z distance measured by a Polhemus Patriot device and that estimated by the proposed method was 4.84 cm. A total of 95.61% of the images obtained by the proposed method were focused and could be used for gaze detection.

  9. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  10. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  11. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  12. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  13. Mixed body- and gaze-centered coding of proprioceptive reach targets after effector movement.

    PubMed

    Mueller, Stefanie; Fiehler, Katja

    2016-07-01

    Previous studies demonstrated that an effector movement intervening between encoding and reaching to a proprioceptive target determines the underlying reference frame: proprioceptive reach targets are represented in a gaze-independent reference frame if no movement occurs but are represented with respect to gaze after an effector movement (Mueller and Fiehler, 2014a). The present experiment explores whether an effector movement leads to a switch from a gaze-independent, body-centered reference frame to a gaze-dependent reference frame or whether a gaze-dependent reference frame is employed in addition to a gaze-independent, body-centered reference frame. Human participants were asked to reach in complete darkness to an unseen finger (proprioceptive target) of their left target hand indicated by a touch. They completed two conditions in which the target hand either remained stationary at the target location (stationary condition) or was actively moved to the target location, received a touch, and was moved back before reaching to the target (moved condition). We dissociated the location of the movement vector relative to the body midline and to the gaze direction. Using correlation and regression analyses, we estimated the contribution of each reference frame based on horizontal reach errors in the stationary and moved conditions. Gaze-centered coding was only found in the moved condition, replicating our previous results. Body-centered coding dominated in the stationary condition, while body- and gaze-centered coding contributed equally strongly in the moved condition. Our results indicate a shift from body-centered to combined body- and gaze-centered coding due to an effector movement before reaching towards proprioceptive targets.

  14. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread though the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift.
Several talks were devoted to reporting recent observations with newly

  15. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  16. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  18. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations were subsequently neglected, and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matchable to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]). PMID:26440264

  19. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

Patient-specific pretreatment measurement for IMRT and VMAT QA should preferably give information with high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans where the difference between measured and calculated dose distributions exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements, and the results of the measurement evaluation need a clinical interpretation. A number of commercial dosimetry systems are designed for pretreatment IMRT QA measurements: 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIUS® 1500 (PTW); 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos); and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDose™ (Sun Nuclear) and Dosimetry Check™ (Math Resolutions). None of these dosimetry systems can measure the 3D dose distribution with high resolution (a full 3D dose distribution); they can therefore be called quasi-3D dosimetry systems. To estimate the delivered dose in full 3D, the user depends on a calculation algorithm in the software of the dosimetry system. All of the vendors mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analysis of the difference between measured and calculated dose distributions in DVHs of the structures of clinical interest, which facilitates the clinical interpretation and is a promising tool for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of these algorithms are scarce. Pretreatment IMRT QA using the quasi-3D dosimetry systems mentioned above depends on both the measurement uncertainty and the accuracy of the calculation algorithms. In this article, these quasi-3D dosimetry systems and their use in patient specific pretreatment IMRT

  2. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest has been generated in capturing three-dimensional motion picture scenes. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  3. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  4. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains a difficult task, and not only for novice radiologists. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly by missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three-dimensional space using a given projection matrix. To counteract the errors in the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
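The 2D-to-3D transfer described above hinges on backprojecting the image point through the projection matrix into a viewing ray, which can then be intersected with the vessel model. A minimal sketch of that backprojection step (an illustrative helper, not the authors' implementation), assuming a finite projection matrix partitioned as P = [M | p4]:

```python
import numpy as np

def backproject_ray(P, uv):
    """Return the camera centre and unit ray direction for pixel (u, v).

    P is a 3x4 finite projection matrix partitioned as [M | p4];
    every 3D point projecting to (u, v) lies on this ray.
    """
    M, p4 = P[:, :3], P[:, 3]
    centre = -np.linalg.solve(M, p4)            # camera centre C (P @ [C;1] = 0)
    direction = np.linalg.solve(M, np.array([uv[0], uv[1], 1.0]))
    return centre, direction / np.linalg.norm(direction)

# sanity check: a 3D point projected by P must lie on the recovered ray
P = np.hstack([np.diag([800.0, 800.0, 1.0]), np.array([[0.1], [0.0], [0.5]])])
X = np.array([0.2, -0.1, 3.0])
x = P @ np.append(X, 1.0)
C, d = backproject_ray(P, x[:2] / x[2])
```

Intersecting this ray with the 3D vessel model (after the 2D motion compensation, and weighted by the paper's statistical framework) then yields the tip estimate.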

  5. Infants understand the referential nature of human gaze but not robot gaze.

    PubMed

    Okumura, Yuko; Kanakogi, Yasuhiro; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji

    2013-09-01

    Infants can acquire much information by following the gaze direction of others. This type of social learning is underpinned by the ability to understand the relationship between gaze direction and a referent object (i.e., the referential nature of gaze). However, it is unknown whether human gaze is a privileged cue for information that infants use. Comparing human gaze with nonhuman (robot) gaze, we investigated whether infants' understanding of the referential nature of looking is restricted to human gaze. In the current study, we developed a novel task that measured by eye-tracking infants' anticipation of an object from observing an agent's gaze shift. Results revealed that although 10- and 12-month-olds followed the gaze direction of both a human and a robot, only 12-month-olds predicted the appearance of objects from referential gaze information when the agent was the human. Such a prediction for objects reflects an understanding of referential gaze. Our study demonstrates that by 12 months of age, infants hold referential expectations specifically from the gaze shift of humans. These specific expectations from human gaze may enable infants to acquire various information that others convey in social learning and social interaction.

  6. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective pixel identification and implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
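The directional-averaging idea behind the 3D noise model can be illustrated with a much-simplified sketch (not the NVESD reference code; only two of the standard components are shown, without the finite sampling correction):

```python
import numpy as np

def basic_3d_noise(cube):
    """Split a (frames, rows, cols) uniform-scene cube into two noise terms.

    sigma_tvh: random spatio-temporal noise (varies in t, v and h)
    sigma_vh:  fixed-pattern spatial noise of the time-averaged frame
    """
    t_mean = cube.mean(axis=0)                  # average over frames
    sigma_tvh = np.std(cube - t_mean)           # what temporal averaging removes
    sigma_vh = np.std(t_mean - t_mean.mean())   # residual fixed pattern
    return sigma_tvh, sigma_vh
```

Evaluating the same statistics over a smaller sub-cube slid across the image gives the noise as a function of position, as the correspondence describes, at the cost of wider confidence intervals from the reduced sampling.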

  7. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  8. Image compression and decompression based on gazing area

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Endo, Chizuko; Haneishi, Hideaki; Miyake, Yoichi

    1996-04-01

In this paper, we introduce a new data compression and decompression technique for retrieving a target image based on the gazing area of the image. Many data compression methods have been proposed; in particular, the JPEG compression technique has been widely used as a standard. However, this method is not always effective for retrieving target images from an image filing system. In a previous paper, through eye movement analysis, we found that images have a particular gazing area. Since the gazing area is considered the most important region of the image, we use this information to compress and transmit the image. A method named fixation-based progressive image transmission is introduced to transmit the image effectively. In this method, the gazing area is estimated and transmitted first, and the other regions are transmitted afterwards. If we are not interested in the first transmitted image, we can move on to other images; the target image can therefore be retrieved from the filing system effectively. We compare the search time of the proposed method with that of the conventional method. The results show that the proposed method is faster than the conventional one at retrieving the target image.
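The fixation-based progressive transmission described above amounts to ordering image blocks so that those in or near the estimated gazing area are sent first. A small sketch of that ordering step (block centres and gaze coordinates are hypothetical, for illustration only):

```python
import numpy as np

def transmission_order(block_centers, gaze_center):
    """Order image blocks so those nearest the gazing area are sent first."""
    d = np.linalg.norm(np.asarray(block_centers, float) - gaze_center, axis=1)
    return np.argsort(d, kind="stable")

# blocks on a 2x2 grid of 16x16-pixel tiles, gaze near the top-left block
order = transmission_order([(8, 8), (8, 24), (24, 8), (24, 24)],
                           np.array([6.0, 7.0]))
```

Decoding in this order lets a viewer judge from the gazing area alone whether the image is the one sought, before the periphery arrives.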

  9. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  10. 3D stochastic inversion of magnetic data

    NASA Astrophysics Data System (ADS)

    Shamsipour, Pejman; Chouteau, Michel; Marcotte, Denis

    2011-04-01

A stochastic inversion method based on a geostatistical approach is presented to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. Cokriging, the method used in this paper, is an estimation method that minimizes the theoretical estimation error variance by using auto- and cross-correlations of several variables. The covariances for total field, susceptibility, and total field-susceptibility are estimated using the observed data. Then, the susceptibility is cokriged or simulated as the primary variable. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. The algorithm assumes there is no remanent magnetization and that the observed data represent only induced magnetization effects. The method is applied to different synthetic models to demonstrate its suitability for 3D inversion of magnetic data. A case study using ground measurements of total field at the Perseverance mine (Quebec, Canada) is presented. The recovered 3D susceptibility model provides beneficial information that can be used to analyze the geology of the massive sulfide deposit in the domain under study.
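Cokriging generalizes kriging to several cross-correlated variables; the variance-minimizing machinery is easiest to see in the single-variable, zero-mean (simple kriging) case, sketched below with a hypothetical isotropic exponential covariance (the paper's covariances are instead estimated from the observed magnetic data):

```python
import numpy as np

def simple_kriging(coords, values, target,
                   cov=lambda h: np.exp(-h / 10.0)):
    """Simple-kriging estimate at `target` from scattered zero-mean data.

    The weights w solve C w = c0, which minimizes the estimation
    error variance given the covariance model `cov`.
    """
    coords = np.asarray(coords, float)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = cov(h)                                          # data-data covariances
    c0 = cov(np.linalg.norm(coords - target, axis=-1))  # data-target covariances
    w = np.linalg.solve(C, c0)
    return w @ np.asarray(values, float)
```

One consequence of this construction is exact interpolation: at a data location the weights collapse onto that datum, so the estimate reproduces the observed value.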

  11. Crystal ball gazing

    NASA Technical Reports Server (NTRS)

    Gettys, Jim

    1992-01-01

Over the last seven years, the CPU on my desk has increased in speed by two orders of magnitude, from around 1 MIP to more than 100 MIPS; more important is that it is about as fast as any uniprocessor of any type available at any price for compute-bound problems. Memory on the system is also about 100 times as big, while disk is only about 10 times as big. Local network and I/O performance have increased greatly, though not quite at the same rate as processor speed. More important, I will argue, is that the CPU's address space is 64 bits rather than 32 bits, allowing us to rethink some time-honored presumptions. The Internet has gone from a few hundred machines to a million, now spans the entire globe, and wide area networks are becoming commercial services. 'PCs' are now real computers, bringing what was top-of-the-line computing capability to the masses only a few years behind the leading edge. So even a year or two from now, we can anticipate commonplace desktop machines running at speeds of hundreds of MIPS, with main memories in the hundreds of megabytes to a gigabyte, able to draw millions of vectors per second, and all capable of some reasonable 3D graphics. And only a few years later, this will be the $1500 PC. So the 1990s certainly bring: 64-bit processors becoming standard; BIP/BFLOP class uniprocessors; large scale multiprocessors for special purpose applications; I/O as the most significant computer engineering problem; hierarchical data servers in everyday use; routine access to archived data around the world; and what else? What do systems such as those we will have this decade imply for those building data analysis systems today? Many of the presumptions of the 1970s and 1980s need to be reexamined in the light of 1990s technology.

  12. Multibaseline IFSAR for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Ertin, Emre; Moses, Randolph L.; Potter, Lee C.

    2008-04-01

We consider three-dimensional target reconstruction from SAR data collected on multiple complete circular apertures at different elevation angles. The 3-D resolution of circular SAR systems is constrained by two factors: the sparse sampling in elevation and the limited azimuthal persistence of the reflectors in the scene. Three-dimensional target reconstruction with multipass circular SAR data is further complicated by nonuniform elevation spacing in real flight paths and non-constant elevation angle throughout the circular pass. In this paper we first develop parametric spectral estimation methods that extend the standard IFSAR method of height estimation to apertures at more than two elevation angles. Next, we show that linear interpolation of the phase history data leads to unsatisfactory performance in 3-D reconstruction from nonuniformly sampled elevation passes. We then present a new sparsity-regularized interpolation algorithm to preprocess nonuniform elevation samples into a virtual uniform linear array geometry. We illustrate the performance of the proposed method using simulated backscatter data.
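With uniformly spaced elevation passes, extending two-antenna IFSAR to many apertures amounts to spectral estimation along the elevation dimension: a scatterer's height maps to a spatial frequency of the complex returns across passes. A toy nonparametric version of that idea (FFT peak picking for a single dominant scatterer; the paper develops parametric estimators and handles nonuniform spacing, which this sketch does not):

```python
import numpy as np

def elevation_frequency(samples, n_fft=4096):
    """Estimate the normalized elevation frequency (cycles per pass) of a
    single dominant scatterer from complex returns across uniform passes."""
    spectrum = np.abs(np.fft.fft(samples, n_fft))   # zero-padded periodogram
    k = int(np.argmax(spectrum))
    f = k / n_fft
    return f if f <= 0.5 else f - 1.0               # map to [-0.5, 0.5]

# simulated returns from one scatterer at normalized frequency 0.2
passes = np.exp(2j * np.pi * 0.2 * np.arange(8))
f_hat = elevation_frequency(passes)
```

Scaling the recovered frequency by the elevation baseline spacing and imaging geometry would then give a height estimate; with only a handful of passes, the coarse resolution of this periodogram is exactly why the paper turns to parametric methods.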

  13. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory (LST) or PSE-based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.
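The N-factor analysis mentioned above integrates the local spatial growth rate of an instability wave along the marching path. In standard notation (the general definition, not LASTRAC-specific), with $-\alpha_i$ the local growth rate and $A$ the disturbance amplitude:

$$ N(s) \;=\; \ln\frac{A(s)}{A_0} \;=\; \int_{s_0}^{s} \left(-\alpha_i\right)\, ds' $$

where $s_0$ is the neutral point at which the wave first becomes unstable. Transition onset is then correlated with $N$ reaching an empirically calibrated threshold for the flow environment in question.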

  14. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which allows a solid object to be obtained from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because a 3D print is obtained by superposing one layer on the others, it doesn't need any particular workflow; it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  15. Gaze contingent hologram synthesis for holographic head-mounted display

    NASA Astrophysics Data System (ADS)

    Hong, Jisoo; Kim, Youngmin; Hong, Sunghee; Shin, Choonsung; Kang, Hoonjong

    2016-03-01

Development of displays and related technologies provides an immersive visual experience with head-mounted displays (HMDs). However, most available HMDs provide 3D perception only by stereopsis, lacking accommodation depth cues. Recently, the holographic HMD (HHMD) has arisen as one viable option to resolve this problem, because a hologram is known to provide the full set of depth cues, including accommodation. Moreover, by virtue of increasing computational power, hologram synthesis from a 3D object represented by a point cloud can be calculated in real time, even with the rigorous Rayleigh-Sommerfeld diffraction formula. However, in an HMD, rapid gaze changes of the user require a much faster refresh rate, which means that much faster hologram synthesis is indispensable in an HHMD. Because visual acuity falls off in the visual periphery, we propose to accelerate hologram synthesis by varying the density of the point cloud projected on the screen. We classify the screen into multiple layers, concentric circles with different radii whose center is aligned with the user's gaze. A layer with a smaller radius is closer to the region of interest and is therefore assigned a higher density of points. Because the computation time is directly related to the number of points in the point cloud, we can accelerate hologram synthesis by lowering the point density in the visual periphery. A cognitive study reveals that users cannot discriminate this degradation in the visual periphery if the parameters are properly designed. A prototype HHMD system will be provided to verify the feasibility of our method, and a detailed design scheme will be discussed.
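The layered density assignment described above can be sketched as a simple eccentricity-based thinning of the projected point cloud (a simplified sketch with hypothetical radii and retention rates; the actual layer design would follow the cognitive study):

```python
import numpy as np

def foveate_points(points_uv, gaze_uv, radii=(0.1, 0.3),
                   keep=(1.0, 0.4, 0.1), rng=None):
    """Thin a projected point cloud by eccentricity from the gaze point.

    Points inside radii[0] are all kept; outer layers keep a decreasing
    random fraction, cutting hologram synthesis cost in the periphery.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    ecc = np.linalg.norm(points_uv - gaze_uv, axis=1)
    layer = np.searchsorted(radii, ecc)        # 0 = foveal ... 2 = periphery
    p_keep = np.asarray(keep)[layer]
    return points_uv[rng.random(len(points_uv)) < p_keep]
```

Since Rayleigh-Sommerfeld synthesis cost scales with the point count, the speedup is roughly the ratio of total to retained points, at the price of peripheral degradation the user is not expected to notice.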

  16. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies are becoming attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  17. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  18. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  19. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia.

  20. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  1. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain displaying scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays based on emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods that improve visual comfort by introducing depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.
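    The gaze-contingent correction described above can be illustrated with a minimal sketch: given the screen disparity of the fixated feature, both views are shifted so that the fixated depth plane lands at zero disparity, loosely emulating vergence. The function, its parameters, and the sign convention are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def converge_on_gaze(left, right, gaze_disparity):
    """Shift the stereo pair horizontally so the fixated depth plane
    falls at zero screen disparity, loosely emulating vergence.
    gaze_disparity: left-image x minus right-image x of the fixated feature."""
    s = int(round(gaze_disparity / 2))
    return np.roll(left, -s, axis=1), np.roll(right, s, axis=1)

# A feature at 6 px of disparity (left column 9, right column 3) that
# the viewer fixates; after the shift both views place it at column 6.
left = np.zeros((4, 16)); left[:, 9] = 1.0
right = np.zeros((4, 16)); right[:, 3] = 1.0
l2, r2 = converge_on_gaze(left, right, gaze_disparity=6)
```

In a real renderer the shift would be applied per frame from the eye tracker's current vergence estimate rather than a fixed disparity value.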

  2. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    PubMed Central

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  3. Follow my eyes: the gaze of politicians reflexively captures the gaze of ingroup voters.

    PubMed

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention.

  4. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  5. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  6. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
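    The PCA encoding step described above can be sketched generically: flatten each normalized face's XYZ coordinates into a row vector, center the data, and project onto the leading principal directions. This is a plain SVD-based PCA under assumed names; the ICP alignment, deformation, and FLDA stages of the actual SNL3dFace pipeline are omitted.

```python
import numpy as np

def pca_features(faces, k):
    """Project flattened XYZ coordinates of normalized faces onto the
    top-k principal components.  Returns the mean face, the component
    directions, and the k-dimensional feature vector for each face."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k], centered @ vt[:k].T

# 20 synthetic "faces", each 50 XYZ points flattened to 150 values.
rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 150))
mean, components, feats = pca_features(faces, k=5)
```

Feature matching for the similarity matrices would then reduce to distances (or angles) between these k-dimensional feature vectors.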

  7. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  8. A Study on Reducing the Detection Errors of the Center of the Iris for an Eye-Gaze Interface System

    NASA Astrophysics Data System (ADS)

    Yonezawa, Tetsuya; Ogata, Kohichi; Matsumoto, Kohei; Hirase, Suguru; Shiratani, Kazuyuki; Kido, Daisuke; Nishimura, Masashi

    We have developed an eye-gaze interface system. The purpose of this system is to provide an interface that is easy to use and can spread to ordinary people as well as people with motor disabilities of the upper limbs. Our system uses a compact video camera and a Windows PC equipped with a frame grabber. The system detects the eye-gaze position on a computer display through the detection of the center of the iris from a captured eye image. When a user gazes at a peripheral point on the display, eye-gaze detection has slightly poorer accuracy because of the influence of the inner and outer corners of the eye. In this paper, we propose two new procedures to overcome this problem. We show the evaluation of these procedures through comparative experiments on the accuracy of the eye-gaze estimation.
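    As a rough illustration of the iris-center detection such a system relies on, one can threshold the darkest pixels of a grayscale eye image and take their centroid. This is an assumed baseline sketch; the paper's actual detection and corner-compensation procedures are more elaborate.

```python
import numpy as np

def iris_center(eye_img, dark_fraction=0.05):
    """Estimate the iris center as the centroid of the darkest pixels.

    eye_img: 2-D array of grayscale intensities.
    dark_fraction: fraction of pixels treated as iris (assumed parameter).
    """
    thresh = np.quantile(eye_img, dark_fraction)
    ys, xs = np.nonzero(eye_img <= thresh)
    return xs.mean(), ys.mean()

# Synthetic eye image: bright background with a dark iris disc at (30, 20).
img = np.full((48, 64), 200.0)
yy, xx = np.mgrid[0:48, 0:64]
img[(xx - 30) ** 2 + (yy - 20) ** 2 <= 64] = 30.0
cx, cy = iris_center(img)
```

A calibrated mapping from the estimated center to screen coordinates would complete the gaze-position pipeline.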

  9. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface-elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain such as the gaze behavior associated to first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that hopefully matches the user's one. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are found significantly better with sometimes more than 100 percent of accuracy gained. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach, compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high-refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera
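    A schematic way to combine bottom-up and top-down components into a single gaze point is a weighted sum of normalized saliency maps followed by an argmax. This stand-in ignores the paper's surfel-based representation and simulated reflexes; the maps, weight, and names are illustrative assumptions.

```python
import numpy as np

def gaze_point(bottom_up, top_down, w=0.5):
    """Combine a bottom-up saliency map with a top-down relevance map
    and return the most salient pixel as the estimated gaze point."""
    def norm(m):
        # Rescale each map to [0, 1] so the weighting is meaningful.
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    s = w * norm(bottom_up) + (1 - w) * norm(top_down)
    y, x = np.unravel_index(s.argmax(), s.shape)
    return x, y

bu = np.zeros((60, 80)); bu[10, 20] = 1.0   # a flicker draws the eye
td = np.zeros((60, 80)); td[10, 20] = 0.8   # task relevance agrees
td[40, 70] = 0.3                            # a weaker task-relevant spot
x, y = gaze_point(bu, td)
```

In practice the winning location would be smoothed over time to mimic fixations rather than jumping every frame.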

  10. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface-elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain such as the gaze behavior associated to first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that hopefully matches the user's one. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are found significantly better with sometimes more than 100 percent of accuracy gained. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach, compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high-refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera

  11. Long-Range Gaze Tracking System for Large Movements.

    PubMed

    Cho, Dong-Chan; Kim, Whoi-Yul

    2013-12-01

    In vision-based remote gaze tracking systems, the most challenging topics are to allow natural movement of a user and to increase the working volume and distance of the system. Several eye gaze estimation methods considering the natural movement of a user have been proposed. However, their working volume and distance are narrow and close. In this paper, we propose a novel 2-D mapping-based gaze estimation method that allows large movements of the user. Conventional 2-D mapping-based methods utilize a mapping function between calibration points on the screen and pupil center corneal reflection (PCCR) vectors obtained in the user calibration step. However, PCCR vectors and their associated mapping function are only valid at or near the position where the user calibration is performed. The proposed movement mapping function, compensating for the user's movement, estimates scale factors between two PCCR vector sets: one obtained at the user calibration position and another obtained at the new user position. The proposed system targets longer-range gaze tracking, operating from 1.4 to 3 m. A narrow-view camera mounted on a pan and tilt unit is used by the proposed system to capture high-resolution eye images, providing a wide and long working volume of about 100 cm × 40 cm × 100 cm. The experimental results show that the proposed method successfully compensated for the performance loss due to the user's large movements. The average angular error was 0.8°, and the angular error increased by only 0.07° while the user moved about 81 cm. PMID:23751947
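    The 2-D mapping idea can be sketched with a second-order polynomial fitted by least squares, with a single scale factor standing in for the proposed movement mapping function that relates PCCR vectors at a new head position to those at the calibration position. The polynomial form and all names here are assumptions, not the authors' exact formulation.

```python
import numpy as np

def fit_mapping(pccr, screen):
    """Least-squares fit of a second-order polynomial map from
    PCCR vectors (vx, vy) to screen coordinates (sx, sy)."""
    vx, vy = pccr[:, 0], pccr[:, 1]
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    coef, *_ = np.linalg.lstsq(A, screen, rcond=None)
    return coef

def gaze(coef, v, scale=1.0):
    """Map a (possibly rescaled) PCCR vector to a screen point.
    `scale` compensates for head movement before applying the mapping."""
    vx, vy = scale * v[0], scale * v[1]
    row = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return row @ coef

# Calibration: synthetic PCCR vectors with an assumed linear ground truth,
# which the quadratic model can fit exactly.
rng = np.random.default_rng(0)
pccr = rng.uniform(-1, 1, size=(9, 2))
screen = 500.0 * pccr + 300.0
coef = fit_mapping(pccr, screen)
sx, sy = gaze(coef, pccr[0])
```

At a new user position one would estimate `scale` from the ratio of PCCR vector magnitudes and pass it to `gaze` instead of refitting the whole mapping.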

  12. Development of Gaze Aversion in Children.

    ERIC Educational Resources Information Center

    Scheman, Judith D.; Lockard, Joan S.

    1979-01-01

    An observer stared continually at each of 573 children who passed along a definable pathway in a large shopping center. Most infants did not make eye contact with the observer, the majority of toddlers established eye contact but did not gaze avert, and the preponderance of school-age children gaze averted. (Author/JMB)

  13. Gaze Following: Why (Not) Learn It?

    ERIC Educational Resources Information Center

    Triesch, Jochen; Teuscher, Christof; Deak, Gedeon O.; Carlson, Eric

    2006-01-01

    We propose a computational model of the emergence of gaze following skills in infant-caregiver interactions. The model is based on the idea that infants learn that monitoring their caregiver's direction of gaze allows them to predict the locations of interesting objects or events in their environment (Moore & Corkum, 1994). Elaborating on this…

  14. Culture and Listeners' Gaze Responses to Stuttering

    ERIC Educational Resources Information Center

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  15. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
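    Implicit time integration for transient heat conduction, to which TACO3D is limited, can be illustrated in one dimension with a backward-Euler step: each step solves a tridiagonal linear system rather than updating temperatures explicitly. This is a toy finite-difference sketch under assumed boundary conditions, not TACO3D's finite-element formulation.

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt):
    """One backward-Euler step of 1-D heat conduction with the end
    temperatures held fixed (Dirichlet boundaries)."""
    n = len(T)
    r = alpha * dt / dx**2
    A = np.eye(n)                       # boundary rows stay identity
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    return np.linalg.solve(A, T)

# Rod held at 100 deg at one end and 0 deg at the other; march to steady state.
T = np.zeros(11)
T[0] = 100.0
for _ in range(200):
    T = implicit_heat_step(T, alpha=1.0, dx=0.1, dt=0.01)
```

Being implicit, the step is unconditionally stable, so `dt` is chosen for accuracy rather than stability; here the solution relaxes to the expected linear temperature profile.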

  16. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  17. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
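    The registration at the core of such point-cloud alignment can be sketched as iterative closest point (ICP): match each source point to its nearest destination point, then solve for the best rigid transform with the SVD-based Kabsch method. This toy version uses brute-force matching on small clouds and stands in for the general idea, not the authors' pipeline.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour matching, then the
    optimal rigid transform (Kabsch) applied to the source cloud."""
    # Brute-force nearest neighbours (fine for small clouds).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best rotation/translation aligning src to its matches.
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

# Destination cloud on a grid, and a slightly rotated + shifted copy as source.
xs = np.linspace(-1, 1, 4)
zs = np.linspace(-1, 1, 3)
dst = np.array([[x, y, z] for x in xs for y in xs for z in zs])
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.03])
for _ in range(5):
    src = icp_step(src, dst)
```

For clouds of realistic size a k-d tree would replace the brute-force matching, and robust weighting would handle outliers and partial overlap.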

  18. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  19. A new neural net approach to robot 3D perception and visuo-motor coordination

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  20. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would enable law enforcement agents to quickly document and accurately record a crime scene.

  1. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human history, both the Iron Age and the Silicon Age thrived after a mature massive processing technology was developed. Graphene is the most recent superior material, which could potentially initiate another new material age. However, while it is being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. A new technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress minimal during the printing process.

  2. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would enable law enforcement agents to quickly document and accurately record a crime scene.

  3. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167 (excluding the actuators), significantly lower than that of other robotic hands, which have more complex assembly processes.

  4. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  5. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients.

  6. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still needed to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  7. Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection

    PubMed Central

    Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole

    2016-01-01

    Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input, in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 out of 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provided the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
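The fusion idea in the record above can be sketched minimally: each modality assigns a probability to each candidate target, and a fused estimate is taken as their normalized product (a conditional-independence assumption; the function and numbers below are illustrative, not the authors' actual models).

```python
# Hypothetical sketch of probabilistic input fusion for target selection:
# fuse per-target probabilities from two modalities by normalized product.

def fuse(gaze_probs, bci_probs):
    """Fuse two per-target probability lists into one distribution."""
    assert len(gaze_probs) == len(bci_probs)
    joint = [g * b for g, b in zip(gaze_probs, bci_probs)]
    total = sum(joint)
    return [p / total for p in joint]

# Three candidate targets: gaze strongly favours target 0,
# the BCI weakly favours target 1.
gaze = [0.70, 0.20, 0.10]
bci = [0.40, 0.45, 0.15]
fused = fuse(gaze, bci)
print(max(range(3), key=fused.__getitem__))  # the fused estimate selects target 0
```

The product rule is one simple choice; it lets a confident modality dominate while an uncertain one merely reweights.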

  8. DYNA3D. Explicit 3-d Hydrodynamic FEM Program

    SciTech Connect

    Whirley, R.G.; Engelmann, B.E.

    1993-11-30

    DYNA3D is an explicit, three-dimensional, finite element program for analyzing the large deformation dynamic response of inelastic solids and structures. DYNA3D contains 30 material models and 10 equations of state (EOS) to cover a wide range of material behavior. The material models implemented are: elastic, orthotropic elastic, kinematic/isotropic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, Blatz-Ko rubber, high explosive burn, hydrodynamic without deviatoric stresses, elastoplastic hydrodynamic, temperature-dependent elastoplastic, isotropic elastoplastic, isotropic elastoplastic with failure, soil and crushable foam with failure, Johnson/Cook plasticity model, pseudo TENSOR geological model, elastoplastic with fracture, power law isotropic plasticity, strain rate dependent plasticity, rigid, thermal orthotropic, composite damage model, thermal orthotropic with 12 curves, piecewise linear isotropic plasticity, inviscid two invariant geologic cap, orthotropic crushable model, Mooney-Rivlin rubber, resultant plasticity, closed form update shell plasticity, and Frazer-Nash rubber model. The hydrodynamic material models determine only the deviatoric stresses. Pressure is determined by one of 10 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, tabulated, and TENSOR pore collapse. DYNA3D generates three binary output databases. One contains information for complete states at infrequent intervals; 50 to 100 states is typical. The second contains information for a subset of nodes and elements at frequent intervals; 1,000 to 10,000 states is typical. The last contains interface data for contact surfaces.
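Of the equations of state listed, the linear polynomial form is the simplest to sketch. The evaluation below follows the coefficient convention used in DYNA3D-family codes (C0..C6 with the quadratic terms conventionally dropped in tension); the coefficient values themselves are illustrative only.

```python
# Sketch of a linear polynomial EOS in the DYNA3D-family form:
#   P = C0 + C1*mu + C2*mu^2 + C3*mu^3 + (C4 + C5*mu + C6*mu^2) * E
# where mu = rho/rho0 - 1 is the compression and E the internal energy
# per unit reference volume.

def linear_polynomial_eos(mu, E, C):
    c0, c1, c2, c3, c4, c5, c6 = C
    if mu < 0.0:          # tension: drop the mu^2 terms
        c2 = c6 = 0.0
    return (c0 + c1 * mu + c2 * mu**2 + c3 * mu**3
            + (c4 + c5 * mu + c6 * mu**2) * E)

# Ideal-gas-like special case: C4 = C5 = gamma - 1, all others zero.
gamma = 1.4
C = (0.0, 0.0, 0.0, 0.0, gamma - 1.0, gamma - 1.0, 0.0)
print(linear_polynomial_eos(0.0, 2.5e6, C))  # P = (gamma - 1) * E = 1.0e6
```

The hydrodynamic material models supply the deviatoric stresses; an EOS call like this supplies the pressure that completes the stress tensor.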

  9. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
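The parameter-sweep step can be illustrated in miniature: run the denoiser under every candidate parameter setting and keep the setting with the lowest mean squared error against the noiseless reference. The "denoiser" here is a stand-in box blur of adjustable radius, not the actual GD3D filters.

```python
# Illustrative parameter sweep: pick the blur radius that minimizes MSE
# against a noiseless reference image (2D stand-in for the 3D case).

def box_blur(img, radius):
    """Mean filter over a (2*radius+1)^2 neighborhood, clamped at borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def mse(a, b):
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

def sweep(noisy, reference, radii):
    """Return the radius whose denoised output best matches the reference."""
    return min(radii, key=lambda r: mse(box_blur(noisy, r), reference))
```

In GD3D the same idea is applied per algorithm (bilateral, anisotropic diffusion, non-local means), each with its own parameter grid.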

  10. Group Differences in the Mutual Gaze of Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Bard, Kim A.; Myowa-Yamakoshi, Masako; Tomonaga, Masaki; Tanaka, Masayuki; Costall, Alan; Matsuzawa, Tetsuro

    2005-01-01

    A comparative developmental framework was used to determine whether mutual gaze is unique to humans and, if not, whether common mechanisms support the development of mutual gaze in chimpanzees and humans. Mother-infant chimpanzees engaged in approximately 17 instances of mutual gaze per hour. Mutual gaze occurred in positive, nonagonistic…

  11. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated.
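The voxel opacity filter described above can be illustrated with a minimal front-to-back ray compositing sketch: voxels whose datum value falls below a threshold are rejected (fully transparent), and the rest are blended with the standard alpha rule. The threshold and value-to-opacity mapping here are illustrative, not from any particular package.

```python
# Toy front-to-back compositing of one ray through a voxel volume, with
# an opacity filter that rejects low-value voxels:
#   C_out = C_in + (1 - A_in) * a_i * c_i,   A_out = A_in + (1 - A_in) * a_i

def composite_ray(samples, threshold=0.2):
    """Composite (value, color) samples along one ray, front to back."""
    color, alpha = 0.0, 0.0
    for value, c in samples:
        a = 0.0 if value < threshold else value   # opacity filter
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:     # early ray termination: nearly opaque
            break
    return color, alpha

# The first sample is below the threshold, so the ray "peers past" it.
print(composite_ray([(0.1, 1.0), (0.5, 0.8), (0.9, 0.2)]))  # (0.49, 0.95)
```

Repeating this per ray over an image plane is the essence of the volume-rendering approach; the threshold plays the role of the user-defined opacity filter.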

  12. [Computer-assisted 3D phonetography].

    PubMed

    Neuschaefer-Rube, C; Klajman, S

    1996-10-01

    Profiles of fundamental frequency sound pressure levels and voice duration are measured separately in clinical practice. It was the aim of the present study to combine the two examinations, in order to estimate the relationship between pitch, sound pressure level and voice duration and to develop a new computer-assisted graph. A three-dimensional (3D) wireframe phonogram was constructed based on SPL profiles to obtain a general view of the parameters recorded. We have termed this "phonetography". Variable further projections were selected for the analysis of different aspects of parametric relationships. The results in 21 healthy volunteers and 4 patients with hyperfunctional dysphonias demonstrated that there were three typical figures of the 3D phonograms produced, depending on the relationship between voice duration when soft ("piano") compared to loud ("forte"). In one-third of the healthy volunteers, the values of the piano voice duration were greater than those of forte for almost all pitches examined. In two-thirds of the healthy subjects the values of forte voice duration were partly greater, as were those of piano voice duration. All of the patients showed voice duration values greater for forte than for piano. The results of the study demonstrate that the 3D phonogram is a useful tool for obtaining new insights into various relationships of voice parameters.

  13. Timescales of quartz crystallization estimated from glass inclusion faceting using 3D propagation phase-contrast x-ray tomography: examples from the Bishop (California, USA) and Oruanui (Taupo Volcanic Zone, New Zealand) Tuffs

    NASA Astrophysics Data System (ADS)

    Pamukcu, A.; Gualda, G. A.; Anderson, A. T.

    2012-12-01

    Compositions of glass inclusions have long been studied for the information they provide on the evolution of magma bodies. Textures - sizes, shapes, positions - of glass inclusions have received less attention, but they can also provide important insight into magmatic processes, including the timescales over which magma bodies develop and erupt. At magmatic temperatures, initially round glass inclusions will become faceted (attain a negative crystal shape) through the process of dissolution and re-precipitation, such that the extent to which glass inclusions are faceted can be used to estimate timescales. The size and position of the inclusion within a crystal will influence how much faceting occurs: a larger inclusion will facet more slowly; an inclusion closer to the rim will have less time to facet. As a result, it is critical to properly document the size, shape, and position of glass inclusions to assess faceting timescales. Quartz is an ideal mineral to study glass inclusion faceting, as Si is the only diffusing species of concern, and Si diffusion rates are relatively well-constrained. Faceting time calculations to date (Gualda et al., 2012) relied on optical microscopy to document glass inclusions. Here we use 3D propagation phase-contrast x-ray tomography to image glass inclusions in quartz. This technique enhances inclusion edges such that images can be processed more successfully than with conventional tomography. We have developed a set of image processing tools to isolate inclusions and more accurately obtain information on the size, shape, and position of glass inclusions than with optical microscopy. We are studying glass inclusions from two giant tuffs. The Bishop Tuff is ~1000 km3 of high-silica rhyolite ash fall, ignimbrite, and intracaldera deposits erupted ~760 ka in eastern California (USA). Glass inclusions in early-erupted Bishop Tuff range from non-faceted to faceted, and faceting times determined using both optical microscopy and x
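The size dependence noted above (larger inclusions facet more slowly) can be illustrated with a generic diffusion-limited scaling, t ~ r^2 / D. This is a back-of-the-envelope sketch, NOT the authors' faceting model, and the diffusivity value below is arbitrary and purely illustrative.

```python
# Generic diffusive timescale illustration: doubling the inclusion radius
# quadruples the characteristic time, so larger inclusions facet more
# slowly. D is an arbitrary illustrative value, not a measured Si
# diffusivity.

SECONDS_PER_YEAR = 3.15576e7

def diffusive_timescale_years(radius_m, diffusivity_m2_s):
    return radius_m ** 2 / diffusivity_m2_s / SECONDS_PER_YEAR

D = 1e-17  # m^2/s, illustrative only
for r_um in (25, 50, 100):
    t = diffusive_timescale_years(r_um * 1e-6, D)
    print(f"r = {r_um:>3} um -> t ~ {t:,.1f} yr")
```

The quadratic dependence is why the size and position of each inclusion must be documented before faceting times are interpreted.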

  14. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  15. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled in to it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  16. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  17. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  18. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration s Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  19. Gaze cueing by pareidolia faces

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process. PMID:25165505

  20. Gaze cueing by pareidolia faces.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  1. Gravitation in 3D Spacetime

    NASA Astrophysics Data System (ADS)

    Laubenstein, John; Cockream, Kandi

    2009-05-01

    3D spacetime was developed by the IWPD Scale Metrics (SM) team using a coordinate system that translates n dimensions to n-1. 4-vectors are expressed in 3D along with a scaling factor representing time. Time is not orthogonal to the three spatial dimensions, but rather in alignment with an object's axis-of-motion. We have defined this effect as the object's ``orientation'' (X). The SM orientation (X) is equivalent to the orientation of the 4-velocity vector positioned tangent to its worldline, where X-1=θ+1 and θ is the angle of the 4-vector relative to the axis-of-motion. Both 4-vectors and SM appear to represent valid conceptualizations of the relationship between space and time. Why entertain SM? Scale Metrics gravity is quantized and may suggest a path for the full unification of gravitation with quantum theory. SM has been tested against current observation and is in agreement with the age of the universe, suggests a physical relationship between dark energy and dark matter, is in agreement with the accelerating expansion rate of the universe, contributes to the understanding of the fine-structure constant and provides a physical explanation of relativistic effects.

  2. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  3. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject and environmental sensor data or other factors influencing a confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  4. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  5. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  6. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  7. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental, however at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp. located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved upon reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed data-set was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  8. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
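The scanline constraint algorithm that such GPU renderers accelerate can be sketched on the CPU: pixels separated by the depth-dependent parallax are constrained to be equal, then colors are assigned randomly subject to those constraints. This is a conventional single-row formulation with the classic depth-to-separation mapping, not the paper's texture-based vertex-program method; the eye separation `e` and depth-of-field factor `mu` are illustrative.

```python
# One row of a Single Image Random Dot Stereogram from a depth row.
import random

def sirds_row(depth_row, e=60, mu=0.33):
    """depth_row: values in [0, 1], 1 = nearest. e: eye separation (px)."""
    w = len(depth_row)
    same = list(range(w))  # same[x]: pixel constrained equal to x (root <= x)
    for x in range(w):
        # classic separation mapping: nearer points get smaller separation
        z = depth_row[x]
        s = round(e * (1 - mu * z) / (2 - mu * z))
        left, right = x - s // 2, x - s // 2 + s
        if 0 <= left and right < w:
            # union the two constraint chains (roots point to smaller index)
            l = left
            while same[l] != l:
                l = same[l]
            r = right
            while same[r] != r:
                r = same[r]
            if l != r:
                same[max(l, r)] = min(l, r)
    row = [0] * w
    for x in range(w):          # roots get random dots; others copy their root
        root = x
        while same[root] != root:
            root = same[root]
        row[x] = row[root] if root != x else random.randint(0, 1)
    return row
```

Running this once per scanline yields a full autostereogram; the GPU approach in the paper instead evaluates the parallax per vertex and lets the texture unit stamp the dot pattern.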

  9. A 3-D SAR approach to IFSAR processing

    SciTech Connect

    DOERRY,ARMIN W.; BICKEL,DOUGLAS L.

    2000-03-01

    Interferometric SAR (IFSAR) can be shown to be a special case of 3-D SAR image formation. In fact, traditional IFSAR processing results in the equivalent of merely a super-resolved, under-sampled, 3-D SAR image. However, when approached as a 3-D SAR problem, a number of IFSAR properties and anomalies are easily explained. For example, IFSAR decorrelation with height is merely ordinary migration in 3-D SAR. Consequently, treating IFSAR as a 3-D SAR problem allows insight and development of proper motion compensation techniques and image formation operations to facilitate optimal height estimation. Furthermore, multiple antenna phase centers and baselines are easily incorporated into this formulation, providing essentially a sparse array in the elevation dimension. This paper shows the Polar Format image formation algorithm extended to 3 dimensions, and then proceeds to apply it to the IFSAR collection geometry. This suggests a more optimal reordering of the traditional IFSAR processing steps.
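The height estimation that IFSAR processing aims at can be illustrated to first order: under common simplifying assumptions (flat-earth phase removed, repeat-pass two-way convention), the height of ambiguity is h_amb = lambda * r * sin(theta) / (2 * B_perp), and an unwrapped interferometric phase phi maps to height h = phi / (2*pi) * h_amb. This is the textbook relation, not the paper's 3-D Polar Format formulation, and the parameter values are illustrative.

```python
# First-order IFSAR height from unwrapped interferometric phase.
import math

def height_from_phase(phi_rad, wavelength, slant_range, look_angle_rad,
                      perp_baseline):
    """Map unwrapped phase to terrain height (repeat-pass convention)."""
    h_amb = (wavelength * slant_range * math.sin(look_angle_rad)
             / (2.0 * perp_baseline))
    return phi_rad / (2.0 * math.pi) * h_amb

# Illustrative X-band-like geometry: one full fringe spans h_amb metres.
h = height_from_phase(math.pi, 0.03, 1.0e4, math.radians(45), 1.0)
print(round(h, 1))
```

The baseline appears in the denominator, which is why multiple phase centers and baselines (a sparse elevation array, as the abstract notes) improve the height estimate.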

  10. 3D SAR approach to IFSAR processing

    NASA Astrophysics Data System (ADS)

    Doerry, Armin W.; Bickel, Doug

    2000-08-01

    Interferometric SAR (IFSAR) can be shown to be a special case of 3-D SAR image formation. In fact, traditional IFSAR processing results in the equivalent of merely a super- resolved, under-sampled, 3-D SAR image. However, when approached as a 3-D SAR problem, a number of IFSAR properties and anomalies are easily explained. For example, IFSAR decorrelation with height is merely ordinary migration in 3-D SAR. Consequently, treating IFSAR as a 3-D SAR problem allows insight and development of proper motion compensation techniques and image formation operations to facilitate optimal height estimation. Furthermore, multiple antenna phase centers and baselines are easily incorporated into this formulation, providing essentially a sparse array in the elevation dimension. This paper shows the Polar Format image formation algorithm extended to 3 dimensions, and then proceeds to apply it to the IFSAR collection geometry. This suggests a more optimal reordering of the traditional IFSAR processing steps.

  11. The PRISM3D paleoenvironmental reconstruction

    USGS Publications Warehouse

    Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.

    2010-01-01

    The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we describe the components of PRISM3D and describe new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include removal of Hudson Bay and the Great Lakes and creation of open waterways in locations where the current bedrock elevation is less than 25m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. 
Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system that may

  12. Infants' developing understanding of social gaze.

    PubMed

    Beier, Jonathan S; Spelke, Elizabeth S

    2012-01-01

    Young infants are sensitive to self-directed social actions, but do they appreciate the intentional, target-directed nature of such behaviors? The authors addressed this question by investigating infants' understanding of social gaze in third-party interactions (N = 104). Ten-month-old infants discriminated between 2 people in mutual versus averted gaze, and expected a person to look at her social partner during conversation. In contrast, 9-month-old infants showed neither ability, even when provided with information that highlighted the gazer's social goals. These results indicate considerable improvement in infants' abilities to analyze the social gaze of others toward the end of their 1st year, which may relate to their appreciation of gaze as both a social and goal-directed action. PMID:22224547

  13. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms from photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by Structure-from-Motion and then uses the oriented photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
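
    The calibration and pose-estimation steps described above rest on the pinhole projection model; a minimal sketch follows (illustrative values, not AndroidSfM's actual API or parameters).

```python
import numpy as np

# Pinhole projection: world point -> camera frame -> intrinsics -> pixel.
def project(points_3d, K, R, t):
    """Project Nx3 world points to pixel coordinates with a pinhole camera."""
    cam = points_3d @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                  # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

K = np.array([[800.0, 0.0, 320.0],   # fx, skew, cx  (assumed values)
              [0.0, 800.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)        # camera at the world origin
pts = np.array([[0.0, 0.0, 2.0]])    # a point on the optical axis
px = project(pts, K, R, t)           # projects to the principal point
```

    Structure-from-Motion inverts this model: given many such pixel observations across photos, it solves jointly for the camera poses (R, t) and the 3D points.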

  14. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3) ) 3D printed graphene aerogel presents superelastic and high electrical conduction.

  16. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
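
    The spectral-filter feature described above amounts to collapsing a (y, x, band) stack to one image by weighting each band, as a filter-based confocal microscope would. A sketch of that operation, with made-up data and filter shape:

```python
import numpy as np

# Toy hyperspectral stack and a Gaussian spectral filter along the band axis.
rng = np.random.default_rng(0)
stack = rng.random((64, 64, 32))             # (y, x, band) image stack
wavelengths = np.linspace(500.0, 700.0, 32)  # assumed band centres [nm]

def apply_filter(stack, wavelengths, center, width):
    """Apply a normalized Gaussian spectral filter along the band axis."""
    w = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    w /= w.sum()                             # normalize the filter weights
    return stack @ w                         # weighted average -> (y, x) image

img = apply_filter(stack, wavelengths, center=580.0, width=20.0)
```

    Defining three such filters at different centre wavelengths yields three channels, mimicking a three-filter confocal view of the same stack.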

  17. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step-length. Our approach uses explicit time-stepping with finite differences that are 4th order in space and 2nd order in time, a 3D version of the scheme developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches. 
The first one makes better use of resources for small models of dimension equal
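
    The explicit time-stepping scheme the abstract describes (4th order in space, 2nd order in time) can be sketched in a toy 1-D scalar setting; the authors' code is 3D and elastic, so this is only an analogue with illustrative numbers.

```python
import numpy as np

# 1-D scalar wave u_tt = c^2 u_xx, 4th-order spatial stencil, leapfrog in time.
nx, nt = 200, 150
dx, c = 10.0, 1500.0
dt = 0.4 * dx / c                          # well inside the CFL stability bound
x = np.arange(nx) * dx
u = np.exp(-0.5 * ((x - x[nx // 2]) / (5 * dx)) ** 2)  # Gaussian pulse
u_prev = u.copy()                          # zero initial velocity: pulse splits

for _ in range(nt):
    lap = np.zeros(nx)
    lap[2:-2] = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
                 + 16 * u[3:-1] - u[4:]) / (12 * dx ** 2)  # 4th-order Laplacian
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap          # 2nd-order time step
    u_prev, u = u, u_next
```

    The initial pulse splits into two half-amplitude pulses travelling in opposite directions, the behaviour an inversion code must reproduce before residual backpropagation makes sense.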

  18. Conducting Polymer 3D Microelectrodes

    PubMed Central

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi; Castillo-León, Jaime; Emnéus, Jenny; Svendsen, Winnie E.

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry and the presence of the conducting polymer film has shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements. PMID:22163508

  20. 3D model of bow shocks

    NASA Astrophysics Data System (ADS)

    Gustafsson, M.; Ravkilde, T.; Kristensen, L. E.; Cabrit, S.; Field, D.; Pineau Des Forêts, G.

    2010-04-01

    Context. Shocks produced by outflows from young stars are often observed as bow-shaped structures in which the H2 line strength and morphology are characteristic of the physical and chemical environments and the velocity of the impact. Aims: We present a 3D model of interstellar bow shocks propagating in a homogeneous molecular medium with a uniform magnetic field. The model enables us to estimate the shock conditions in observed flows. As an example, we show how the model can reproduce rovibrational H2 observations of a bow shock in OMC1. Methods: The 3D model is constructed by associating a planar shock with every point on a 3D bow skeleton. The planar shocks are modelled with a highly sophisticated chemical reaction network that is essential for predicting accurate shock widths and line emissions. The shock conditions vary along the bow surface and determine the shock type, the local thickness, and brightness of the bow shell. The motion of the cooling gas parallel to the bow surface is also considered. The bow shock can move at an arbitrary inclination to the magnetic field and to the observer, and we model the projected morphology and radial velocity distribution in the plane-of-sky. Results: The morphology of a bow shock is highly dependent on the orientation of the magnetic field and the inclination of the flow. Bow shocks can appear in many different guises and do not necessarily show a characteristic bow shape. The ratio of the H2 v = 2-1 S(1) line to the v = 1-0 S(1) line is variable across the flow and the spatial offset between the peaks of the lines may be used to estimate the inclination of the flow. The radial velocity comes to a maximum behind the apparent apex of the bow shock when the flow is seen at an inclination different from face-on. Under certain circumstances the radial velocity of an expanding bow shock can show the same signatures as a rotating flow. 
In this case a velocity gradient perpendicular to the outflow direction is a projection

  1. Coordinating spatial referencing using shared gaze.

    PubMed

    Neider, Mark B; Chen, Xin; Dickinson, Christopher A; Brennan, Susan E; Zelinsky, Gregory J

    2010-10-01

    To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A's eye position was superimposed over Partner B's search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. Supplemental materials for this article may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.

  2. Gaze following: why (not) learn it?

    PubMed

    Triesch, Jochen; Teuscher, Christof; Deák, Gedeon O; Carlson, Eric

    2006-03-01

    We propose a computational model of the emergence of gaze following skills in infant-caregiver interactions. The model is based on the idea that infants learn that monitoring their caregiver's direction of gaze allows them to predict the locations of interesting objects or events in their environment (Moore & Corkum, 1994). Elaborating on this theory, we demonstrate that a specific Basic Set of structures and mechanisms is sufficient for gaze following to emerge. This Basic Set includes the infant's perceptual skills and preferences, habituation and reward-driven learning, and a structured social environment featuring a caregiver who tends to look at things the infant will find interesting. We review evidence that all elements of the Basic Set are established well before the relevant gaze following skills emerge. We evaluate the model in a series of simulations and show that it can account for typical development. We also demonstrate that plausible alterations of model parameters, motivated by findings on two different developmental disorders - autism and Williams syndrome - produce delays or deficits in the emergence of gaze following. The model makes a number of testable predictions. In addition, it opens a new perspective for theorizing about cross-species differences in gaze following.
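
    The reward-driven learning ingredient of the Basic Set can be caricatured in a few lines (this is our toy simplification with made-up numbers, not the authors' simulation): an agent learns that following the caregiver's gaze predicts finding interesting things.

```python
import random

# Epsilon-greedy value learning over two looking strategies.
random.seed(1)
values = {"follow_gaze": 0.0, "look_randomly": 0.0}
alpha, epsilon, n_locations = 0.1, 0.1, 4

for _ in range(2000):
    if random.random() < epsilon:
        action = random.choice(list(values))       # occasional exploration
    else:
        action = max(values, key=values.get)       # exploit the better strategy
    # The caregiver tends to look at the interesting object, so following
    # gaze finds it far more often than scanning locations at random.
    p_reward = 0.9 if action == "follow_gaze" else 0.9 / n_locations
    reward = 1.0 if random.random() < p_reward else 0.0
    values[action] += alpha * (reward - values[action])
```

    After training, the value of following gaze dominates, which is the sense in which gaze following "emerges" from generic reward learning in a structured social environment.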

  4. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is emitted not at discrete wavelengths but in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these
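
    The Doppler mapping used to build the 3-D model reduces to a one-line formula: an emission line's wavelength shift gives the line-of-sight velocity of the debris. The example numbers below are illustrative, not taken from the Spitzer data.

```python
# Non-relativistic Doppler relation between wavelength shift and velocity.
C_KM_S = 299792.458  # speed of light [km/s]

def radial_velocity(lambda_obs, lambda_rest):
    """Line-of-sight velocity; positive means receding (redshifted)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

v = radial_velocity(500.5, 500.0)  # a 0.1% redshift
```

    Combining such line-of-sight velocities with the two sky coordinates of each emitting knot yields the third dimension of the reconstruction.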

  5. ECG gated tomographic reconstruction for 3-D rotational coronary angiography

    PubMed Central

    Hu, Yining; Xie, Lizhe; Nunes, Jean Claude; Bellanger, Jean Jacques; Bedossa, Marc; Toumoulin, Christine

    2010-01-01

    A method is proposed for 3-D reconstruction of coronary arteries from a limited number of projections in rotational angiography. A Bayesian maximum a posteriori (MAP) estimation is applied with a Poisson-distributed projection model to reconstruct the 3D coronary tree at a given instant of the cardiac cycle. Several regularizers (L0-, L1-, and L2-norm) are investigated in order to take into account the sparsity of the data. Evaluations are reported on simulated data obtained from a 3D dynamic sequence acquired on a 64-slice GE LightSpeed CT scan. A performance study is conducted to evaluate the quality of the reconstruction of the structures. PMID:21096844
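
    Sparsity-regularized reconstruction of the kind investigated here can be sketched with an L1 penalty minimized by ISTA on a toy Gaussian linear model; this stands in for, but is not, the paper's MAP estimation under a Poisson model, and the operator and scene sizes are made up.

```python
import numpy as np

# Recover a sparse scene x from few linear projections y = A @ x.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))       # projection operator (assumed linear)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]   # sparse scene, e.g. thin vessels
y = A @ x_true                           # noiseless projections

lam = 0.1                                # sparsity weight
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of gradient
x = np.zeros(100)
for _ in range(3000):
    z = x - step * (A.T @ (A @ x - y))   # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
```

    Despite having far fewer measurements (40) than unknowns (100), the L1 prior recovers the three-spike scene, which is why sparsity priors suit sparse vessel trees.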

  6. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    PubMed Central

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565
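
    The core premise, that gaze can be regressed from a small, well-chosen subset of eye-image pixels, can be illustrated with synthetic data (this toy uses correlation screening plus least squares, not iShadow's sparsity-regularized neural network).

```python
import numpy as np

# Synthetic "eye images" where gaze depends linearly on only k of p pixels.
rng = np.random.default_rng(0)
n, p, k = 2000, 400, 10                 # frames, pixels per frame, pixels kept
X = rng.standard_normal((n, p))         # flattened synthetic eye images
idx = rng.choice(p, k, replace=False)   # pixels that truly carry gaze info
w_true = np.zeros(p)
w_true[idx] = 1.0
gaze = X @ w_true                       # gaze signal built from k pixels

# Screen for the k pixels most correlated with gaze, then fit on those alone.
corr = np.abs(X.T @ gaze)
keep = np.argsort(corr)[-k:]
w_small, *_ = np.linalg.lstsq(X[:, keep], gaze, rcond=None)
pred = X[:, keep] @ w_small             # gaze predicted from a pixel subset
```

    Reading and processing only the selected pixels, rather than whole frames, is what enables the order-of-magnitude power savings the abstract reports.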

  8. Re-Encountering Individuals Who Previously Engaged in Joint Gaze Modulates Subsequent Gaze Cueing

    ERIC Educational Resources Information Center

    Dalmaso, Mario; Edwards, S. Gareth; Bayliss, Andrew P.

    2016-01-01

    We assessed the extent to which previous experience of joint gaze with people (i.e., looking toward the same object) modulates later gaze cueing of attention elicited by those individuals. Participants in Experiments 1 and 2a/b first completed a saccade/antisaccade task while a to-be-ignored face either looked at, or away from, the participants'…

  9. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  11. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  12. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydn

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed both objectively, via error metrics, and subjectively, for the rendered scenes. PMID:23955795
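
    The fusion idea can be sketched with a deliberately simplified, linear, 1-D Kalman filter (the paper uses an extended Kalman filter on full 6-DoF pose): each frame, a noisy vision measurement and a noisy depth measurement of the same coordinate are fused. All noise values and measurements below are made up.

```python
# Scalar Kalman filter fusing two measurement sources per frame.
x, P = 0.0, 1.0                # state estimate (position) and its variance
Q = 0.01                       # process noise: object may drift between frames
R_VIS, R_DEPTH = 0.25, 0.04    # measurement variances (depth assumed less noisy)

def kf_update(x, P, z, R):
    """One scalar Kalman update with measurement z of the state."""
    K = P / (P + R)            # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Object actually sits near 1.0; measurements are noisy samples around it.
for z_vis, z_depth in [(1.1, 0.95), (0.9, 1.02), (1.05, 0.99)]:
    P += Q                                    # predict (static motion model)
    x, P = kf_update(x, P, z_vis, R_VIS)      # fuse the vision measurement
    x, P = kf_update(x, P, z_depth, R_DEPTH)  # fuse the depth measurement
```

    The fused estimate converges near the true position with lower variance than either sensor alone, which is the jitter reduction the abstract refers to.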

  14. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.
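
    For reference, the Kitaev honeycomb model mentioned above is defined by bond-directional Ising interactions; in its standard form from the literature, the couplings J_x, J_y, J_z act on the three inequivalent bond types of the honeycomb lattice:

```latex
H = -J_x \sum_{\langle jk \rangle_x} \sigma_j^x \sigma_k^x
    - J_y \sum_{\langle jk \rangle_y} \sigma_j^y \sigma_k^y
    - J_z \sum_{\langle jk \rangle_z} \sigma_j^z \sigma_k^z
```

    The 3D generalizations discussed in the talk keep this bond-directional coupling structure but place it on tricoordinated three-dimensional lattices.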

  15. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    In this article, a system used to reconstruct locomotive wheels is described, helping workers inspect the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. We use the 2D camera to capture the line-laser light reflected by the object, a wheel, and then compute the final coordinates of the structured light. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and render a 3D view of the wheel. The article also describes the system structure, processing steps and methods, and sets up an experimental platform to verify the design proposal. We verify the feasibility of the whole process and analyze the results by comparing them to standard data. The test results show that the system works well and achieves high reconstruction accuracy. Because no such application is yet in use in the railway industry, the system has practical value for railway inspection.
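
    The triangulation at the core of such a line-laser system can be sketched as a ray-plane intersection: a pixel on the laser stripe is back-projected through the camera to a ray, which is intersected with the calibrated laser plane. This is a generic sketch under an assumed pinhole-camera calibration, not the article's actual code; K (camera intrinsics), plane_n, and plane_d are assumed calibration inputs.

```python
import numpy as np

def triangulate(u, v, K, plane_n, plane_d):
    """Return the 3D point where the pixel ray meets the laser plane n.X = d."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    t = plane_d / (plane_n @ ray)                   # ray parameter at the plane
    return t * ray                                  # 3D point in camera coordinates
```

    Sweeping the wheel past the laser (or the laser past the wheel) and triangulating every stripe pixel yields the point cloud that is then fit with a smooth surface.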

  16. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, often times the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  17. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  18. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software, simply by using images from the community, without visiting the site.

  19. Atypical face gaze in autism.

    PubMed

    Trepagnier, Cheryl; Sebrechts, Marc M; Peterson, Rebecca

    2002-06-01

    An eye-tracking study of face and object recognition was conducted to clarify the character of face gaze in autistic spectrum disorders. Experimental participants were a group of individuals diagnosed with Asperger's disorder or high-functioning autistic disorder according to their medical records and confirmed by the Autism Diagnostic Interview-Revised (ADI-R). Controls were selected on the basis of age, gender, and educational level to be comparable to the experimental group. In order to maintain attentional focus, stereoscopic images were presented in a virtual reality (VR) headset in which the eye-tracking system was installed. Preliminary analyses show impairment in face recognition, in contrast with equivalent and even superior performance in object recognition among participants with autism-related diagnoses, relative to controls. Experimental participants displayed less fixation on the central face than did control-group participants. The findings, within the limitations of the small number of subjects and technical difficulties encountered in utilizing the helmet-mounted display, suggest an impairment in face processing on the part of the individuals in the experimental group. This is consistent with the hypothesis of disruption in the first months of life, a period that may be critical to typical social and cognitive development, and has important implications for selection of appropriate targets of intervention.

  20. Atypical face gaze in autism.

    PubMed

    Trepagnier, Cheryl; Sebrechts, Marc M; Peterson, Rebecca

    2002-06-01

    An eye-tracking study of face and object recognition was conducted to clarify the character of face gaze in autistic spectrum disorders. Experimental participants were a group of individuals diagnosed with Asperger's disorder or high-functioning autistic disorder according to their medical records and confirmed by the Autism Diagnostic Interview-Revised (ADI-R). Controls were selected on the basis of age, gender, and educational level to be comparable to the experimental group. In order to maintain attentional focus, stereoscopic images were presented in a virtual reality (VR) headset in which the eye-tracking system was installed. Preliminary analyses show impairment in face recognition, in contrast with equivalent and even superior performance in object recognition among participants with autism-related diagnoses, relative to controls. Experimental participants displayed less fixation on the central face than did control-group participants. The findings, within the limitations of the small number of subjects and technical difficulties encountered in utilizing the helmet-mounted display, suggest an impairment in face processing on the part of the individuals in the experimental group. This is consistent with the hypothesis of disruption in the first months of life, a period that may be critical to typical social and cognitive development, and has important implications for selection of appropriate targets of intervention. PMID:12123243

  1. Searching for a perceived gaze direction using eye tracking.

    PubMed

    Palanica, Adam; Itier, Roxane J

    2011-01-01

    The purpose of the current study was to use eye tracking to better understand the "stare-in-the-crowd effect"-the notion that direct gaze is more easily detected than averted gaze in a crowd of opposite-gaze distractors. Stimuli were displays of four full characters aligned across the monitor (one target and three distractors). Participants completed a visual search task in which they were asked to detect the location of either a direct gaze or an averted gaze target. Reaction time (RT) results indicated faster responses to direct than averted gaze only for characters situated in the far peripheral visual fields. Eye movements confirmed a serial search strategy (definitely ruling out any pop-out effects) and revealed different exploration patterns between hemifields. The latency before the first fixation on target strongly correlated with response RTs. In the LVF, that latency was also faster for direct than averted gaze targets, suggesting that the response asymmetry in favor of direct gaze stemmed from faster direct gaze target detection. In the RVF, however, the response bias to direct gaze seemed not due to a faster visual detection but rather to a different cognitive mechanism. Direct gaze targets were also responded to even faster when their position was congruent with the direction of gaze of distractors. These findings suggest that the detection asymmetry for direct gaze is highly dependent on target position and influenced by social contexts.

  2. Searching for a perceived gaze direction using eye tracking

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2014-01-01

    The purpose of the current study was to use eye tracking to better understand the “stare-in-the-crowd effect”—the notion that direct gaze is more easily detected than averted gaze in a crowd of opposite-gaze distractors. Stimuli were displays of four full characters aligned across the monitor (one target and three distractors). Participants completed a visual search task in which they were asked to detect the location of either a direct gaze or an averted gaze target. Reaction time (RT) results indicated faster responses to direct than averted gaze only for characters situated in the far peripheral visual fields. Eye movements confirmed a serial search strategy (definitely ruling out any pop-out effects) and revealed different exploration patterns between hemifields. The latency before the first fixation on target strongly correlated with response RTs. In the LVF, that latency was also faster for direct than averted gaze targets, suggesting that the response asymmetry in favor of direct gaze stemmed from faster direct gaze target detection. In the RVF, however, the response bias to direct gaze seemed not due to a faster visual detection but rather to a different cognitive mechanism. Direct gaze targets were also responded to even faster when their position was congruent with the direction of gaze of distractors. These findings suggest that the detection asymmetry for direct gaze is highly dependent on target position and influenced by social contexts. PMID:21367758

  3. IFSAR processing for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2005-05-01

    In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
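
    The magnitude-difference test mentioned above can be sketched as follows: resolution cells where the two interferometric channel magnitudes disagree strongly are flagged as possibly containing multiple scatterers. The exact statistic, normalization, and threshold used in the paper are not given in the abstract, so this particular form is an assumption.

```python
import numpy as np

def multi_scatterer_mask(img1, img2, tau=0.2):
    """Flag resolution cells whose normalized magnitude difference exceeds tau.

    img1, img2: complex-valued co-registered IFSAR images.
    """
    a1, a2 = np.abs(img1), np.abs(img2)
    # Normalized magnitude difference; epsilon guards against empty cells.
    stat = np.abs(a1 - a2) / np.maximum(a1 + a2, 1e-12)
    return stat > tau
```

    Flagged cells could then have their interferometric height estimates discarded or down-weighted, which is the kind of error reduction the paper analyzes.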

  4. Forward ramp in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mars Pathfinder's forward rover ramp can be seen successfully unfurled in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This ramp was not used for the deployment of the microrover Sojourner, which occurred at the end of Sol 2. When this image was taken, Sojourner was still latched to one of the lander's petals, waiting for the command sequence that would execute its descent off of the lander's petal.

    The image helped Pathfinder scientists determine whether to deploy the rover using the forward or backward ramps and the nature of the first rover traverse. The metallic object at the lower left of the image is the lander's low-gain antenna. The square at the end of the ramp is one of the spacecraft's magnetic targets. Dust that accumulates on the magnetic targets will later be examined by Sojourner's Alpha Proton X-Ray Spectrometer instrument for chemical analysis. At right, a lander petal is visible.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  5. 3D grain boundary migration

    NASA Astrophysics Data System (ADS)

    Becker, J. K.; Bons, P. D.

    2009-04-01

    Microstructures of rocks play an important role in determining rheological properties and help to reveal the processes that lead to their formation. Some of these processes change the microstructure significantly and may thus have the opposite effect in obliterating any fabrics indicative of the previous history of the rocks. One of these processes is grain boundary migration (GBM). During static recrystallisation, GBM may produce a foam texture that completely overprints a pre-existing grain boundary network, and GBM actively influences the rheology of a rock via its influence on grain size and lattice defect concentration. We here present a new numerical simulation software that is capable of simulating a whole range of processes on the grain scale (it is not limited to grain boundary migration). The software is polyhedron-based, meaning that each grain (or phase) is represented by a polyhedron that has discrete boundaries. The boundary (the shell) of the polyhedron is defined by a set of facets which in turn is defined by a set of vertices. Each structural entity (polyhedron, facets and vertices) can have an unlimited number of parameters (depending on the process to be modeled) such as surface energy, concentration, etc., which can be used to calculate changes of the microstructure. We use the processes of grain boundary migration of a "regular" and a partially molten rock to demonstrate the software. Since this software is 3D, the formation of melt networks in a partially molten rock can also be studied. The interconnected melt network is of fundamental importance for melt segregation and migration in the crust and mantle and can help to understand the core-mantle differentiation of large terrestrial planets.
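
    The polyhedron/facet/vertex hierarchy described above might be represented along these lines. This is an illustrative sketch, not the actual software's data model; the open-ended params dictionary stands in for the "unlimited number of parameters" each entity can carry.

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    x: float
    y: float
    z: float
    params: dict = field(default_factory=dict)  # e.g. local concentration

@dataclass
class Facet:
    vertices: list                               # list of Vertex
    params: dict = field(default_factory=dict)   # e.g. surface energy

@dataclass
class Polyhedron:                                # one grain (or phase)
    facets: list                                 # the shell: list of Facet
    params: dict = field(default_factory=dict)   # e.g. lattice defect density
```

    A process such as GBM would then update vertex positions from the parameters stored on the adjacent facets and polyhedra.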

  6. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  7. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as promoting 3D photography not only for scientists but also for amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and with civil drone platforms are dealt with. To advise on the optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  8. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology.

  9. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  10. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  11. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  12. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  13. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
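
    The basic idea of texture advection, before its extension to 3D, can be sketched as a semi-Lagrangian step in 2D: each pixel traces backwards along the flow and samples the texture there, so repeated steps smear the texture along streamlines. A minimal sketch with nearest-neighbour sampling follows; the paper's actual scheme (and its 3D/4D texture variants) is not reproduced here.

```python
import numpy as np

def advect(tex, vx, vy, dt=1.0):
    """One semi-Lagrangian advection step over a 2D texture.

    tex: 2D array of texture values; vx, vy: per-pixel velocity components.
    """
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Trace backwards along the velocity field and clamp to the grid.
    src_x = np.clip(np.rint(xs - dt * vx).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - dt * vy).astype(int), 0, h - 1)
    return tex[src_y, src_x]
```

    Animating successive steps over a noise texture visualizes the flow; the 3D extension replaces the 2D grid and texture with volumetric counterparts.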

  14. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  15. Visual Foraging With Fingers and Eye Gaze.

    PubMed

    Jóhannesson, Ómar I; Thornton, Ian M; Smith, Irene J; Chetverikov, Andrey; Kristjánsson, Árni

    2016-03-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  16. Visual Foraging With Fingers and Eye Gaze

    PubMed Central

    Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni

    2016-01-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  17. Gaze location prediction for broadcast football video.

    PubMed

    Cheng, Qin; Agrafiotis, Dimitris; Achim, Alin M; Bull, David R

    2013-12-01

    The sensitivity of the human visual system decreases dramatically with increasing distance from the fixation location in a video frame. Accurate prediction of a viewer's gaze location has the potential to improve bit allocation, rate control, error resilience, and quality evaluation in video compression. Commercially, delivery of football video content is of great interest because of the very high number of consumers. In this paper, we propose a gaze location prediction system for high definition broadcast football video. The proposed system uses knowledge about the context, extracted through analysis of a gaze tracking study that we performed, to build a suitable prior map. We further classify the complex context into different categories through shot classification thus allowing our model to prelearn the task pertinence of each object category and build the prior map automatically. We thus avoid the limitation of assigning the viewers a specific task, allowing our gaze prediction system to work under free-viewing conditions. Bayesian integration of bottom-up features and top-down priors is finally applied to predict the gaze locations. Results show that the prediction performance of the proposed model is better than that of other top-down models that we adapted to this context. PMID:23996558
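
    The final Bayesian integration step can be sketched as a pointwise combination of a bottom-up saliency map with a top-down prior map, with the posterior maximum taken as the predicted gaze location. The paper's actual feature extraction, shot classification, and prior construction are not reproduced here; this is schematic only.

```python
import numpy as np

def predict_gaze(saliency, prior):
    """Combine bottom-up saliency with a top-down prior; return (row, col) of the mode."""
    post = saliency * prior        # pointwise Bayesian-style combination
    post /= post.sum()             # normalize to a probability map
    return np.unravel_index(np.argmax(post), post.shape)
```

    In a real pipeline the prior map would be selected per shot class, so the same saliency map can yield different predictions in different contexts.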

  18. Visual Foraging With Fingers and Eye Gaze.

    PubMed

    Jóhannesson, Ómar I; Thornton, Ian M; Smith, Irene J; Chetverikov, Andrey; Kristjánsson, Árni

    2016-03-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  19. Social orienting in gaze leading: a mechanism for shared attention.

    PubMed

    Edwards, S Gareth; Stephenson, Lisa J; Dalmaso, Mario; Bayliss, Andrew P

    2015-08-01

    Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to 'gaze following', attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that 'follows' the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish 'shared attention' and maintain the ongoing interaction.

  20. Eye gaze is not coded by cardinal mechanisms alone

    PubMed Central

    Cheleski, Dominic J.; Mareschal, Isabelle; Calder, Andrew J.; Clifford, Colin W. G.

    2013-01-01

    Gaze is an important social cue in regulating human and non-human interactions. In this study, we employed an adaptation paradigm to examine the mechanisms underlying the perception of another's gaze. Previous research has shown that the interleaved presentation of leftwards and rightwards gazing adaptor stimuli results in observers judging a wider range of gaze deviations as being direct. We applied a similar paradigm to examine how human observers encode oblique (e.g. upwards and to the left) directions of gaze. We presented observers with interleaved gaze adaptors and examined whether adaptation differed between congruent (adaptor and test along same axis) and incongruent conditions. We find greater adaptation in congruent conditions along cardinal (horizontal and vertical) and non-cardinal (oblique) directions suggesting gaze is not coded alone by cardinal mechanisms. Our results suggest that the functional aspects of gaze processing might parallel that of basic visual features such as orientation. PMID:23782886

  1. Social orienting in gaze leading: a mechanism for shared attention

    PubMed Central

    Edwards, S. Gareth; Stephenson, Lisa J.; Dalmaso, Mario; Bayliss, Andrew P.

    2015-01-01

    Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to ‘gaze following’, attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that ‘follows’ the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish ‘shared attention’ and maintain the ongoing interaction. PMID:26180071

  2. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

    In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic rejections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy including morphometric parameter estimation is then possible via computer-vision 3D rendering, interaction and navigation capabilities.
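    One of the morphological operators mentioned, gray-level (geodesic) reconstruction, can be sketched compactly. The toy example below is not the paper's pipeline; it shows the core idea on a small binary image: repeatedly dilate a marker and clamp it by the mask until stable, which recovers only the vessel component connected to the seed.

```python
import numpy as np

def dilate(img):
    """3x3 flat grayscale dilation via shifted maxima (constant padding)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="constant", constant_values=img.min())
    shifts = [p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.max(shifts, axis=0)

def reconstruct(marker, mask):
    """Gray-level reconstruction by geodesic dilation:
    iterate marker <- min(dilate(marker), mask) until stable."""
    marker = np.minimum(marker, mask)
    while True:
        nxt = np.minimum(dilate(marker), mask)
        if np.array_equal(nxt, marker):
            return marker
        marker = nxt

# A mask with two bright components; a marker seeded inside the left one
# recovers only that connected component (the other one is suppressed).
mask = np.array([[0, 1, 1, 0, 0, 0],
                 [0, 1, 1, 0, 1, 1],
                 [0, 0, 0, 0, 1, 1]], dtype=float)
marker = np.zeros_like(mask)
marker[0, 1] = 1.0
print(reconstruct(marker, mask))
```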

  3. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-01

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging. PMID:25836861
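    The Stokes-parameter computation at the heart of the method can be illustrated with a minimal sketch (an assumption-laden toy, not the paper's implementation): linear Stokes parameters and the degree of linear polarization from intensities behind a polarizer at four angles, with the Poisson maximum-likelihood point made explicit for photon-counting data.

```python
import numpy as np

def linear_stokes(I0, I45, I90, I135):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from intensities measured behind a polarizer at 0/45/90/135 degrees."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)
    S1 = I0 - I90
    S2 = I45 - I135
    dolp = np.sqrt(S1 ** 2 + S2 ** 2) / S0
    return S0, S1, S2, dolp

# Photon-counting case: the Poisson maximum-likelihood estimate of each
# intensity is simply the mean photon count over repeated sparse frames.
frames_0deg = np.array([2, 1, 3, 2, 2])  # hypothetical sparse counts
I0_hat = frames_0deg.mean()

# Fully horizontally polarized light gives DoLP = 1.
S0, S1, S2, dolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
print(dolp)  # 1.0
```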

  4. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  5. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions; these are tracked by the Wii controller, and from them we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-values of a 3D volume or depth information computed from a 2D image to provide a real 3D experience without special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.

  6. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing was evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.
    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, the final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  7. Multigrid calculations of 3-D turbulent viscous flows

    NASA Technical Reports Server (NTRS)

    Yokota, Jeffrey W.

    1989-01-01

    Convergence properties of a multigrid algorithm, developed to calculate compressible viscous flows, are analyzed by a vector sequence eigenvalue estimate. The full 3-D Reynolds-averaged Navier-Stokes equations are integrated by an implicit multigrid scheme while a k-epsilon turbulence model is solved, uncoupled from the flow equations. Estimates of the eigenvalue structure for both single and multigrid calculations are compared in an attempt to analyze the process as well as the results of the multigrid technique. The flow through an annular turbine is used to illustrate the scheme's ability to calculate complex 3-D flows.
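    The idea of estimating eigenvalue structure from a vector sequence can be sketched in a few lines (a toy illustration under simplifying assumptions, not the paper's method): for a linear fixed-point iteration, the ratio of successive correction norms converges to the magnitude of the dominant error eigenvalue, i.e. the asymptotic convergence factor.

```python
import numpy as np

def dominant_eigenvalue_estimate(iterates):
    """Estimate the dominant error eigenvalue (convergence factor) of a
    fixed-point iteration from the vector sequence x_k: the ratio
    ||x_{k+1} - x_k|| / ||x_k - x_{k-1}|| tends to |lambda_max|."""
    diffs = [np.linalg.norm(b - a) for a, b in zip(iterates, iterates[1:])]
    return diffs[-1] / diffs[-2]

# Toy iteration x_{k+1} = M x_k with known spectrum {0.9, 0.2}.
M = np.diag([0.9, 0.2])
x = np.array([1.0, 1.0])
seq = [x.copy()]
for _ in range(30):
    x = M @ x
    seq.append(x.copy())

est = dominant_eigenvalue_estimate(seq)
print(round(est, 6))  # 0.9
```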

  8. RELAP5-3D User Problems

    SciTech Connect

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics and 3D neutron kinetics. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  9. Intermediate view synthesis for eye-gazing

    NASA Astrophysics Data System (ADS)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among these, eye contact is one of the most important cues an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way, and the lack of eye contact can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face and synthesize the face with the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
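    The geometric core of view morphing can be sketched very simply. The toy below is a hypothetical illustration, not the paper's system: corresponding face landmarks detected in the top- and bottom-camera images are linearly interpolated to obtain landmark positions for a virtual eye-level viewpoint, after which the images would be warped and blended accordingly.

```python
import numpy as np

def morph_points(pts_top, pts_bottom, alpha):
    """Linear view-morph of corresponding landmarks: alpha=0 gives the
    top-camera geometry, alpha=1 the bottom-camera geometry, and an
    intermediate alpha approximates a virtual in-between viewpoint."""
    return (1.0 - alpha) * pts_top + alpha * pts_bottom

# Hypothetical detected face landmarks (x, y) in each camera image.
top = np.array([[100.0, 80.0], [140.0, 82.0]])
bottom = np.array([[100.0, 120.0], [140.0, 118.0]])

mid = morph_points(top, bottom, 0.5)
print(mid)  # landmark positions for the midway virtual view
```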

  10. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  11. Gaze Following Is Modulated by Expectations Regarding Others’ Action Goals

    PubMed Central

    Perez-Osorio, Jairo; Müller, Hermann J.; Wiese, Eva; Wykowska, Agnieszka

    2015-01-01

    Humans attend to social cues in order to understand and predict others’ behavior. Facial expressions and gaze direction provide valuable information to infer others’ mental states and intentions. The present study examined the mechanism of gaze following in the context of participants’ expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the gaze behavior of the observed actor was in line or not with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent. PMID:26606534

  12. A Direct Link between Gaze Perception and Social Attention

    ERIC Educational Resources Information Center

    Bayliss, Andrew P.; Bartlett, Jessica; Naughtin, Claire K.; Kritikos, Ada

    2011-01-01

    How information is exchanged between the cognitive mechanisms responsible for gaze perception and social attention is unclear. These systems could be independent; the "gaze cueing" effect could emerge from the activation of a general-purpose attentional mechanism that is ignorant of the social nature of the gaze cue. Alternatively, orienting to…

  13. Children with ASD Can Use Gaze to Map New Words

    ERIC Educational Resources Information Center

    Bean Ellawadi, Allison; McGregor, Karla K.

    2016-01-01

    Background: The conclusion that children with autism spectrum disorders (ASD) do not use eye gaze in the service of word learning is based on one-trial studies. Aims: To determine whether children with ASD come to use gaze in the service of word learning when given multiple trials with highly reliable eye-gaze cues. Methods & Procedures:…

  14. Pipe3D, a pipeline to analyze Integral Field Spectroscopy Data: I. New fitting philosophy of FIT3D

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-04-01

    We present an improved version of FIT3D, a fitting tool for the analysis of the spectroscopic properties of the stellar populations and the ionized gas derived from moderate resolution spectra of galaxies. This tool was developed to analyze integral field spectroscopy data and it is the basis of Pipe3D, a pipeline used in the analysis of CALIFA, MaNGA, and SAMI data. We describe the philosophy and each step of the fitting procedure. We present an extensive set of simulations in order to estimate the precision and accuracy of the derived parameters for the stellar populations and the ionized gas. We report on the results of those simulations. Finally, we compare the results of the analysis using FIT3D with those provided by other widely used packages, and we find that the parameters derived by FIT3D are fully compatible with those derived using these other tools.
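    The basic operation underlying this kind of stellar-population fitting can be sketched in a few lines. The toy below is an illustration only (FIT3D's actual procedure involves non-negative weights, kinematics, and dust attenuation): an observed spectrum is modeled as a linear combination of template spectra, and least squares recovers the mixture weights.

```python
import numpy as np

# Hypothetical stellar-population templates (columns) and an observed
# spectrum built from a known mixture; least squares recovers the weights.
wavelengths = np.linspace(4000.0, 7000.0, 50)
young = np.exp(-wavelengths / 3000.0)     # toy "young population" spectrum
old = np.log(wavelengths / 1000.0)        # toy "old population" spectrum
templates = np.column_stack([young, old])

true_weights = np.array([0.3, 0.7])
observed = templates @ true_weights

weights, *_ = np.linalg.lstsq(templates, observed, rcond=None)
print(np.round(weights, 3))  # [0.3 0.7]
```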

  15. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
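    The ART step at the core of the method is the classical Kaczmarz iteration, which can be sketched compactly. This toy omits the paper's refraction correction (the tracing of refracted raylines across planes) and just shows ART itself on a two-pixel system with two ray sums.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): project the current
    estimate onto each ray equation a_i . x = b_i in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x = x + relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# Toy 2-pixel "dosimeter" with two ray sums; true image is [2, 1].
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([3.0, 1.0])
print(art(A, b))  # [2. 1.]
```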

  16. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  17. Orienting in Response to Gaze and the Social Use of Gaze among Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rombough, Adrienne; Iarocci, Grace

    2013-01-01

    Potential relations between gaze cueing, social use of gaze, and ability to follow line of sight were examined in children with autism and typically developing peers. Children with autism (mean age = 10 years) demonstrated intact gaze cueing. However, they preferred to follow arrows instead of eyes to infer mental state, and showed decreased…

  18. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies, because at whatever scale in nature, all ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could help complex ecosystems be built easily and mimic the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technologies are currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through my talk the audience will be able to sense their significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  19. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time, even if 3D standards are today under definition. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video or not, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame-sequential 3D format, in which feature points oscillate from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
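    The positional-difference statistic for the packed-frame case can be sketched as follows. This is a hypothetical heuristic in the spirit of the method, not the paper's implementation: for a side-by-side pack, matched features in the right half of the frame sit roughly half a frame-width to the right of their counterparts in the left half, plus a small stereo disparity.

```python
import numpy as np

def detect_side_by_side(x_left_half, x_right_half, width, tol=0.05):
    """Heuristic: if the median horizontal offset between matched features
    is close to half the frame width, the frame likely packs a
    side-by-side stereo pair."""
    offsets = np.asarray(x_right_half, float) - np.asarray(x_left_half, float)
    return bool(abs(np.median(offsets) / (width / 2.0) - 1.0) < tol)

# Hypothetical matched feature x-coordinates in a 1920-wide frame.
left = np.array([100.0, 400.0, 700.0])
right = left + 960.0 + np.array([3.0, -2.0, 4.0])  # half-width + disparity
print(detect_side_by_side(left, right, 1920))  # True
```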

  20. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations describing the reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in the groundwater head distribution. This report presents a set of tutorial problems designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code; the code can then be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported into GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials described below provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  1. How the gaze of others influences object processing.

    PubMed

    Becchio, Cristina; Bertone, Cesare; Castiello, Umberto

    2008-07-01

    An aspect of gaze processing, which so far has been given little attention, is the influence that intentional gaze processing can have on object processing. Converging evidence from behavioural neuroscience and developmental psychology strongly suggests that objects falling under the gaze of others acquire properties that they would not display if not looked at. Specifically, observing another person gazing at an object enriches that object of motor, affective and status properties that go beyond its chemical or physical structure. A conceptual analysis of available evidence leads to the conclusion that gaze has the potency to transfer to the object the intentionality of the person looking at it.

  2. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC), developed by the joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency, at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlation between the depth map and motion activity to identify the prediction regions where variable-size CU and DE are needed, and to enable them only in those regions. Experimental results show that the proposed algorithm saves about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
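    The region-gating idea can be sketched with a toy decision rule. The thresholds and block statistics below are hypothetical illustrations, not values from the paper: the expensive tools (variable-size CU splitting and DE) are enabled only for blocks with strong depth structure or high motion activity.

```python
import numpy as np

def needs_full_search(depth_block, motion_activity,
                      depth_var_thresh=25.0, motion_thresh=1.0):
    """Enable variable-size CU splitting and disparity estimation only in
    blocks with strong depth edges or high motion activity; elsewhere a
    fixed CU size and skipped DE save most of the search cost."""
    return bool(np.var(depth_block) > depth_var_thresh
                or motion_activity > motion_thresh)

flat_block = np.full((8, 8), 128.0)   # homogeneous depth, static region
edge_block = np.zeros((8, 8))
edge_block[:, 4:] = 255.0             # sharp depth edge down the middle

print(needs_full_search(flat_block, 0.2))  # False
print(needs_full_search(edge_block, 0.2))  # True
```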

  3. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of the directly geo-referenced, image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  4. Depth discrimination from occlusions in 3D clutter.

    PubMed

    Langer, Michael S; Zheng, Haomin; Rezvankhah, Shayan

    2016-09-01

    Objects such as trees, shrubs, and tall grass consist of thousands of small surfaces that are distributed over a three-dimensional (3D) volume. To perceive the depth of surfaces within 3D clutter, a visual system can use binocular stereo and motion parallax. However, such parallax cues are less reliable in 3D clutter because surfaces tend to be partly occluded. Occlusions provide depth information, but it is unknown whether visual systems use occlusion cues to aid depth perception in 3D clutter, as previous studies have addressed occlusions for simple scene geometries only. Here, we present a set of depth discrimination experiments that examine depth from occlusion cues in 3D clutter, and how these cues interact with stereo and motion parallax. We identify two probabilistic occlusion cues. The first is based on the fraction of an object that is visible. The second is based on the depth range of the occluders. We show that human observers use both of these occlusion cues. We also define ideal observers that are based on these occlusion cues. Human observer performance is close to ideal using the visibility cue but far from ideal using the range cue. A key reason for the latter is that the range cue depends on depth estimation of the clutter itself which is unreliable. Our results provide new fundamental constraints on the depth information that is available from occlusions in 3D clutter, and how the occlusion cues are combined with binocular stereo and motion parallax cues. PMID:27618514
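    The visibility cue admits a very simple generative model. The sketch below is an illustrative assumption, not the paper's ideal observer: if each unit of clutter in front of a target independently hides a fixed fraction of it, the expected visible fraction decays exponentially with depth, so nearer targets show more.

```python
import numpy as np

def expected_visible_fraction(depth, occluder_density=3.0):
    """Toy visibility cue: in uniform 3D clutter, each unit of clutter in
    front of a target hides an independent fraction of it, so the expected
    visible fraction decays exponentially with normalized depth."""
    return np.exp(-occluder_density * depth)

# Depth discrimination from the cue: the nearer of two targets is
# expected to show a larger visible fraction.
near, far = 0.3, 0.6
print(expected_visible_fraction(near) > expected_visible_fraction(far))  # True
```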

  5. Imaging and 3D morphological analysis of collagen fibrils.

    PubMed

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent boom in multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. Combining the directional distances with the fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distributions of the orientation and radius of the fibrils over the 3D image. They also provide a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set is most relevant in biomedical applications. It makes it possible to monitor the remodelling of collagen tissues upon a variety of injuries and to guide tissue engineering, because biomimetic 3D organization and density are required for better integration of implants.
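
The orientation step, taking the main fibre direction as the principal inertia axis, can be sketched with an eigen-decomposition of a second-moment matrix. For brevity, the directional-distance machinery is replaced here by raw direction samples; the function name and data are illustrative.

```python
import numpy as np

def main_orientation(vectors):
    """Dominant direction of a set of direction samples, taken as the
    eigenvector of the second-moment (inertia-like) matrix with the
    largest eigenvalue."""
    V = np.asarray(vectors, dtype=float)
    M = V.T @ V                    # 3x3 second-moment matrix
    w, e = np.linalg.eigh(M)       # eigenvalues in ascending order
    return e[:, -1]                # principal axis (unit vector)

# Direction samples scattered around the z axis
rng = np.random.default_rng(0)
samples = np.array([0.0, 0.0, 1.0]) + 0.1 * rng.standard_normal((200, 3))
axis = main_orientation(samples)
```

The recovered axis is a unit vector closely aligned with z, the dominant sample direction (up to sign, since an orientation has no polarity).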

  7. Low complexity mode decision for 3D-HEVC.

    PubMed

    Zhang, Qiuwen; Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC), developed by the joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency, at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlation between the depth map and motion activity to identify the prediction regions where variable-size CU and DE are needed, and to enable them only in those regions. Experimental results show that the proposed algorithm saves about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
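
The gating idea can be sketched as a simple activity test on depth-map blocks: flat regions skip the expensive variable-size CU splitting and disparity search. The variance criterion and threshold below are illustrative assumptions, not the paper's actual decision rule.

```python
import numpy as np

def needs_full_search(depth_block, var_threshold=4.0):
    """Gate variable-size CU splitting and disparity estimation (DE):
    homogeneous depth regions (low variance) skip the expensive search,
    while regions containing depth edges enable it."""
    return float(np.var(depth_block)) > var_threshold

flat = np.full((8, 8), 100.0)                        # homogeneous: skip
edge = np.concatenate([np.full((8, 4), 100.0),
                       np.full((8, 4), 40.0)], axis=1)  # depth edge: search
```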

  8. 3D Stratigraphic Modeling of Central Aachen

    NASA Astrophysics Data System (ADS)

    Dong, M.; Neukum, C.; Azzam, R.; Hu, H.

    2010-05-01

    x-, y-, z-coordinates, down-hole depth, and stratigraphic information are available. 4) We grouped stratigraphic units into four main layers based on an analysis of the geological setting of the modeling area. The stratigraphic units extend from the Quaternary and Cretaceous through the Carboniferous to the Devonian. In order to facilitate the determination of each unit's boundaries, a series of standard codes was used to integrate data with different descriptive attributes. 5) The Quaternary and Cretaceous units are characterized by subhorizontal layers. Kriging interpolation was applied to the borehole data in order to estimate the data distribution and surface relief of these layers. 6) The Carboniferous and Devonian units are folded. The lack of software support for simulating folds, together with the shallow depth of the boreholes and cross sections, constrained the determination of geological boundaries. A strategy of digitizing the fold surfaces from cross sections and establishing them as inclined strata was followed. The modeling was subdivided into two steps. The first step consisted of importing data into the modeling software. The second step involved the construction of the subhorizontal layers and folds, constrained by geological maps, cross sections and outcrops. The construction of the 3D stratigraphic model is of high relevance to further simulation and application, such as 1) lithological modeling; 2) answering simple questions such as "In which unit is the water table?" and calculating the volume of groundwater storage during assessment of aquifer vulnerability to contamination; and 3) assigning geotechnical properties to grid cells and providing them for user-required applications. Acknowledgements: Borehole data were kindly provided by the Municipality of Aachen. References: 1. Janet T. Watt, Jonathan M.G. Glen, David A. John and David A. Ponce (2007) Three-dimensional geologic model of the northern Nevada rift and the Beowawe geothermal system, north-central Nevada. Geosphere, v. 3
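
The surface-interpolation step between boreholes can be illustrated with inverse-distance weighting, a deliberately simpler stand-in for the kriging the record describes. The borehole coordinates and layer-top elevations below are hypothetical.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance weighting: a simple stand-in for kriging when
    interpolating a layer surface between boreholes."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    d = np.linalg.norm(xy_known - np.asarray(xy_query, dtype=float), axis=1)
    if np.any(d == 0):                 # query point hits a borehole exactly
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_known) / np.sum(w))

# Hypothetical top-of-Cretaceous elevations (m) at three boreholes
holes = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
tops = [120.0, 118.0, 121.0]
z = idw(holes, tops, (50.0, 50.0))
```

The interpolated elevation honours the data exactly at borehole locations and stays within the range of the observed values elsewhere; kriging would additionally weight the boreholes by a fitted spatial-covariance model.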

  9. Gis-Based Smart Cartography Using 3d Modeling

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography still in use is often not suitable for GIS, because it is not geometrically and topologically correctly structured. The aim of this research is to structure and organize a numerical cartography in 3D for GIS use and to turn it into standardized CityGML features. The work is framed around a first phase of methodological analysis aimed at identifying which existing standards (such as ISO and OGC rules) can be used to improve the quality of a cartographic structure. Subsequently, from these technical specifications, the translation into formal content was investigated, using a proprietary interchange software (SketchUp), to support guideline implementations for generating a 3D GIS structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, checked for quality against the above standards, and edited when and where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The 3D GIS structure has been managed in a GIS environment to run further spatial analyses and energy performance estimates not achievable in a 2D environment. In particular, geometrical building parameters (footprint, volume, etc.) are computed, and building envelope thermal characteristics are derived from them. Lastly, a simulation dealing with asbestos removal and home renovation charges shows how the built 3D city model can support municipal managers with risk diagnosis of the present situation and the development of strategies for a sustainable redevelopment.
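
The geometrical building parameters mentioned above follow directly from an LoD1 block model, in which each building is a footprint polygon extruded to a single height. A minimal sketch:

```python
def footprint_area(polygon):
    """Shoelace-formula area of a building footprint given as a list of
    (x, y) vertices in order."""
    n = len(polygon)
    s = 0.0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def lod1_volume(polygon, height):
    """An LoD1 building is an extruded footprint: volume = area x height."""
    return footprint_area(polygon) * height

# A 10 m x 10 m footprint extruded to 3 m
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

These per-building quantities are the inputs for the energy-performance estimates the record mentions; an LoD2 model with roof shapes would require a full polyhedral volume computation instead.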

  10. Stabilization of gaze during circular locomotion in light. I. Compensatory head and eye nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. A rhesus and cynomolgus monkey were trained to run around the perimeter of a circular platform in light. We call this "circular locomotion" because forward motion had an angular component. Head and body velocity in space were recorded with angular rate sensors and eye movements with electrooculography (EOG). From these measurements we derived signals related to the angular velocity of the eyes in the head (Eh), of the head on the body (Hb), of gaze on the body (Gb), of the body in space (Bs), of gaze in space (Gs), and of the gain of gaze (Gb/Bs). 2. The monkeys had continuous compensatory nystagmus of the head and eyes while running, which stabilized Gs during the slow phases. The eyes established and maintained compensatory gaze velocities at the beginning and end of the slow phases. The head contributed to gaze velocity during the middle of the slow phases. Slow phase Gb was as high as 250 degrees/s, and targets were fixed for gaze angles as large as 90-140 degrees. 3. Properties of the visual surround affected both the gain and strategy of gaze compensation in the one monkey tested. Gains of Eh ranged from 0.3 to 1.1 during compensatory gaze nystagmus. Gains of Hb varied around 0.3 (0.2-0.7), building to a maximum as Eh dropped while running past sectors of interest. Consistent with predictions, gaze gains varied from below to above unity, when translational and angular body movements with regard to the target were in opposite or the same directions, respectively. 4. Gaze moved in saccadic shifts in the direction of running during quick phases. Most head quick phases were small, and at times the head only paused during an eye quick phase. Eye quick phases were larger, ranging up to 60 degrees. This is larger than quick phases during passive rotation or saccades made with the head fixed. 5. These data indicate that head and eye nystagmus are natural phenomena that support gaze compensation during locomotion. Despite differential utilization of the head and
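
The derived velocity signals in this record combine by simple addition. A minimal sketch of the bookkeeping, where the sign convention (compensatory eye and head velocities carry the opposite sign to body rotation) is our assumption:

```python
def gaze_signals(Eh, Hb, Bs):
    """Angular-velocity bookkeeping (deg/s).

    Eh: eye-in-head, Hb: head-on-body, Bs: body-in-space.
    Gaze on body Gb = Eh + Hb; gaze in space Gs = Gb + Bs.
    Gaze is stabilized when Gs is near zero, i.e. Gb ~ -Bs,
    giving a gaze gain |Gb|/|Bs| near unity.
    """
    Gb = Eh + Hb
    Gs = Gb + Bs
    gain = abs(Gb) / abs(Bs) if Bs else float('inf')
    return Gb, Gs, gain

# Perfect compensation while the body turns at 250 deg/s:
# the eyes and head together cancel the body rotation.
Gb, Gs, gain = gaze_signals(Eh=-180.0, Hb=-70.0, Bs=250.0)
```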

  11. 3D kinematics using dual quaternions: theory and applications in neuroscience

    PubMed Central

    Leclercq, Guillaume; Lefèvre, Philippe; Blohm, Gunnar

    2013-01-01

    In behavioral neuroscience, many experiments are developed in 1 or 2 spatial dimensions, but when scientists tackle problems in 3 dimensions (3D), they often face new challenges. Results obtained in lower dimensions do not always extend to 3D. In the motor planning of eye, gaze or arm movements, or in sensorimotor transformation problems, the 3D kinematics of external objects (stimuli) or internal ones (body parts) must often be considered: how can the 3D position and orientation of these objects be described and linked together? We describe how dual quaternions provide a convenient way to describe 3D kinematics for position only (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions or combinations of these. We also derive expressions for the velocities of points and lines as well as the transformation velocities. We then apply these tools to a motor planning task for manual tracking and to the modeling of the forward and inverse kinematics of a seven-DOF, three-link arm, to demonstrate the value of dual quaternions as a tool for building models in these kinds of applications. PMID:23443667
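
The point transformation the abstract describes can be sketched directly: a unit dual quaternion (qr, qd) encodes a rotation qr followed by a translation t, with dual part qd = 0.5·(0, t)·qr. The helper names below are ours; this is a minimal illustration, not the authors' library.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_rt(qr, t):
    """Unit dual quaternion for rotation qr then translation t:
    dual part qd = 0.5 * (0, t) * qr."""
    qd = 0.5 * qmul(np.array([0.0, *t]), qr)
    return qr, qd

def dq_apply(dq, p):
    """Apply the rigid motion to point p: rotate by the sandwich
    qr * (0, p) * conj(qr), then translate by t = 2 * qd * conj(qr)."""
    qr, qd = dq
    rotated = qmul(qmul(qr, np.array([0.0, *p])), qconj(qr))[1:]
    t = 2.0 * qmul(qd, qconj(qr))[1:]
    return rotated + t

# 90 degree rotation about z, then translation (1, 2, 3):
# the point (1, 0, 0) rotates to (0, 1, 0) and translates to (1, 3, 3)
qr = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
p_new = dq_apply(dq_from_rt(qr, (1.0, 2.0, 3.0)), (1.0, 0.0, 0.0))
```

Composing two such motions is a single dual-quaternion product, which is what makes the representation convenient for chaining reference frames along an arm.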

  12. A Comparison of Model-Based (2D) and Design-Based (3D) Stereological Methods for Estimating Cell Number in the Substantia Nigra pars compacta of the C57BL/6J Mouse

    PubMed Central

    Baquet, Zachary C.; Williams, Daron; Brody, Joel; Smeyne, Richard J.

    2009-01-01

    The substantia nigra pars compacta (SNpc) is a compact brain structure that contains a variable distribution of cells in both the medial-to-lateral and rostral-to-caudal dimensions. The SNpc is the primary brain structure affected in Parkinson's disease, where loss of dopaminergic neurons is one of the major hallmarks of the disorder. Neurotoxic and genetic models of Parkinson's disease, as well as mechanisms to treat the disorder, are modeled in the mouse. To accurately assess the validity of a model, one needs to be assured that the method of analysis is accurate. Here, we determined the total number of dopaminergic neurons in the SNpc of the C57BL/6J mouse by serial reconstruction and then compared that value to estimates derived using model-based and design-based stereology. Serial reconstruction of the SNpc revealed the total number of SNpc dopaminergic neurons to be 8305±540 (SEM). We compared this empirically derived neuron number to model-based and design-based stereological estimates. We found that model-based estimates gave a value of 8002±91 (SEM), while design-based estimates were 8716±338 (SEM). Statistical analysis showed no significant difference between estimates generated using model- or design-based stereological methods and empirically derived counts using serial reconstruction. PMID:19376196
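
Design-based (3D) stereology typically uses the optical fractionator: the raw disector count is scaled by the inverses of the section, area, and thickness sampling fractions. The formula is standard; the sampling scheme and count below are hypothetical, not the paper's data.

```python
def optical_fractionator(counted, ssf, asf, tsf):
    """Design-based (optical fractionator) estimate of total cell number.

    counted: total cells counted in disectors (sum of Q-)
    ssf: section sampling fraction (e.g. every 4th section -> 1/4)
    asf: area sampling fraction (counting-frame area / grid-step area)
    tsf: thickness sampling fraction (disector height / section thickness)
    """
    return counted * (1.0 / ssf) * (1.0 / asf) * (1.0 / tsf)

# Hypothetical sampling scheme: every 4th section, 1/8 of the area,
# disector spanning half the section thickness
N = optical_fractionator(counted=130, ssf=1 / 4, asf=1 / 8, tsf=1 / 2)
```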

  13. Humans Have an Expectation That Gaze Is Directed Toward Them

    PubMed Central

    Mareschal, Isabelle; Calder, Andrew J.; Clifford, Colin W.G.

    2013-01-01

    Many animals use cues from another animal’s gaze to help distinguish friend from foe [1–3]. In humans, the direction of someone’s gaze provides insight into their focus of interest and state of mind [4] and there is increasing evidence linking abnormal gaze behaviors to clinical conditions such as schizophrenia and autism [5–11]. This fundamental role of another’s gaze is buoyed by the discovery of specific brain areas dedicated to encoding directions of gaze in faces [12–14]. Surprisingly, however, very little is known about how others’ direction of gaze is interpreted. Here we apply a Bayesian framework that has been successfully applied to sensory and motor domains [15–19] to show that humans have a prior expectation that other people’s gaze is directed toward them. This expectation dominates perception when there is high uncertainty, such as at night or when the other person is wearing sunglasses. We presented participants with synthetic faces viewed under high and low levels of uncertainty and manipulated the faces by adding noise to the eyes. Then, we asked the participants to judge relative gaze directions. We found that all participants systematically perceived the noisy gaze as being directed more toward them. This suggests that the adult nervous system internally represents a prior for gaze and highlights the importance of experience in developing our interpretation of another’s gaze. PMID:23562265
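
The Bayesian account has a simple conjugate-Gaussian form: a prior centered at 0° ("directed at me") is combined with a noisy sensory likelihood, and the posterior mean is a precision-weighted average that shrinks toward 0° as sensory noise grows. The numerical values below are illustrative, not the paper's fitted parameters.

```python
def posterior_gaze(sensed_deg, sigma_like, sigma_prior):
    """Posterior mean gaze direction (deg) under a Gaussian prior
    centered at 0 deg (gaze directed at the observer).

    Standard conjugate-Gaussian result: the sensed direction is
    shrunk toward the prior mean in proportion to sensory noise.
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * sensed_deg

# The same 10 deg averted gaze, seen clearly vs. through heavy noise
clear = posterior_gaze(10.0, sigma_like=1.0, sigma_prior=5.0)
noisy = posterior_gaze(10.0, sigma_like=10.0, sigma_prior=5.0)
```

With low noise the percept stays near the true 10°; with high noise it collapses toward "looking at me", reproducing the qualitative effect reported for sunglasses or dim light.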

  15. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for the development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that, for the segmentation and printing process used, measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.
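
One standard way to summarize agreement between two measurement methods (printed part vs. direct caliper measurement) is a Bland-Altman analysis: the mean difference (bias) plus 1.96·SD limits of agreement. The sketch below uses hypothetical measurement values, not the paper's data.

```python
import statistics

def agreement(method_a, method_b):
    """Bland-Altman style summary: mean difference (bias) and the
    1.96*SD limits of agreement between two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical vertebral feature measurements (mm):
# 3D printed model vs. direct ex-vivo measurement
printed = [25.1, 14.8, 10.2, 7.9, 16.0]
direct = [25.0, 15.0, 10.1, 8.0, 15.9]
bias, (lo, hi) = agreement(printed, direct)
```

A bias near zero with narrow limits of agreement is what "substantially the same" amounts to quantitatively; clinically acceptable limits would be set by the surgical tolerance for each feature.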