Science.gov

Sample records for 3d gaze estimation

  1. Head pose free 3D gaze estimation using RGB-D camera

    NASA Astrophysics Data System (ADS)

    Kacete, Amine; Séguier, Renaud; Collobert, Michel; Royan, Jérôme

    2017-02-01

    In this paper, we propose an approach for 3D gaze estimation under head pose variation using an RGB-D camera. Our method uses a 3D eye model to determine the 3D optical axis and infer the 3D visual axis. To do so, we robustly estimate the user's head pose parameters and eye pupil locations with ensembles of randomized trees trained on a large annotated training set. After projecting the eye pupil locations into the sensor coordinate system using the sensor intrinsic parameters and a one-time simple calibration, in which the user gazes at a known 3D target from different directions, the 3D eyeball centers of both eyes are determined for a specific user, yielding the visual axis. Experimental results demonstrate that our method achieves good gaze estimation accuracy even in a highly unconstrained environment, namely at large user-sensor distances (>1.5 m), unlike state-of-the-art methods, which deal with relatively small distances (<1 m).
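
    A minimal numerical sketch of the kind of one-point calibration geometry described above (an illustrative reading, not the authors' implementation): given the 3D pupil center and a known 3D gaze target in the sensor frame, the eyeball center can be placed one eyeball radius behind the pupil along the target-to-pupil line, after which the optical axis is the ray from that center through the pupil. The eyeball radius and the coordinates below are assumed values.

        import numpy as np

        EYEBALL_RADIUS_M = 0.012  # assumed average eyeball radius (~12 mm)

        def calibrate_eyeball_center(pupil_3d, target_3d, radius=EYEBALL_RADIUS_M):
            """Place the eyeball center one radius behind the pupil, on the line
            joining the calibration target and the pupil (illustrative geometry)."""
            pupil_3d = np.asarray(pupil_3d, float)
            target_3d = np.asarray(target_3d, float)
            back_dir = pupil_3d - target_3d
            back_dir /= np.linalg.norm(back_dir)
            return pupil_3d + radius * back_dir

        def optical_axis(eyeball_center, pupil_3d):
            """Unit vector of the optical axis (eyeball center -> pupil)."""
            d = np.asarray(pupil_3d, float) - np.asarray(eyeball_center, float)
            return d / np.linalg.norm(d)

        # Example: pupil and calibration target measured in the sensor frame (meters)
        center = calibrate_eyeball_center(pupil_3d=[0.03, 0.02, 0.60],
                                          target_3d=[0.00, 0.00, 1.50])
        print(optical_axis(center, [0.03, 0.02, 0.60]))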

  2. Estimating 3D gaze in physical environment: a geometric approach on consumer-level remote eye tracker

    NASA Astrophysics Data System (ADS)

    Wibirama, Sunu; Mahesa, Rizki R.; Nugroho, Hanung A.; Hamamoto, Kazuhiko

    2017-02-01

    Consumer-priced remote eye trackers have been used for various applications on flat computer screens. On the other hand, 3D gaze tracking in a physical environment has been useful for visualizing gaze behavior, controlling robots, and assistive technology. Instead of using affordable remote eye trackers, however, 3D gaze tracking in a physical environment has so far been performed with corporate-level head-mounted eye trackers, limiting its practical usage to niche users. In this research, we propose a novel method to estimate 3D gaze using a consumer-level remote eye tracker. We implement a geometric approach to obtain the 3D point of gaze from the binocular lines of sight. Experimental results show that the proposed method yielded low errors of 3.47+/-3.02 cm, 3.02+/-1.34 cm, and 2.57+/-1.85 cm in the X, Y, and Z dimensions, respectively. The proposed approach may be used as a starting point for designing interaction methods in 3D physical environments.
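
    A common way to realize the geometric idea above is to take the 3D point of gaze as the midpoint of the shortest segment between the two lines of sight. The sketch below assumes each eye contributes a 3D origin and a gaze direction vector; the paper's exact formulation may differ.

        import numpy as np

        def point_of_gaze(o_left, d_left, o_right, d_right):
            """Midpoint of the shortest segment between two gaze rays.

            o_*: 3D eye positions; d_*: gaze direction vectors (need not be unit)."""
            o1, d1 = np.asarray(o_left, float), np.asarray(d_left, float)
            o2, d2 = np.asarray(o_right, float), np.asarray(d_right, float)
            w0 = o1 - o2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b          # ~0 when the two rays are (nearly) parallel
            t1 = (b * e - c * d) / denom
            t2 = (a * e - b * d) / denom
            return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

        # Eyes 6 cm apart, both verging on a point roughly 50 cm ahead
        print(point_of_gaze([-0.03, 0, 0], [0.06, 0, 1.0], [0.03, 0, 0], [-0.06, 0, 1.0]))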

  3. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at a sampling rate of over 120 Hz with a resolution of 0.5-1 degree of visual angle. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs, our system yields effective real-time closed-loop control of devices (10 ms latency) after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.

  4. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces.

    PubMed

    Abbott, W W; Faisal, A A

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at a sampling rate of over 120 Hz with a resolution of 0.5-1 degree of visual angle. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs, our system yields effective real-time closed-loop control of devices (10 ms latency) after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.

  5. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    Several studies on gaze tracking techniques using monocular or stereo cameras have been reported. The most popular gaze estimation techniques are based on PCCR (Pupil Center & Cornea Reflection). These techniques track gaze on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images for 3D virtual space. To the best of our knowledge, this is the first paper to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glassless stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach for estimating gaze direction and gaze depth and show experimental results.

  6. 3D recovery of human gaze in natural environments

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Santner, Katrin; Fritz, Gerald; Mayer, Heinz

    2013-01-01

    The estimation of human attention has recently been addressed in the context of human robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model; the precision does not go below that of the technical instrument (≈1°). This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.
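
    Recovering the gaze pointer in a previously acquired 3D model, as described above, ultimately comes down to intersecting the estimated gaze ray with the model geometry. Below is a generic sketch using the Möller-Trumbore ray-triangle test; it is not the authors' pipeline, and the triangle-soup representation of the model is an assumption.

        import numpy as np

        def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
            """Möller-Trumbore test: distance along the ray to the triangle, or None."""
            origin, direction = np.asarray(origin, float), np.asarray(direction, float)
            v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = e1 @ p
            if abs(det) < eps:                 # ray parallel to the triangle plane
                return None
            t_vec = origin - v0
            u = (t_vec @ p) / det
            if u < 0.0 or u > 1.0:
                return None
            q = np.cross(t_vec, e1)
            v = (direction @ q) / det
            if v < 0.0 or u + v > 1.0:
                return None
            t = (e2 @ q) / det
            return t if t > eps else None      # hit distance along the gaze ray

        def gaze_pointer(origin, direction, triangles):
            """Closest intersection of the gaze ray with a list of (v0, v1, v2) triangles."""
            hits = [ray_triangle_intersect(origin, direction, *tri) for tri in triangles]
            hits = [t for t in hits if t is not None]
            if not hits:
                return None
            return np.asarray(origin, float) + min(hits) * np.asarray(direction, float)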

  7. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze in 3D directions in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision® to be used with a desktop stereoscopic display. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.

  8. A kinematic model for 3-D head-free gaze-shifts

    PubMed Central

    Daemi, Mehdi; Crawford, J. Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816

  9. Coordination of gaze and hand movements for tracking and tracing in 3D.

    PubMed

    Gielen, Constantinus C A M; Dijkstra, Tjeerd M H; Roozen, Irene J; Welten, Joke

    2009-03-01

    In this study we have investigated movements in three-dimensional space. Since most studies have investigated planar movements (like ellipses, cloverleaf shapes and "figure eights"), we have compared two generalizations of the two-thirds power law to three dimensions. In particular we have tested whether the two-thirds power law could be best described by tangential velocity and curvature in a plane (compatible with the idea of planar segmentation) or whether tangential velocity and curvature should be calculated in three dimensions. We defined total curvature in three dimensions as the square root of the sum of curvature squared and torsion squared. The results demonstrate that most of the variance is explained by tangential velocity and total curvature. This indicates that all three orthogonal components of movements in 3D are equally important and that movements are truly 3D and do not reflect a concatenation of 2D planar movement segments. In addition, we have studied the coordination of eye and hand movements in 3D by measuring binocular eye movements while subjects move the finger along a curved path. The results show that the directional component and finger position almost superimpose when subjects track a target moving in 3D. However, the vergence component of gaze leads finger position by about 250 msec. For drawing (tracing) the path of a visible 3D shape, the directional component of gaze leads finger position by about 225 msec, and the vergence component leads finger position by about 400 msec. These results are compatible with the idea that gaze leads hand position during drawing movement to assist prediction and planning of hand position in 3D space.
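
    As a worked illustration of the quantities discussed above (a sketch on synthetic data, not the authors' analysis), tangential velocity, curvature and torsion can be computed from a sampled 3D trajectory by finite differences, total curvature taken as the square root of curvature squared plus torsion squared, and the power-law exponent recovered by a log-log fit.

        import numpy as np

        def kinematics_3d(r, dt):
            """Tangential velocity, curvature and torsion of a sampled 3D path r (N x 3)."""
            r1 = np.gradient(r, dt, axis=0)            # first derivative
            r2 = np.gradient(r1, dt, axis=0)           # second derivative
            r3 = np.gradient(r2, dt, axis=0)           # third derivative
            speed = np.linalg.norm(r1, axis=1)
            cross = np.cross(r1, r2)
            cross_norm = np.linalg.norm(cross, axis=1)
            curvature = cross_norm / speed**3
            torsion = np.einsum('ij,ij->i', cross, r3) / cross_norm**2
            return speed, curvature, torsion

        # Synthetic elliptic helix sampled at 100 Hz (stand-in for a 3D drawing path)
        dt = 0.01
        t = np.arange(0.0, 4.0, dt)
        path = np.stack([0.10 * np.cos(2 * np.pi * t),
                         0.05 * np.sin(2 * np.pi * t),
                         0.02 * t], axis=1)

        v, kappa, tau = kinematics_3d(path, dt)
        total_curvature = np.sqrt(kappa**2 + tau**2)   # "total curvature" as defined above

        # Power-law fit v ~ C^beta; the classical two-thirds law corresponds to beta = -1/3
        beta = np.polyfit(np.log(total_curvature[5:-5]), np.log(v[5:-5]), 1)[0]
        print(f"fitted exponent beta = {beta:.2f}")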

  10. Perception can influence the vergence responses associated with open-loop gaze shifts in 3D

    PubMed Central

    Sheliga, B. M.; Miles, F. A.

    2006-01-01

    We sought to determine if perceived depth can elicit vergence eye movements independent of binocular disparity. A flat surface in the frontal plane appears slanted about a vertical axis when the image in one eye is vertically compressed relative to the image in the other eye: the induced size effect (Ogle, 1938). We show that vergence eye movements accompany horizontal gaze shifts across such surfaces, consistent with the direction of the perceived slant, despite the absence of a horizontal disparity gradient. All images were extinguished during the gaze shifts so that eye movements were executed open-loop. We also used vertical compression of one eye’s image to null the perceived slant resulting from prior horizontal compression of that image, and show that this reduces the vergence accompanying horizontal gaze shifts across the surface, even though the horizontal disparity is unchanged. When this last experiment was repeated using vertical expansions in place of the vertical compressions, the perceived slant was increased and so too was the vergence accompanying horizontal gaze shifts, although the horizontal disparity again remained unchanged. We estimate that the perceived depth accounted, on average, for 15–41% of the vergence in our experiments depending on the conditions. PMID:14765951

  11. Motion estimation in the 3-D Gabor domain.

    PubMed

    Feng, Mu; Reed, Todd R

    2007-08-01

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.

  12. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast-moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  13. Augmented saliency model using automatic 3D head pose detection and learned gaze following in natural scenes.

    PubMed

    Parks, Daniel; Borji, Ali; Itti, Laurent

    2015-11-01

    Previous studies have shown that gaze direction of actors in a scene influences eye movements of passive observers during free-viewing (Castelhano, Wieth, & Henderson, 2007; Borji, Parks, & Itti, 2014). However, no computational model has been proposed to combine bottom-up saliency with actor's head pose and gaze direction for predicting where observers look. Here, we first learn probability maps that predict fixations leaving head regions (gaze following fixations), as well as fixations on head regions (head fixations), both dependent on the actor's head size and pose angle. We then learn a combination of gaze following, head region, and bottom-up saliency maps with a Markov chain composed of head region and non-head region states. This simple structure allows us to inspect the model and make comments about the nature of eye movements originating from heads as opposed to other regions. Here, we assume perfect knowledge of actor head pose direction (from an oracle). The combined model, which we call the Dynamic Weighting of Cues model (DWOC), explains observers' fixations significantly better than each of the constituent components. Finally, in a fully automatic combined model, we replace the oracle head pose direction data with detections from a computer vision model of head pose. Using these (imperfect) automated detections, we again find that the combined model significantly outperforms its individual components. Our work extends the engineering and scientific applications of saliency models and helps better understand mechanisms of visual attention.

  14. Unsupervised partial volume estimation using 3D and statistical priors

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-07-01

    Our main objective is to compute the volume of interest of images from magnetic resonance imaging (MRI). We suggest a method based on maximum a posteriori estimation. Using texture models, we propose a new partial volume determination. We model tissues using generalized Gaussian distributions fitted from a mixture of their gray levels and texture information. Texture information relies on estimation errors from multiresolution and multispectral autoregressive models. A uniform distribution accounts for large estimation errors when dealing with unknown tissues. An initial segmentation, needed by the multiresolution segmentation deterministic relaxation algorithm, is found using an anatomical atlas. To model the a priori information, we use a full 3-D extension of Markov random fields. Our 3-D extension is straightforward, easily implemented, and includes single-label probability. Using the initial segmentation map and initial tissue models, iterative updates are made on the segmentation map and tissue models. Updating the tissue models removes field inhomogeneities. Partial volumes are computed from the final segmentation map and tissue models. Preliminary results are encouraging.

  15. Depth estimation from multiple coded apertures for 3D interaction

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Choi, Changkyu; Park, Dusik

    2013-09-01

    In this paper, we propose a novel depth estimation method from multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras consisting of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by using a shifting-and-averaging approach on the captured coded images. An initial depth map is then obtained by applying a focus operator to the stack of refocused images at each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system that captures the scene in front of the display. The system consists of a display screen and an x-ray detector without a scintillator layer, so that it acts as a visible-light sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object, including a human hand, in front of the display by capturing multiple MURA coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
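
    The depth-from-focus step described above can be sketched generically, assuming the stack of refocused images has already been generated; the Laplacian focus operator and the parabolic peak refinement below are common choices standing in for the paper's parametric focus model.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def depth_from_refocus_stack(stack, depths, window=9):
            """stack: (D, H, W) refocused images; depths: (D,) uniformly spaced distances.

            Returns a per-pixel depth map with parabolic (sub-sample) peak refinement."""
            stack = np.asarray(stack, float)
            depths = np.asarray(depths, float)
            # Focus measure: locally averaged Laplacian energy of each refocused slice
            focus = np.stack([uniform_filter(laplace(img)**2, size=window) for img in stack])
            best = np.argmax(focus, axis=0)                 # initial depth index per pixel
            b = np.clip(best, 1, len(depths) - 2)           # keep one neighbour on each side
            rows, cols = np.indices(best.shape)
            f_m = focus[b - 1, rows, cols]
            f_0 = focus[b, rows, cols]
            f_p = focus[b + 1, rows, cols]
            denom = f_m - 2.0 * f_0 + f_p
            offset = np.zeros_like(denom)
            np.divide(0.5 * (f_m - f_p), denom, out=offset, where=np.abs(denom) > 1e-12)
            step = depths[1] - depths[0]
            return depths[b] + np.clip(offset, -1.0, 1.0) * step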

  16. Fast human pose estimation using 3D Zernike descriptors

    NASA Astrophysics Data System (ADS)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

    Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Their losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such process is not a fine pose estimation, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.

  17. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
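
    One plausible reading of such an anthropometric eye model (an illustrative sketch only; the paper's exact geometry may differ) treats the midpoint of the two eye corners as the projection of the eyeball center, converts pixels to millimeters via the known inter-corner distance, and maps the horizontal midpupil offset to a gaze angle through the eyeball radius.

        import numpy as np

        # Assumed anthropometric constants (illustrative values, not the paper's)
        EYE_WIDTH_MM = 30.0       # distance between the two eye corners
        EYEBALL_RADIUS_MM = 12.0  # radius of the eye sphere

        def horizontal_gaze_angle(corner_left_px, corner_right_px, midpupil_px):
            """Horizontal gaze angle (degrees) from 2D eye-corner and midpupil locations."""
            c_l, c_r, p = (np.asarray(v, float)
                           for v in (corner_left_px, corner_right_px, midpupil_px))
            mm_per_px = EYE_WIDTH_MM / np.linalg.norm(c_r - c_l)  # pixel-to-mm scale
            eye_center = 0.5 * (c_l + c_r)                        # assumed center projection
            offset_mm = (p[0] - eye_center[0]) * mm_per_px        # horizontal pupil offset
            s = np.clip(offset_mm / EYEBALL_RADIUS_MM, -1.0, 1.0)
            return float(np.degrees(np.arcsin(s)))

        # Example with hypothetical pixel coordinates (left corner, right corner, midpupil)
        print(horizontal_gaze_angle((100, 200), (160, 200), (136, 198)))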

  18. A new gaze estimation method considering external light.

    PubMed

    Lee, Jong Man; Lee, Hyeon Chang; Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Cho, Chul Woo; Park, Kang Ryoung; Kim, Hyun-Cheol; Cha, Jihun

    2015-03-11

    Gaze tracking systems usually utilize near-infrared (NIR) lights and NIR cameras, and the performance of such systems is mainly affected by external light sources that include NIR components. This is ascribed to the production of additional (imposter) corneal specular reflection (SR) caused by the external light, which makes it difficult to discriminate between the correct SR, as caused by the NIR illuminator of the gaze tracking system, and the imposter SR. To overcome this problem, a new method is proposed for determining the correct SR in the presence of external light, based on the relationship between the corneal SR and the pupil movable area together with the relative position of the pupil and the corneal SR. The experimental results showed that the proposed method makes the gaze tracking system robust to the presence of external light.

  19. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating the pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (∼4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of a 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
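
    The "regressed out" correction mentioned above can be illustrated with a minimal least-squares sketch on synthetic data; the study's actual correction may be more elaborate.

        import numpy as np

        def regress_out_pupil(gaze, pupil_size):
            """Remove the component of gaze position linearly explained by pupil size.

            gaze: (N,) gaze coordinate in degrees; pupil_size: (N,) pupil diameter or area."""
            X = np.column_stack([pupil_size, np.ones_like(pupil_size)])  # slope + intercept
            coef, *_ = np.linalg.lstsq(X, gaze, rcond=None)
            corrected = gaze - X @ coef + gaze.mean()   # keep the original mean position
            return corrected, coef

        # Synthetic fixation: true gaze is constant, but a slow pupil constriction
        # leaks into the measured gaze position as a spurious drift.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 16.0, 1600)
        pupil = 4.0 - 0.5 * (t / 16.0) + 0.05 * rng.standard_normal(t.size)
        gaze = 0.2 * pupil + 0.02 * rng.standard_normal(t.size)   # artifactual coupling

        corrected, coef = regress_out_pupil(gaze, pupil)
        print("slope (deg per unit pupil size):", round(coef[0], 3))
        print("gaze s.d. before/after:", round(gaze.std(), 4), round(corrected.std(), 4))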

  20. Gaze estimation for off-angle iris recognition based on the biometric eye model

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Barstow, Del; Santos-Villalobos, Hector; Thompson, Joseph; Bolme, David; Boehnen, Christopher

    2013-05-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method proves effective, with an average gaze estimation error of approximately 3.5 degrees over a 50-degree range for our biometric eye model.

  1. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    SciTech Connect

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J; Thompson, Joseph W; Bolme, David S; Boehnen, Chris Bensing

    2013-01-01

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from the elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method proves effective, with an average gaze estimation error of approximately 3.5 degrees over a 50-degree range for our biometric eye model.

  2. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  3. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    SciTech Connect

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  4. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  5. Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis.

    PubMed

    Lu, Feng; Sugano, Yusuke; Okabe, Takahiro; Sato, Yoichi

    2015-11-01

    In this paper, we address the problem of free head motion in appearance-based gaze estimation. This problem remains challenging because head motion changes eye appearance significantly, and thus, training images captured for an original head pose cannot handle test images captured for other head poses. To overcome this difficulty, we propose a novel gaze estimation method that handles free head motion via eye image synthesis based on a single camera. Compared with conventional fixed head pose methods with original training images, our method only captures four additional eye images under four reference head poses, and then, precisely synthesizes new training images for other unseen head poses in estimation. To this end, we propose a single-directional (SD) flow model to efficiently handle eye image variations due to head motion. We show how to estimate SD flows for reference head poses first, and then use them to produce new SD flows for training image synthesis. Finally, with synthetic training images, joint optimization is applied that simultaneously solves an eye image alignment and a gaze estimation. Evaluation of the method was conducted through experiments to assess its performance and demonstrate its effectiveness.

  6. Vision-Based 3D Motion Estimation for On-Orbit Proximity Satellite Tracking and Navigation

    DTIC Science & Technology

    2015-06-01

    Master's thesis, Naval Postgraduate School, Monterey, California; approved for public release, distribution is unlimited. The external structure was printed using the Fortus 400mc 3D rapid-prototyping printer of the NPS Space Systems Academic Group, while the internal structure is made of aluminum.

  7. Impact of Building Heights on 3D Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated building heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined across and within a study area. In our research, accurate planar information from city authorities is used during 3D density estimation as reference data, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). In local areas, experimental results show that land-use blocks with low FAR values often have small errors due to small building height errors for low buildings in the blocks, and blocks with high FAR values often have large errors due to large building height errors for high buildings in the blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; therefore, building heights must be considered when spaceborne stereo imagery is used to estimate 3D density in order to improve precision.
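
    As a small numerical illustration of how underestimated building heights propagate into the Floor Area Ratio (FAR), the sketch below uses an assumed storey height, block area and building list (illustrative values only, not the paper's data).

        # FAR = total floor area / block land area, with floor counts derived from
        # building heights; an underestimated height therefore lowers the estimated FAR.
        FLOOR_HEIGHT_M = 3.0          # assumed storey height
        BLOCK_AREA_M2 = 10_000.0      # assumed land-use block area

        buildings = [                 # (footprint m^2, true height m, estimated height m)
            (800.0, 45.0, 39.0),
            (600.0, 30.0, 27.0),
            (400.0, 12.0, 12.0),
        ]

        def far(use_estimated_heights):
            total_floor_area = 0.0
            for footprint, h_true, h_est in buildings:
                h = h_est if use_estimated_heights else h_true
                total_floor_area += footprint * round(h / FLOOR_HEIGHT_M)
            return total_floor_area / BLOCK_AREA_M2

        print("FAR from true heights:     ", round(far(False), 2))   # 1.96
        print("FAR from estimated heights:", round(far(True), 2))    # 1.74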

  8. 3-D, bluff body drag estimation using a Green's function/Gram-Charlier series approach.

    SciTech Connect

    Barone, Matthew Franklin; De Chant, Lawrence Justin

    2004-05-01

    In this study, we describe the extension of the 2-d preliminary design bluff body drag estimation tool developed by De Chant to apply to 3-d flows. As with the 2-d method, the 3-d extension uses a combined approximate Green's function/Gram-Charlier series approach to retain the body geometry information. Whereas the 2-d methodology relied solely upon the use of small disturbance theory for the inviscid flow field associated with the body of interest to estimate the near-field initial conditions, e.g. velocity defect, the 3-d methodology uses both analytical (where available) and numerical inviscid solutions. The defect solution is then used as an initial condition in an approximate 3-d Green's function solution. Finally, the Green's function solution is matched to the 3-d analog of the classical 2-d Gram-Charlier series and then integrated to yield the net form drag on the bluff body. Preliminary results indicate that the drag estimates computed are of accuracy equivalent to the 2-d method for flows with large separation, i.e. less than 20% relative error. As with the lower-dimensional method, the 3-d concept is intended to be a supplement to turbulent Navier-Stokes and experimental solutions for estimating drag coefficients over blunt bodies.

  9. 3-D, bluff body drag estimation using a Green's function/Gram-Charlier series approach.

    SciTech Connect

    Barone, Matthew Franklin; De Chant, Lawrence Justin

    2005-01-01

    In this study, we describe the extension of the 2-d preliminary design bluff body drag estimation tool developed by De Chant [1] to apply to 3-d flows. As with the 2-d method, the 3-d extension uses a combined approximate Green's function/Gram-Charlier series approach to retain the body geometry information. Whereas the 2-d methodology relied solely upon the use of small disturbance theory for the inviscid flow field associated with the body of interest to estimate the near-field initial conditions, e.g. velocity defect, the 3-d methodology uses both analytical (where available) and numerical inviscid solutions. The defect solution is then used as an initial condition in an approximate 3-d Green's function solution. Finally, the Green's function solution is matched to the 3-d analog of the classical 2-d Gram-Charlier series and then integrated to yield the net form drag on the bluff body. Preliminary results indicate that the drag estimates computed are of accuracy equivalent to the 2-d method for flows with large separation, i.e. less than 20% relative error. As with the lower-dimensional method, the 3-d concept is intended to be a supplement to turbulent Navier-Stokes and experimental solutions for estimating drag coefficients over blunt bodies.

  10. Comparison of 3D-OP-OSEM and 3D-FBP reconstruction algorithms for High-Resolution Research Tomograph studies: effects of randoms estimation methods

    NASA Astrophysics Data System (ADS)

    van Velden, Floris H. P.; Kloet, Reina W.; van Berckel, Bart N. M.; Wolfensberger, Saskia P. A.; Lammertsma, Adriaan A.; Boellaard, Ronald

    2008-06-01

    The High-Resolution Research Tomograph (HRRT) is a dedicated human brain positron emission tomography (PET) scanner. Recently, a 3D filtered backprojection (3D-FBP) reconstruction method has been implemented to reduce bias in short duration frames, currently observed in 3D ordinary Poisson OSEM (3D-OP-OSEM) reconstructions. Further improvements might be expected using a new method of variance reduction on randoms (VRR) based on coincidence histograms instead of using the delayed window technique (DW) to estimate randoms. The goal of this study was to evaluate VRR in combination with 3D-OP-OSEM and 3D-FBP reconstruction techniques. To this end, several phantom studies and a human brain study were performed. For most phantom studies, 3D-OP-OSEM showed higher accuracy of observed activity concentrations with VRR than with DW. However, both positive and negative deviations in reconstructed activity concentrations and large biases of grey to white matter contrast ratio (up to 88%) were still observed as a function of scan statistics. Moreover 3D-OP-OSEM+VRR also showed bias up to 64% in clinical data, i.e. in some pharmacokinetic parameters as compared with those obtained with 3D-FBP+VRR. In the case of 3D-FBP, VRR showed similar results as DW for both phantom and clinical data, except that VRR showed a better standard deviation of 6-10%. Therefore, VRR should be used to correct for randoms in HRRT PET studies.

  11. Estimation of the degree of polarization in low-light 3D integral imaging

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2016-06-01

    The calculation of the Stokes Parameters and the Degree of Polarization in 3D integral images requires a careful manipulation of the polarimetric elemental images. This fact is particularly important if the scenes are taken in low-light conditions. In this paper, we show that the Degree of Polarization can be effectively estimated even when elemental images are recorded with few photons. The original idea was communicated in [A. Carnicer and B. Javidi, "Polarimetric 3D integral imaging in photon-starved conditions," Opt. Express 23, 6408-6417 (2015)]. First, we use the Maximum Likelihood Estimation approach for generating the 3D integral image. Nevertheless, this method produces very noisy images and thus, the degree of polarization cannot be calculated. We suggest using a Total Variation Denoising filter as a way to improve the quality of the generated 3D images. As a result, noise is suppressed but high frequency information is preserved. Finally, the degree of polarization is obtained successfully.
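
    A compact sketch of the reconstruct-then-denoise-then-estimate pipeline outlined above, computing the degree of linear polarization from four assumed polarizer channels (0°, 45°, 90°, 135°) and using the scikit-image total-variation filter; the elemental-image geometry and the authors' exact estimator are not reproduced here.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def reconstruct_ml(elemental_stack):
            """Maximum-likelihood intensity under a Poisson photon-count model:
            the per-pixel mean of the registered elemental images (stack: K x H x W)."""
            return np.asarray(elemental_stack, float).mean(axis=0)

        def degree_of_linear_polarization(i0, i45, i90, i135, weight=0.1):
            """Stokes parameters and DoLP from four linear-polarizer channels,
            each TV-denoised first to suppress photon-starved noise."""
            i0, i45, i90, i135 = (denoise_tv_chambolle(np.asarray(img, float), weight=weight)
                                  for img in (i0, i45, i90, i135))
            s0 = i0 + i90                     # total intensity
            s1 = i0 - i90
            s2 = i45 - i135
            return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)

        # Each i* channel would itself be a photon-limited reconstruction,
        # e.g. i0 = reconstruct_ml(stack_of_elemental_images_at_0_degrees)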

  12. Recursive estimation of 3D motion and surface structure from local affine flow parameters.

    PubMed

    Calway, Andrew

    2005-04-01

    A recursive structure from motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normals in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. This also estimates the camera focal length and, so, the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.

  13. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties.

    PubMed

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B

    2016-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it does not provide nor use luminance. This method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces. This method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.

  14. Event-Based 3D Motion Flow Estimation Using 4D Spatio Temporal Subspaces Properties

    PubMed Central

    Ieng, Sio-Hoi; Carneiro, João; Benosman, Ryad B.

    2017-01-01

    State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image using luminance—sampled at the frame rate of the cameras—as the principal source of information. We introduce in this paper a purely time-based approach to estimate the flow from 3D point clouds primarily output by neuromorphic event-based stereo camera rigs, or by any existing 3D depth sensor even if it does not provide nor use luminance. This method formulates the scene flow problem by applying a local piecewise regularization of the scene flow. The formulation provides a unifying framework to estimate scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time using a decomposition into its subspaces. This method naturally exploits the properties of neuromorphic asynchronous event-based vision sensors, which allow continuous-time 3D point cloud reconstruction. The approach can also handle the motion of deformable objects. Experiments using different 3D sensors are presented. PMID:28220057

  15. Building Proteins in a Day: Efficient 3D Molecular Structure Estimation with Electron Cryomicroscopy.

    PubMed

    Punjani, Ali; Brubaker, Marcus A; Fleet, David J

    2017-04-01

    Discovering the 3D atomic-resolution structure of molecules such as proteins and viruses is one of the foremost research problems in biology and medicine. Electron Cryomicroscopy (cryo-EM) is a promising vision-based technique for structure estimation which attempts to reconstruct 3D atomic structures from a large set of 2D transmission electron microscope images. This paper presents a new Bayesian framework for cryo-EM structure estimation that builds on modern stochastic optimization techniques to allow one to scale to very large datasets. We also introduce a novel Monte-Carlo technique that reduces the cost of evaluating the objective function during optimization by over five orders of magnitude. The net result is an approach capable of estimating 3D molecular structure from large-scale datasets in about a day on a single CPU workstation.

  16. Human body 3D posture estimation using significant points and two cameras.

    PubMed

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.

  17. Cervical vertebrae maturation index estimates on cone beam CT: 3D reconstructions vs sagittal sections

    PubMed Central

    Bonfim, Marco A E; Costa, André L F; Ximenez, Michel E L; Cotrim-Ferreira, Flávio A; Ferreira-Santos, Rívea I

    2016-01-01

    Objectives: The aim of this study was to evaluate the performance of CBCT three-dimensional (3D) reconstructions and sagittal sections for estimates of cervical vertebrae maturation index (CVMI). Methods: The sample consisted of 72 CBCT examinations from patients aged 8–16 years (45 females and 27 males) selected from the archives of two private clinics. Two calibrated observers (kappa scores: ≥0.901) interpreted the CBCT settings twice. Intra- and interobserver agreement for both imaging exhibition modes was analyzed by kappa statistics, which was also used to analyze the agreement between 3D reconstructions and sagittal sections. Correlations between cervical vertebrae maturation estimates and chronological age, as well as between the assessments by 3D reconstructions and sagittal sections, were analyzed using gamma Goodman–Kruskal coefficients (α = 0.05). Results: The kappa scores evidenced almost perfect agreement between the first and second assessments of the cervical vertebrae by 3D reconstructions (0.933–0.983) and sagittal sections (0.983–1.000). Similarly, the agreement between 3D reconstructions and sagittal sections was almost perfect (kappa index: 0.983). In most divergent cases, the difference between 3D reconstructions and sagittal sections was one stage of CVMI. Strongly positive correlations (>0.8, p < 0.001) were found not only between chronological age and CVMI but also between the estimates by 3D reconstructions and sagittal sections (p < 0.001). Conclusions: Although CBCT imaging must not be used exclusively for this purpose, it may be suitable for skeletal maturity assessments. PMID:26509559

  18. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point-cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point-cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve the spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point-clouds for the drive-through data will also be presented.
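
    For reference, the core point-to-point ICP iteration mentioned above (nearest-neighbour association followed by a closed-form rigid alignment) can be sketched as follows; this is the textbook variant, not the modified algorithm used in the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, c_dst - R @ c_src

        def icp(src, dst, iters=30, tol=1e-6):
            """Align point cloud src (N x 3) to dst (M x 3); returns (R, t, aligned src)."""
            tree = cKDTree(dst)
            current = src.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            prev_err = np.inf
            for _ in range(iters):
                dists, idx = tree.query(current)          # nearest-neighbour association
                R, t = best_rigid_transform(current, dst[idx])
                current = current @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
                err = dists.mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return R_total, t_total, current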

  19. On the Estimation of Forest Resources Using 3D Remote Sensing Techniques and Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Karjalainen, Mika; Karila, Kirsi; Liang, Xinlian; Yu, Xiaowei; Huang, Guoman; Lu, Lijun

    2016-08-01

    In recent years, 3D-capable remote sensing techniques have shown great potential in forest biomass estimation because of their ability to measure forest canopy structure, tree height and density. The objective of the Dragon3 forest resources research project (ID 10667) and the supporting ESA young scientist project (ESA contract NO. 4000109483/13/I-BG) was to study the use of satellite-based 3D techniques in forest tree height estimation, and consequently in forest biomass and biomass change estimation, by combining satellite data with terrestrial measurements. Results from airborne 3D techniques were also used in the project. Even though forest tree height can be estimated from 3D satellite SAR data to some extent, there is a need for field reference plots. For this reason, we have also been developing automated field plot measurement techniques based on Terrestrial Laser Scanning (TLS) data, which can be used to train and calibrate satellite-based estimation models. In this paper, results of canopy height models created from TerraSAR-X stereo and TanDEM-X InSAR data are shown, as well as preliminary results from the TLS field plot measurement system. Also, results from the airborne CASMSAR system for measuring forest canopy height from P- and X-band InSAR are presented.

  20. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.

  1. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868

  2. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and, from that, estimate the airway obstruction percentage and tonsil volume in realistic 3D-printed models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  3. 2-D Versus 3-D Cross-Correlation-Based Radial and Circumferential Strain Estimation Using Multiplane 2-D Ultrafast Ultrasound in a 3-D Atherosclerotic Carotid Artery Model.

    PubMed

    Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L

    2016-10-01

    Three-dimensional (3-D) strain estimation might improve the detection and localization of high strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed to the model based on the literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated using tracking of the IF displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2-D technique.
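
    The two core ingredients named above, normalized cross-correlation displacement tracking and least-squares strain estimation, can be sketched in one dimension as follows; window handling, sub-sample interpolation and the displacement compounding step are omitted, and all names are illustrative.

      import numpy as np

      def ncc_displacement(ref, cur, max_lag=20):
          # Integer-sample shift of cur relative to ref that maximizes normalized cross-correlation.
          n = len(ref)
          best_lag, best_ncc = 0, -np.inf
          for lag in range(-max_lag, max_lag + 1):
              lo, hi = max(0, lag), min(n, n + lag)
              a, b = ref[lo - lag:hi - lag], cur[lo:hi]
              a = (a - a.mean()) / (a.std() + 1e-12)
              b = (b - b.mean()) / (b.std() + 1e-12)
              ncc = np.mean(a * b)
              if ncc > best_ncc:
                  best_lag, best_ncc = lag, ncc
          return best_lag

      def ls_strain(depths, displacements):
          # Least-squares strain: slope of accumulated displacement versus depth.
          A = np.vstack([depths, np.ones_like(depths)]).T
          slope, _ = np.linalg.lstsq(A, displacements, rcond=None)[0]
          return slope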

  4. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  5. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
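
    A hypothetical sketch of the motion-model idea: principal components of pre-treatment deformation vector fields (DVFs) are fitted to a partial observation, for example displacements recovered from a 2D kV projection; the array shapes, the two-mode choice and the least-squares fit are assumptions, not the authors' exact pipeline.

      import numpy as np

      def build_motion_model(dvfs, n_modes=2):
          # dvfs: (n_phases, n_dof) flattened deformation vector fields from 4DCBCT phases.
          mean = dvfs.mean(axis=0)
          _, _, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
          return mean, Vt[:n_modes]            # mean DVF and principal motion modes

      def estimate_dvf(mean, modes, obs_idx, obs_vals):
          # Least-squares mode weights from displacements observed at indices obs_idx only,
          # then reconstruction of the full 3-D deformation vector field.
          A = modes[:, obs_idx].T              # (n_obs, n_modes)
          b = obs_vals - mean[obs_idx]
          w, *_ = np.linalg.lstsq(A, b, rcond=None)
          return mean + w @ modes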

  6. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    PubMed Central

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Cai, Weixing; Rottmann, Joerg; Li, Ruijiang; Williams, Christopher; Wagar, Matthew; Berbeco, Ross; Ionascu, Dan; Lewis, John H.

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we develop and perform initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and use these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparing to ground truth digital and physical phantom images. The performance of 4DCBCT- and 4DCT- based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms, and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. PMID:25905722

  7. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation.

    PubMed

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed; Pihl, Michael Johannes; Hansen, Kristoffer Lindskov; Stuart, Matthias Bo; Thomsen, Carsten; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-03-01

    Current clinical ultrasound (US) systems are limited to show blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The aim of this paper is to estimate precise flow rates and peak velocities derived from 3-D vector flow estimates. The emission sequence provides 3-D vector flow estimates at up to 1.145 frames/s in a plane, and was used to estimate 3-D vector flow in a cross-sectional image plane. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom ( ∅=8 mm) connected to a flow pump capable of generating time varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared with the expected 79.8 L/min, and to 2.68 ± 0.04 mL/stroke in the pulsating environment compared with the expected 2.57 ± 0.08 mL/stroke. Flow rates estimated in the common carotid artery of a healthy volunteer are compared with magnetic resonance imaging (MRI) measured flow rates using a 1-D through-plane velocity sequence. Mean flow rates were 333 ± 31 mL/min for the presented method and 346 ± 2 mL/min for the MRI measurements.

  8. Estimating 3D tilt from local image cues in natural scenes

    PubMed Central

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
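
    For a single cue, the data-driven estimate described above can be loosely approximated by binning cue values and taking the circular mean of ground-truth tilt within each bin; the sketch assumes a 180-degree tilt period and simple averaging, which is a simplification of the authors' estimator.

      import numpy as np
      from scipy.stats import binned_statistic

      def tilt_estimator(cue, tilt_deg, bins=36):
          # Approximate conditional (circular) mean of ground-truth tilt given one cue value.
          ang = np.deg2rad(2 * tilt_deg)       # map the 180-degree tilt period onto the circle
          s, edges, _ = binned_statistic(cue, np.sin(ang), 'mean', bins=bins)
          c, _, _ = binned_statistic(cue, np.cos(ang), 'mean', bins=bins)
          centers = 0.5 * (edges[:-1] + edges[1:])
          est = np.rad2deg(np.arctan2(s, c)) / 2 % 180
          return centers, est                  # tilt estimate per cue bin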

  9. 2D versus 3D cross-correlation-based radial and circumferential strain estimation using multiplane 2D ultrafast ultrasound in a 3D atherosclerotic carotid artery model.

    PubMed

    Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L

    2016-08-25

    Three-dimensional strain estimation might improve the detection and localization of high strain regions in the carotid artery for identification of vulnerable plaques. This study compares 2D vs. 3D displacement estimation in terms of radial and circumferential strain using simulated ultrasound images of a patient specific 3D atherosclerotic carotid artery model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed to the model based on literature data. A Philips L11-3 linear array transducer was simulated which transmitted plane waves at 3 alternating angles at a pulse repetition rate of 10 kHz. Inter-frame radiofrequency ultrasound data were simulated in Field II for 191 equally spaced longitudinal positions of the internal carotid artery. Accumulated radial and circumferential displacements were estimated using tracking of the inter-frame displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2D and 3D method was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3D displacement estimation for the entire cardiac cycle. The 3D technique clearly outperformed the 2D technique in phases with high inter-frame longitudinal motion. In fact the large inter-frame longitudinal motion rendered it impossible to accurately track the tissue and cumulate strains over the entire cardiac cycle with the 2D technique.

  10. Fine-Scale Population Estimation by 3D Reconstruction of Urban Residential Buildings

    PubMed Central

    Wang, Shixin; Tian, Ye; Zhou, Yi; Liu, Wenliang; Lin, Chenxi

    2016-01-01

    Fine-scale population estimation is essential in emergency response and epidemiological applications as well as urban planning and management. However, representing populations in heterogeneous urban regions with a finer resolution is a challenge. This study aims to obtain fine-scale population distribution based on 3D reconstruction of urban residential buildings with morphological operations using optical high-resolution (HR) images from the Chinese No. 3 Resources Satellite (ZY-3). Specifically, the research area was first divided into three categories when dasymetric mapping was taken into consideration. The results demonstrate that the morphological building index (MBI) yielded better results than built-up presence index (PanTex) in building detection, and the morphological shadow index (MSI) outperformed color invariant indices (CIIT) in shadow extraction and height retrieval. Building extraction and height retrieval were then combined to reconstruct 3D models and to estimate population. Final results show that this approach is effective in fine-scale population estimation, with a mean relative error of 16.46% and an overall Relative Total Absolute Error (RATE) of 0.158. This study gives significant insights into fine-scale population estimation in complicated urban landscapes, when detailed 3D information of buildings is unavailable. PMID:27775670
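
    The dasymetric step essentially distributes each census total over the reconstructed residential buildings; the sketch below weights by building volume (footprint area times retrieved height), which is an illustrative assumption rather than the paper's exact weighting scheme.

      import numpy as np

      def allocate_population(total_pop, footprint_area_m2, height_m):
          # Distribute a zone's census total in proportion to reconstructed 3-D building volume.
          volume = np.asarray(footprint_area_m2) * np.asarray(height_m)
          return total_pop * volume / volume.sum()

      # e.g. 20,000 people spread over three buildings by relative volume (made-up numbers)
      print(allocate_population(20000, [600, 450, 800], [21, 15, 30]))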

  11. Estimation of line dimensions in 3D direct laser writing lithography

    NASA Astrophysics Data System (ADS)

    Guney, M. G.; Fedder, G. K.

    2016-10-01

    Two photon polymerization (TPP) based 3D direct laser writing (3D-DLW) finds application in a wide range of research areas ranging from photonic and mechanical metamaterials to micro-devices. Most common structures are either single lines or formed by a set of interconnected lines as in the case of crystals. In order to increase the fidelity of these structures and reach the ultimate resolution, the laser power and scan speed used in the writing process should be chosen carefully. However, the optimization of these writing parameters is an iterative and time consuming process in the absence of a model for the estimation of line dimensions. To this end, we report a semi-empirical analytic model through simulations and fitting, and demonstrate that it can be used for estimating the line dimensions mostly within one standard deviation of the average values over a wide range of laser power and scan speed combinations. The model delimits the trend in onset of micro-explosions in the photoresist due to over-exposure and of low degree of conversion due to under-exposure. The model guides setting of high-fidelity and robust writing parameters of a photonic crystal structure without iteration and in close agreement with the estimated line dimensions. The proposed methodology is generalizable by adapting the model coefficients to any 3D-DLW setup and corresponding photoresist as a means to estimate the line dimensions for tuning the writing parameters.
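
    A semi-empirical model of this kind can be fitted with ordinary nonlinear least squares; the power-law form, coefficients and data points below are purely illustrative stand-ins for the paper's model and measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def line_width(X, a, b, c):
          # Hypothetical form: width grows with laser power P and shrinks with scan speed v.
          P, v = X
          return a * P**b * v**-c

      P = np.array([10, 12, 14, 16, 18, 20], float)        # laser power, mW (made-up)
      v = np.array([100, 100, 200, 200, 400, 400], float)  # scan speed, um/s (made-up)
      w = np.array([0.20, 0.23, 0.24, 0.27, 0.28, 0.30])   # measured line width, um (made-up)

      params, cov = curve_fit(line_width, (P, v), w, p0=[0.05, 1.0, 0.1])
      print("fitted a, b, c:", params)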

  12. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing therefore the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of the existing methods. This method was optimized on simulated data sets in previous work and is currently tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patient with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with the ones estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to the reference methods, the relationship between the different patient groups was similar.

  13. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture.

    PubMed

    Xu, Gang; Xing, Mengdao; Xia, Xiang-Gen; Zhang, Lei; Chen, Qianqian; Bao, Zheng

    2016-05-01

    In the current scenario of high-resolution inverse synthetic aperture radar (ISAR) imaging, the non-cooperative targets may have strong maneuverability, which tends to cause time-variant Doppler modulation and imaging plane in the echoed data. Furthermore, it is still a challenge to realize ISAR imaging of maneuvering targets from sparse aperture (SA) data. In this paper, we focus on the problem of 3D geometry and motion estimations of maneuvering targets for interferometric ISAR (InISAR) with SA. For a target of uniformly accelerated rotation, the rotational modulation in echo is formulated as chirp sensing code under a chirp-Fourier dictionary to represent the maneuverability. In particular, a joint multi-channel imaging approach is developed to incorporate the multi-channel data and treat the multi-channel ISAR image formation as a joint-sparsity constraint optimization. Then, a modified orthogonal matching pursuit (OMP) algorithm is employed to solve the optimization problem to produce high-resolution range-Doppler (RD) images and chirp parameter estimation. The 3D target geometry and the motion estimations are followed by using the acquired RD images and chirp parameters. Herein, a joint estimation approach of 3D geometry and rotation motion is presented to realize outlier removing and error reduction. In comparison with independent single-channel processing, the proposed joint multi-channel imaging approach performs better in 2D imaging, 3D imaging, and motion estimation. Finally, experiments using both simulated and measured data are performed to confirm the effectiveness of the proposed algorithm.

  14. Strain estimation in 3D by fitting linear and planar data to the March model

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Talbot, Christopher J.

    2016-08-01

    The probability density function associated with the March model is derived and used in a maximum likelihood method to estimate the best fit distribution and 3D strain parameters for a given set of linear or planar data. Typically it is assumed that in the initial state (pre-strain) linear or planar data are uniformly distributed on the sphere which means the number of strain parameters estimated needs to be reduced so that the numerical technique succeeds. Essentially this requires that the data are rotated into a suitable reference frame prior to analysis. The method has been applied to a suitable example from the Dalradian of SW Scotland and results obtained are consistent with those from an independent method of strain analysis. Despite March theory having been incorporated deep into the fabric of geological strain analysis, its full potential as a simple direct 3D strain analytical tool has not been achieved. The method developed here may help remedy this situation.

  15. Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.

    PubMed

    Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L

    2015-03-01

    Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated if the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique.

  16. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
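
    The height-histogram idea can be sketched as follows: the most populated height bin of the point cloud is taken as the ground level and points within a tolerance of it are labelled ground; the Gibbs-Markov refinement stage is omitted, and the bin size and tolerance are assumed values.

      import numpy as np

      def ground_mask(points, bin_size=0.1, tol=0.3):
          # points: (N, 3) array; returns True for points near the dominant height level.
          z = points[:, 2]
          bins = np.arange(z.min(), z.max() + bin_size, bin_size)
          hist, edges = np.histogram(z, bins=bins)
          peak = np.argmax(hist)
          ground_z = 0.5 * (edges[peak] + edges[peak + 1])
          return np.abs(z - ground_z) < tol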

  17. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  18. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances. PMID:23235454

  19. Detecting and estimating errors in 3D restoration methods using analog models.

    NASA Astrophysics Data System (ADS)

    José Ramón, Ma; Pueyo, Emilio L.; Briz, José Luis

    2015-04-01

    Some geological scenarios may be important for a number of socio-economic reasons, such as water or energy resources, but the available underground information is often limited, scarce and heterogeneous. A truly 3D reconstruction, which is still necessary during the decision-making process, may have important social and economic implications. For this reason, restoration methods were developed. By honoring some geometric or mechanical laws, they help build a reliable image of the subsurface. Pioneer methods were firstly applied in 2D (balanced and restored cross-sections) during the sixties and seventies. Later on, and due to the improvements of computational capabilities, they were extended to 3D. Currently, there are some academic and commercial restoration solutions; Unfold by the Université de Grenoble, Move by Midland Valley Exploration, Kine3D (on gOcad code) by Paradigm, Dynel3D by igeoss-Schlumberger. We have developed our own restoration method, Pmag3Drest (IGME-Universidad de Zaragoza), which is designed to tackle complex geometrical scenarios using paleomagnetic vectors as a pseudo-3D indicator of deformation. However, all these methods have limitations based on the assumptions they need to establish. For this reason, detecting and estimating uncertainty in 3D restoration methods is of key importance to trust the reconstructions. Checking the reliability and the internal consistency of every method, as well as to compare the results among restoration tools, is a critical issue never tackled so far because of the impossibility to test out the results in Nature. To overcome this problem we have developed a technique using analog models. We built complex geometric models inspired in real cases of superposed and/or conical folding at laboratory scale. The stratigraphic volumes were modeled using EVA sheets (ethylene vinyl acetate). Their rheology (tensile and tear strength, elongation, density etc) and thickness can be chosen among a large number of values

  20. Maximum likelihood estimation of parameterized 3-D surfaces using a moving camera

    NASA Technical Reports Server (NTRS)

    Hung, Y.; Cernuschi-Frias, B.; Cooper, D. B.

    1987-01-01

    A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A surface of interest here is modeled as a 3-D function known up to the values of a few parameters. The approach will work with any parameterization. However, in work to date researchers have modeled objects as patches of spheres, cylinders, and planes - primitive objects. These primitive surfaces are special cases of 3-D quadric surfaces. Primitive surface estimation is treated as the general problem of maximum likelihood parameter estimation based on two or more functionally related data sets. In the present case, these data sets constitute a sequence of images taken at different locations and orientations. A simple geometric explanation is given for the estimation algorithm. Though various techniques can be used to implement this nonlinear estimation, researchers discuss the use of gradient descent. Experiments are run and discussed for the case of a sphere of unknown location. These experiments graphically illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. Researchers introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and amount of processing required.

  1. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
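
    The tumor sampling probability P for a single target can be approximated by Monte Carlo simulation of the needle delivery error; tumor_contains is a hypothetical helper standing in for a point-in-surface test against the registered tumor contour, and the isotropic Gaussian error model is an assumption.

      import numpy as np

      def sampling_probability(target, tumor_contains, rms_error=3.5, n=100000, seed=0):
          # Monte Carlo estimate of P(biopsy hits the tumor) for one needle aimed at `target`,
          # with isotropic Gaussian delivery error of the given 3-D RMS magnitude (mm).
          rng = np.random.default_rng(seed)
          sigma = rms_error / np.sqrt(3)        # per-axis standard deviation
          hits = tumor_contains(target + sigma * rng.standard_normal((n, 3)))
          return hits.mean()

      # e.g. a spherical tumor of radius 5 mm centered at the targeted point
      p = sampling_probability(np.zeros(3), lambda q: np.linalg.norm(q, axis=1) < 5.0)
      print(f"estimated sampling probability: {p:.2f}")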

  2. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation.

    PubMed

    Matei, Bogdan; Shan, Ying; Sawhney, Harpreet S; Tan, Yi; Kumar, Rakesh; Huber, Daniel; Hebert, Martial

    2006-07-01

    We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the Locality Sensitive Hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
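
    The indexing step can be illustrated with a minimal random-projection LSH table for real-valued descriptors; this is a generic sketch, not the specific LSH family, bit length or parameters used by the authors.

      import numpy as np
      from collections import defaultdict

      class RandomProjectionLSH:
          # Sign of random projections forms the bucket key for approximate nearest neighbors.
          def __init__(self, dim, n_bits=12, seed=0):
              rng = np.random.default_rng(seed)
              self.planes = rng.standard_normal((n_bits, dim))
              self.buckets = defaultdict(list)

          def _key(self, x):
              return tuple((self.planes @ x > 0).astype(int))

          def index(self, descriptors, labels):
              for d, lab in zip(descriptors, labels):
                  self.buckets[self._key(d)].append(lab)

          def query(self, descriptor):
              # Candidate model features falling into the same hash bucket as the scene descriptor.
              return self.buckets.get(self._key(descriptor), [])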

  3. An improved 3-D Look--Locker imaging method for T(1) parameter estimation.

    PubMed

    Nkongchu, Ken; Santyr, Giles

    2005-09-01

    The 3-D Look-Locker (LL) imaging method has been shown to be a highly efficient and accurate method for the volumetric mapping of the spin lattice relaxation time T(1). However, conventional 3-D LL imaging schemes are typically limited to small tip angle RF pulses (<5 degrees), which limits the signal-to-noise ratio (SNR) and the accuracy of T(1) estimation. In this work, a more generalized form of the 3-D LL imaging method that incorporates an additional and variable delay time between recovery samples is described, which permits the use of larger tip angles (>5 degrees), thereby improving the SNR and the accuracy of the method. In phantom studies, a mean T(1) measurement accuracy of less than 2% (0.2-3.1%) using a tip angle of 10 degrees was obtained for a range of T(1) from approximately 300 to 1,700 ms with a measurement time increase of only 15%. This accuracy compares favorably with the conventional 3-D LL method that provided an accuracy between 2.2% and 7.3% using a 5 degree flip angle.

  4. Robust ego-motion estimation and 3-D model refinement using surface parallax.

    PubMed

    Agrawal, Amit; Chellappa, Rama

    2006-05-01

    We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse depth map using parametric surface parallax models and brightness derivatives extracted from an image pair. Given a coarse depth map acquired by a range-finder or extracted from a digital elevation map (DEM), ego-motion is estimated by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth estimate, motion of the three-dimensional (3-D) points is compensated. We utilize the fact that the resulting surface parallax field is an epipolar field, and knowing its direction from the previous motion estimates, estimate its magnitude and use it to refine the depth map estimate. The parallax magnitude is estimated using a constant parallax model (CPM) which assumes a smooth parallax field and a depth based parallax model (DBPM), which models the parallax magnitude using the given depth map. We obtain confidence measures for determining the accuracy of the estimated depth values which are used to remove regions with potentially incorrect depth estimates for robustly estimating ego-motion in subsequent iterations. Experimental results using both synthetic and real data (both indoor and outdoor sequences) illustrate the effectiveness of the proposed algorithm.

  5. Parametric estimation of 3D tubular structures for diffuse optical tomography

    PubMed Central

    Larusson, Fridrik; Anderson, Pamela G.; Rosenberg, Elizabeth; Kilmer, Misha E.; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L.

    2013-01-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set method (PaLS) our method incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel based reconstruction. PMID:23411913

  6. 3D position estimation using a single coil and two magnetic field sensors.

    PubMed

    Tadayon, P; Staude, G; Felderhoff, T

    2015-01-01

    This paper presents an algorithm which enables the estimation of relative 3D position of a sensor module with two magnetic sensors with respect to a magnetic field source using a single transmitting coil. Starting with the description of the ambiguity problem caused by using a single coil, a system concept comprising two sensors having a fixed spatial relation to each other is introduced which enables the unique determination of the sensors' position in 3D space. For this purpose, an iterative two-step algorithm is presented: In a first step, the data of one sensor is used to limit the number of possible position solutions. In a second step, the spatial relation between the sensors is used to determine the correct sensor position.

  7. Parametric estimation of 3D tubular structures for diffuse optical tomography.

    PubMed

    Larusson, Fridrik; Anderson, Pamela G; Rosenberg, Elizabeth; Kilmer, Misha E; Sassaroli, Angelo; Fantini, Sergio; Miller, Eric L

    2013-02-01

    We explore the use of diffuse optical tomography (DOT) for the recovery of 3D tubular shapes representing vascular structures in breast tissue. Using a parametric level set method (PaLS) our method incorporates the connectedness of vascular structures in breast tissue to reconstruct shape and absorption values from severely limited data sets. The approach is based on a decomposition of the unknown structure into a series of two dimensional slices. Using a simplified physical model that ignores 3D effects of the complete structure, we develop a novel inter-slice regularization strategy to obtain global regularity. We report on simulated and experimental reconstructions using realistic optical contrasts where our method provides a more accurate estimate compared to an unregularized approach and a pixel based reconstruction.

  8. 3D Porosity Estimation of the Nankai Trough Sediments from Core-log-seismic Integration

    NASA Astrophysics Data System (ADS)

    Park, J. O.

    2015-12-01

    The Nankai Trough off southwest Japan is one of the best subduction-zone to study megathrust earthquake fault. Historic, great megathrust earthquakes with a recurrence interval of 100-200 yr have generated strong motion and large tsunamis along the Nankai Trough subduction zone. At the Nankai Trough margin, the Philippine Sea Plate (PSP) is being subducted beneath the Eurasian Plate to the northwest at a convergence rate ~4 cm/yr. The Shikoku Basin, the northern part of the PSP, is estimated to have opened between 25 and 15 Ma by backarc spreading of the Izu-Bonin arc. The >100-km-wide Nankai accretionary wedge, which has developed landward of the trench since the Miocene, mainly consists of offscraped and underplated materials from the trough-fill turbidites and the Shikoku Basin hemipelagic sediments. Particularly, physical properties of the incoming hemipelagic sediments may be critical for seismogenic behavior of the megathrust fault. We have carried out core-log-seismic integration (CLSI) to estimate 3D acoustic impedance and porosity for the incoming sediments in the Nankai Trough. For the CLSI, we used 3D seismic reflection data, P-wave velocity and density data obtained during IODP (Integrated Ocean Drilling Program) Expeditions 322 and 333. We computed acoustic impedance depth profiles for the IODP drilling sites from P-wave velocity and density data. We constructed seismic convolution models with the acoustic impedance profiles and a source wavelet which is extracted from the seismic data, adjusting the seismic models to observed seismic traces with inversion method. As a result, we obtained 3D acoustic impedance volume and then converted it to 3D porosity volume. In general, the 3D porosities show decrease with depth. We found a porosity anomaly zone with alteration of high and low porosities seaward of the trough axis. In this talk, we will show detailed 3D porosity of the incoming sediments, and present implications of the porosity anomaly zone for the

  9. Estimability of thrusting trajectories in 3-D from a single passive sensor with unknown launch point

    NASA Astrophysics Data System (ADS)

    Yuan, Ting; Bar-Shalom, Yaakov; Willett, Peter; Ben-Dov, R.; Pollak, S.

    2013-09-01

    The problem of estimating the state of thrusting/ballistic endoatmospheric projectiles moving in 3-dimensional (3-D) space using 2-dimensional (2-D) measurements from a single passive sensor is investigated. The location of the projectile's launch point (LP) is unavailable, and this could significantly affect the performance of the estimation and the impact point prediction (IPP). The LP altitude is then an unknown target parameter. The estimability is analyzed based on the Fisher Information Matrix (FIM) of the target parameter vector, comprising the initial launch (azimuth and elevation) angles, drag coefficient, thrust and the LP altitude, which determine the trajectory according to a nonlinear motion equation. A full-rank FIM ensures that the target parameters are estimable. The corresponding Cramér-Rao lower bound (CRLB) quantifies the estimation performance of a statistically efficient estimator and can be used for IPP. In view of the inherent nonlinearity of the problem, the maximum likelihood (ML) estimate of the target parameter vector is found by using a mixed (partially grid-based) search approach. For a selected grid in the drag-coefficient-thrust-altitude subspace, the proposed parallelizable approach is shown to have reliable estimation performance and leads to a final IPP of high accuracy.
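
    For Gaussian measurements, the estimability test and the CRLB reduce to a rank check and an inverse of the Fisher information matrix; the helper below assumes the measurement Jacobian with respect to the target parameters is available (for example from numerical differentiation of the motion and measurement models).

      import numpy as np

      def crlb(jacobian, meas_cov):
          # Cramer-Rao lower bound for an unbiased estimator with Gaussian measurement noise:
          # FIM = J^T R^-1 J, CRLB = FIM^-1; a full-rank FIM means the parameters are estimable.
          J = np.atleast_2d(jacobian)                 # d(measurements)/d(parameters)
          Rinv = np.linalg.inv(meas_cov)
          fim = J.T @ Rinv @ J
          if np.linalg.matrix_rank(fim) < fim.shape[0]:
              raise ValueError("parameters not estimable: FIM is rank deficient")
          return np.linalg.inv(fim)                   # lower bound on the parameter covariance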

  10. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
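
    The 3D point reconstruction step for a calibrated stereo rig can be sketched with standard linear (DLT) triangulation; the 3x4 projection matrices are assumed to come from the rig calibration, and robust target identification is omitted.

      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          # Linear triangulation of one 3-D point from two camera projection matrices (3x4)
          # and the target's pixel coordinates uv1, uv2 in each view.
          A = np.vstack([
              uv1[0] * P1[2] - P1[0],
              uv1[1] * P1[2] - P1[1],
              uv2[0] * P2[2] - P2[0],
              uv2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]          # inhomogeneous 3-D coordinates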

  11. Spatiotemporal non-rigid image registration for 3D ultrasound cardiac motion estimation

    NASA Astrophysics Data System (ADS)

    Loeckx, D.; Ector, J.; Maes, F.; D'hooge, J.; Vandermeulen, D.; Voigt, J.-U.; Heidbüchel, H.; Suetens, P.

    2007-03-01

    We present a new method to evaluate 4D (3D + time) cardiac ultrasound data sets by nonrigid spatio-temporal image registration. First, a frame-to-frame registration is performed that yields a dense deformation field. The deformation field is used to calculate local spatiotemporal properties of the myocardium, such as the velocity, strain and strain rate. The field is also used to propagate particular points and surfaces, representing, e.g., the endocardial surface, over the different frames. As such, the 4D paths of these points are obtained, which can be used to calculate the velocity at which the wall moves and the evolution of the local surface area over time. The wall velocity is not angle-dependent as in classical Doppler imaging, since the 4D data allow calculating the true 3D motion. Similarly, all 3D myocardial strain components can be estimated. Combined, they result in local surface area or volume changes, which can be color-coded as a measure of local contractility. A diagnostic method that strongly benefits from this technique is cardiac motion and deformation analysis, which is an important aid in quantifying the mechanical properties of the myocardium.

  12. Twin-beam real-time position estimation of micro-objects in 3D

    NASA Astrophysics Data System (ADS)

    Gurtner, Martin; Zemánek, Jiří

    2016-12-01

    Various optical methods for measuring positions of micro-objects in 3D have been reported in the literature. Nevertheless, the majority of them are not suitable for real-time operation, which is needed, for example, for feedback position control. In this paper, we present a method for real-time estimation of the position of micro-objects in 3D; the method is based on twin-beam illumination and requires only a very simple hardware setup whose essential part is a standard image sensor without any lens. The performance of the proposed method is tested during a micro-manipulation task in which the estimated position served as feedback for the controller. The experiments show that the estimate is accurate to within ∼3 μm in the lateral position and ∼7 μm in the axial distance with a refresh rate of 10 Hz. Although the experiments are done using spherical objects, the presented method could be modified to handle non-spherical objects as well.

  13. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
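
    Ignoring the movement model, a plain 3D kernel density estimate of space use from GPS fixes looks like the short sketch below (synthetic data); the full movement-based estimator conditions on the path between fixes and is far more expensive, which is what motivated the optimization work described above.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      # Synthetic GPS fixes: (x, y, elevation), in meters (made-up values).
      fixes = rng.normal(loc=[0.0, 0.0, 100.0], scale=[50.0, 50.0, 10.0], size=(500, 3))

      kde = gaussian_kde(fixes.T)                    # gaussian_kde expects shape (n_dims, n_points)
      grid = np.array([[0.0, 0.0, 100.0], [120.0, 0.0, 100.0]]).T
      print(kde(grid))                               # utilization density at two query locations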

  14. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0-1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.

  15. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
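
    The conversion from eye-in-head gaze to a world-frame gaze ray is essentially a rigid-body transform of a direction vector; the sketch below assumes a head rotation matrix from the motion-capture system and a unit eye-in-head direction from the eye tracker, with axis conventions chosen only for illustration.

      import numpy as np

      def gaze_in_world(R_head, head_pos, eye_dir_head):
          # Rotate the eye-in-head gaze direction into world coordinates; returns (origin, unit direction).
          d = R_head @ np.asarray(eye_dir_head, float)
          return np.asarray(head_pos, float), d / np.linalg.norm(d)

      # e.g. head yawed 30 degrees about the vertical axis, eye looking straight ahead
      yaw = np.deg2rad(30)
      R = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                    [np.sin(yaw),  np.cos(yaw), 0],
                    [0,            0,           1]])
      origin, direction = gaze_in_world(R, [0, 0, 1.6], [1, 0, 0])
      print(origin, direction)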

  16. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan

    2016-04-01

    Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field resulting in a speed up factor of ~10-60,000, for a typical 3D B-mode image of 250³ and 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68+/-0.40 mm and 0.75+/-0.43 mm respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization respectively. The analytic solution matched the performance of the numeric solution as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits-of-agreement.

  17. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
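    A minimal sketch of the kind of analytic geometry involved (back-projecting the clicked pixel through calibrated intrinsics and intersecting the ray with the floor plane); the intrinsics, camera height, and viewing geometry are assumed values, not taken from the paper:

    ```python
    # Minimal sketch (hypothetical intrinsics and geometry): back-project a
    # touchscreen click into a ray and intersect it with the ground plane,
    # assuming the target lies on the floor and the camera pose is known.
    import numpy as np

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
    cam_height = 0.20                                            # camera height above floor [m]
    u, v = 350.0, 300.0                                          # clicked pixel

    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])           # ray in camera coordinates
    # Assume the camera looks parallel to the floor with y pointing down; the ray
    # hits the plane y = cam_height at scale t.
    t = cam_height / ray_cam[1]
    point_3d = t * ray_cam                                       # [x, y, z] in the camera frame
    print("Estimated target position (m):", point_3d)
    ```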

  18. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

    In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, not requiring precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  19. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  20. 3D measurement and camera attitude estimation method based on trifocal tensor

    NASA Astrophysics Data System (ADS)

    Chen, Shengyi; Liu, Haibo; Yao, Linshen; Yu, Qifeng

    2016-11-01

    To simultaneously perform 3D measurement and camera attitude estimation, an efficient and robust method based on the trifocal tensor is proposed in this paper, which only employs the intrinsic parameters and positions of three cameras. The initial trifocal tensor is obtained by using a heteroscedastic errors-in-variables (HEIV) estimator, and the initial relative poses of the three cameras are acquired by decomposing the tensor. Further, the initial attitude of the cameras is obtained with knowledge of the three cameras' positions. Then the camera attitude and the image positions of the points of interest are optimized according to the constraint of the trifocal tensor with the HEIV method. Finally, the spatial positions of the points are obtained by using the intersection measurement method. Both simulation and real-image experiment results suggest that the proposed method achieves the same precision as the bundle adjustment (BA) method while being more efficient.

  1. On-line 3D motion estimation using low resolution MRI

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2015-08-01

    Image processing such as deformable image registration finds its way into radiotherapy as a means to track non-rigid anatomy. With the advent of magnetic resonance imaging (MRI) guided radiotherapy, intrafraction anatomy snapshots become technically feasible. MRI provides the needed tissue signal for high-fidelity image registration. However, acquisitions, especially in 3D, take a considerable amount of time. Pushing towards real-time adaptive radiotherapy, MRI needs to be accelerated without degrading the quality of information. In this paper, we investigate the impact of image resolution on the quality of motion estimations. Potentially, spatially undersampled images yield comparable motion estimations. At the same time, their acquisition times would reduce greatly due to the sparser sampling. In order to substantiate this hypothesis, exemplary 4D datasets of the abdomen were downsampled gradually. Subsequently, spatiotemporal deformations are extracted consistently using the same motion estimation algorithm for each downsampled dataset. Errors between the original and the respectively downsampled version of the dataset are then evaluated. Compared to ground-truth, results show high similarity of deformations estimated from downsampled image data. Using a dataset with (2.5 mm)³ voxel size, deformation fields could be recovered well up to a downsampling factor of 2, i.e. (5 mm)³. In an MRI therapy-guidance scenario, imaging speed could accordingly increase approximately fourfold, with acceptable loss of estimated motion quality.

  2. Estimation of foot pressure from human footprint depths using 3D scanner

    NASA Astrophysics Data System (ADS)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to study foot morphology is analysis of the footprint shape and footprint depth. Footprints are relatively easy to produce and to measure, and they can be preserved naturally in different soils. In this study, we correlate footprint depth with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the difference between the maximum and minimum z-coordinates, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, which is taken to correspond to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z-coordinates were then sorted from the highest to the lowest value in Microsoft Excel to display footprint depth in different colors. This is only a qualitative study because no foot pressure device was used as a comparator; the resulting maximum pressure is 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsals and hallux.
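    The two quantities above follow directly from the scan and force data; a minimal sketch with made-up numbers (the z-coordinates, GRF, and contact area are hypothetical, not values from the study):

    ```python
    # Minimal sketch of the two quantities described above, with made-up numbers
    # (the z-coordinates, GRF, and contact area are hypothetical, not study data).
    z_max, z_min = 12.4, 4.1            # mm, extremes of the sorted scan z-coordinates
    deepest_depth = z_max - z_min       # deepest footprint point [mm]

    grf = 700.0                         # ground reaction force [N]
    contact_area = 190.0                # foot contact area [cm^2]
    avg_pressure = grf / contact_area   # average pressure [N/cm^2]
    print(deepest_depth, round(avg_pressure, 2))
    ```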

  3. Multiview diffeomorphic registration: application to motion and strain estimation from 3D echocardiography.

    PubMed

    Piella, Gemma; De Craene, Mathieu; Butakoff, Constantine; Grau, Vicente; Yao, Cheng; Nedjati-Gilani, Shahrum; Penney, Graeme P; Frangi, Alejandro F

    2013-04-01

    This paper presents a new registration framework for quantifying myocardial motion and strain from the combination of multiple 3D ultrasound (US) sequences. The originality of our approach lies in the estimation of the transformation directly from the input multiple views rather than from a single view or a reconstructed compounded sequence. This allows us to exploit all spatiotemporal information available in the input views, avoiding occlusions and image fusion errors that could lead to some inconsistencies in the motion quantification result. We propose a multiview diffeomorphic registration strategy that enforces smoothness and consistency in the spatiotemporal domain by modeling the 4D velocity field continuously in space and time. This 4D continuous representation considers 3D US sequences as a whole, therefore allowing it to robustly cope with variations in heart rate that result in a different number of images acquired per cardiac cycle for different views. This contributes to the robustness gained by solving for a single transformation from all input sequences. The similarity metric takes into account the physics of US images and uses a weighting scheme to balance the contribution of the different views. It includes a comparison both between consecutive images and between a reference and each of the following images. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement fields. Registration and strain accuracy were evaluated on synthetic 3D US sequences with known ground truth. Experiments were also conducted on multiview 3D datasets of 8 volunteers and 1 patient treated by cardiac resynchronization therapy. Strain curves obtained from our multiview approach were compared to the single-view case, as well as with other multiview approaches. For healthy cases, the inclusion of several views improved the consistency of the strain curves and reduced the number of segments where a non-physiological strain pattern was observed.

  4. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

    Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of the single cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess the cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings, and it is also applicable to images of cells stained with low-fluorescence markers. The presented approach is a promising new tool to investigate changes in the cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.

  5. 3D viscosity maps for Greenland and effect on GRACE mass balance estimates

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Xu, Zheng

    2016-04-01

    The GRACE satellite mission measures mass loss of the Greenland ice sheet. To correct for glacial isostatic adjustment (GIA), numerical models are used. Although generally found to be a small signal, the full range of possible GIA models has not been explored yet. In particular, low viscosities due to a wet mantle and high temperatures due to the nearby Iceland hotspot could have a significant effect on GIA gravity rates. The goal of this study is to present a range of possible viscosity maps, and investigate the effect on GRACE mass balance estimates. Viscosity is derived using flow laws for olivine. Mantle temperature is computed from global seismology models, based on temperature derivatives for different mantle compositions. An indication of grain sizes is obtained from xenolith findings at a few locations. We also investigate the weakening effect of the presence of melt. To calculate gravity rates, we use a finite-element GIA model with the 3D viscosity maps and the ICE-5G loading history. GRACE mass balances for mascons in Greenland are derived with a least-squares inversion, using separate constraints for the inland and coastal areas in Greenland. Biases in the least-squares inversion are corrected using scale factors estimated from a simulation based on a surface mass balance model (Xu et al., submitted to The Cryosphere). Model results show enhanced gravity rates in the west and south of Greenland with 3D viscosity maps, compared to GIA models with 1D viscosity. The effect on regional mass balance is up to 5 Gt/year. Regional low viscosity can make present-day gravity rates sensitive to ice thickness changes over the last decades. Therefore, an improved ice loading history for these time scales is needed.

  6. UAV based 3D digital surface model to estimate paleolandscape in high mountainous environment

    NASA Astrophysics Data System (ADS)

    Mészáros, János; Árvai, Mátyás; Kohán, Balázs; Deák, Márton; Nagy, Balázs

    2016-04-01

    reliable results and resolution. Based on the sediment layers of the peat bog, together with the generated 3D surface model, the paleoenvironment and the largest paleo-water level can be reconstructed, and the dimensions of the landslide that created the basin of the peat bog can be estimated.

  7. An evaluation of 3D head pose estimation using the Microsoft Kinect v2.

    PubMed

    Darby, John; Sánchez, María B; Butler, Penelope B; Loram, Ian D

    2016-07-01

    The Kinect v2 sensor supports real-time non-invasive 3D head pose estimation. Because the sensor is small, widely available and relatively cheap it has great potential as a tool for groups interested in measuring head posture. In this paper we compare the Kinect's head pose estimates with a marker-based record of ground truth in order to establish its accuracy. During movement of the head and neck alone (with static torso), we find average errors in absolute yaw, pitch and roll angles of 2.0±1.2°, 7.3±3.2° and 2.6±0.7°, and in rotations relative to the rest pose of 1.4±0.5°, 2.1±0.4° and 2.0±0.8°. Larger head rotations where it becomes difficult to see facial features can cause estimation to fail (10.2±6.1% of all poses in our static torso range of motion tests) but we found no significant changes in performance with the participant standing further away from Kinect - additionally enabling full-body pose estimation - or without performing face shape calibration, something which is not always possible for younger or disabled participants. Where facial features remain visible, the sensor has applications in the non-invasive assessment of postural control, e.g. during a programme of physical therapy. In particular, a multi-Kinect setup covering the full range of head (and body) movement would appear to be a promising way forward.

  8. Augmenting ViSP's 3D Model-Based Tracker with RGB-D SLAM for 3D Pose Estimation in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Li-Chee-Ming, J.; Armenakis, C.

    2016-06-01

    This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP in mapping large outdoor environments, and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view, and also because of rapid camera motion. Further, the pose estimate was often biased due to incorrect feature matches. This work proposes a solution to improve ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and reduce the biases present in the pose estimate. This paper explores the integration of ViSP with RGB-D SLAM. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.

  9. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands

    PubMed Central

    Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region’s population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate to what extent they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of its residents, we use two methods to estimate the population with 3D city models in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use it to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151
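    A minimal sketch of the disaggregation direction (volume-weighted areal interpolation); the municipality population and neighbourhood volumes are hypothetical, and the paper's full method additionally involves semantic detail and a statistical modelling approach:

    ```python
    # Minimal sketch (not the authors' model): volume-weighted areal interpolation,
    # distributing a municipality's census population over its neighbourhoods in
    # proportion to each neighbourhood's total residential building volume.
    municipality_population = 120_000
    neighbourhood_volumes = {          # hypothetical residential volumes [m^3]
        "centre": 2.1e6,
        "north": 1.4e6,
        "harbour": 0.7e6,
    }
    total_volume = sum(neighbourhood_volumes.values())
    estimates = {name: municipality_population * vol / total_volume
                 for name, vol in neighbourhood_volumes.items()}
    print(estimates)
    ```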

  10. Estimating 3D movements from 2D observations using a continuous model of helical swimming.

    PubMed

    Gurarie, Eliezer; Grünbaum, Daniel; Nishizaki, Michael T

    2011-06-01

    Helical swimming is among the most common movement behaviors in a wide range of microorganisms, and these movements have direct impacts on distributions, aggregations, encounter rates with prey, and many other fundamental ecological processes. Microscopy and video technology enable the automated acquisition of large amounts of tracking data; however, these data are typically two-dimensional. The difficulty of quantifying the third movement component complicates understanding of the biomechanical causes and ecological consequences of helical swimming. We present a versatile continuous stochastic model-the correlated velocity helical movement (CVHM) model-that characterizes helical swimming with intrinsic randomness and autocorrelation. The model separates an organism's instantaneous velocity into a slowly varying advective component and a perpendicularly oriented rotation, with velocities, magnitude of stochasticity, and autocorrelation scales defined for both components. All but one of the parameters of the 3D model can be estimated directly from a two-dimensional projection of helical movement with no numerical fitting, making it computationally very efficient. As a case study, we estimate swimming parameters from videotaped trajectories of a toxic unicellular alga, Heterosigma akashiwo (Raphidophyceae). The algae were reared from five strains originally collected from locations in the Atlantic and Pacific Oceans, where they have caused Harmful Algal Blooms (HABs). We use the CVHM model to quantify cell-level and strain-level differences in all movement parameters, demonstrating the utility of the model for identifying strains that are difficult to distinguish by other means.

  11. 3D Model Uncertainty in Estimating the Inner Edge of the Habitable Zone

    NASA Astrophysics Data System (ADS)

    Abbot, D. S.; Yang, J.; Wolf, E. T.; Leconte, J.; Merlis, T. M.; Koll, D. D. B.; Goldblatt, C.; Ding, F.; Forget, F.; Toon, B.

    2015-12-01

    Accurate estimates of the width of the habitable zone are critical for determining which exoplanets are potentially habitable and estimating the frequency of Earth-like planets in the galaxy. Recently, the inner edge of the habitable zone has been calculated using 3D atmospheric general circulation models (GCMs) that include the effects of subsaturation and clouds, but different models obtain different results. We study potential sources of differences in five GCMs through a series of comparisons of radiative transfer, clouds, and dynamical cores for a rapidly rotating planet around the Sun and a synchronously rotating planet around an M star. We find that: (1) Cloud parameterization leads to the largest differences among the models; (2) Differences in water vapor longwave radiative transfer are moderate as long as the surface temperature is lower than 360 K; (3) Differences in shortwave absorption influence the atmospheric humidity of the synchronously rotating planet through a positive feedback; (4) Differences in the atmospheric dynamical core have a very small effect on the surface temperature; and (5) Rayleigh scattering leads to very small differences among models. These comparisons suggest that future model development should focus on clouds and water vapor radiative transfer.

  12. Spent Fuel Ratio Estimates from Numerical Models in ALE3D

    SciTech Connect

    Margraf, J. D.; Dunn, T. A.

    2016-08-02

    Potential threat of intentional sabotage of spent nuclear fuel storage facilities is of significant importance to national security. Paramount is the study of focused energy attacks on these materials and the potential release of aerosolized hazardous particulates into the environment. Depleted uranium oxide (DUO2) is often chosen as a surrogate material for testing due to the unreasonable cost and safety demands for conducting full-scale tests with real spent nuclear fuel. To account for differences in mechanical response resulting in changes to particle distribution it is necessary to scale the DUO2 results to get a proper measure for spent fuel. This is accomplished with the spent fuel ratio (SFR), the ratio of respirable aerosol mass released due to identical damage conditions between a spent fuel and a surrogate material like depleted uranium oxide (DUO2). A very limited number of full-scale experiments have been carried out to capture this data, and the oft-questioned validity of the results typically leads to overly-conservative risk estimates. In the present work, the ALE3D hydrocode is used to simulate DUO2 and spent nuclear fuel pellets impacted by metal jets. The results demonstrate an alternative approach to estimate the respirable release fraction of fragmented nuclear fuel.

  13. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, formulating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.

  14. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-31

    The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm for 3D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be obtained relatively fast (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]).

  15. Angle estimation of simultaneous orthogonal rotations from 3D gyroscope measurements.

    PubMed

    Stančin, Sara; Tomažič, Sašo

    2011-01-01

    A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation.
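    A minimal sketch of the SORA idea for a single sampling interval: accumulate the three gyroscope angles into one vector, then treat its direction and magnitude as an axis-angle rotation (here converted to a rotation matrix with Rodrigues' formula); the sample angular velocities and interval are illustrative only:

    ```python
    # Minimal sketch of the SORA concept: the three angles accumulated over a
    # short interval form a vector whose direction and magnitude give the
    # equivalent single rotation axis and angle (Rodrigues' formula below).
    import numpy as np

    omega = np.array([0.8, -0.3, 1.1])   # measured angular velocities [rad/s] (illustrative)
    dt = 0.01                            # sampling interval [s]

    sora = omega * dt                    # simultaneous orthogonal rotations angle vector
    angle = np.linalg.norm(sora)         # equivalent single-rotation angle
    axis = sora / angle                  # equivalent rotation axis

    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # rotation matrix
    print(angle, axis, np.round(R, 4))
    ```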

  16. 3D beam shape estimation based on distributed coaxial cable interferometric sensor

    NASA Astrophysics Data System (ADS)

    Cheng, Baokai; Zhu, Wenge; Liu, Jie; Yuan, Lei; Xiao, Hai

    2017-03-01

    We present a coaxial cable interferometer based distributed sensing system for 3D beam shape estimation. By making a series of reflectors on a coaxial cable, multiple Fabry–Perot cavities are created on it. Two cables are mounted on the beam at proper locations, and a vector network analyzer (VNA) is connected to them to obtain the complex reflection signal, which is used to calculate the strain distribution of the beam in the horizontal and vertical planes. With 6 GHz swept bandwidth on the VNA, the spatial resolution for distributed strain measurement is 0.1 m, and the sensitivity is 3.768 MHz mε⁻¹ at the interferogram dip near 3.3 GHz. Using a displacement-strain transformation, the shape of the beam is reconstructed. With only two modified cables and a VNA, this system is easy to implement and manage. Compared to optical fiber based sensor systems, the coaxial cable sensors have the advantage of large strain capability and robustness, making this system suitable for structural health monitoring applications.
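    A minimal sketch of one possible displacement-strain transformation (converting surface strain to curvature and integrating twice with assumed cantilever boundary conditions); the strain profile, neutral-axis distance, and boundary conditions are assumptions, not the paper's reconstruction:

    ```python
    # Minimal sketch (assumed cantilever boundary conditions, illustrative values):
    # reconstruct beam deflection from distributed surface strain by converting
    # strain to curvature and integrating twice along the beam.
    import numpy as np

    x = np.linspace(0.0, 2.0, 21)            # sensing positions along the beam [m]
    strain = 1e-4 * (1 - x / 2.0)            # hypothetical measured strain distribution
    c = 0.01                                 # distance from neutral axis to cable [m]

    curvature = strain / c                   # kappa(x) = eps(x) / c
    dx = np.diff(x)
    slope = np.concatenate(([0.0], np.cumsum(0.5 * (curvature[1:] + curvature[:-1]) * dx)))
    deflection = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
    print(np.round(deflection, 4))
    ```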

  17. Integration of camera and range sensors for 3D pose estimation in robot visual servoing

    NASA Astrophysics Data System (ADS)

    Hulls, Carol C. W.; Wilson, William J.

    1998-10-01

    Range-vision sensor systems can incorporate range images or single point measurements. Research incorporating point range measurements has focused on the area of map generation for mobile robots. These systems can utilize the fact that the objects sensed tend to be large and planar. The approach presented in this paper fuses information obtained from a point range measurement with visual information to produce estimates of the relative 3D position and orientation of a small, non-planar object with respect to a robot end-effector. The paper describes a real-time sensor fusion system for performing dynamic visual servoing using a camera and a point laser range sensor. The system is based upon the object model reference approach. This approach, which can be used to develop multi-sensor fusion systems that fuse dynamic sensor data from diverse sensors in real-time, uses a description of the object to be sensed in order to develop a combined observation-dependency sensor model. The range-vision sensor system is evaluated in terms of accuracy and robustness. The results show that the use of a range sensor significantly improves the system performance when there is poor or insufficient camera information. The system developed is suitable for visual servoing applications, particularly robot assembly operations.

  18. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  19. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future

  20. Quantitative measurement of eyestrain on 3D stereoscopic display considering the eye foveation model and edge information.

    PubMed

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-05-15

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type of eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with or without the compensation of eye saccades movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.

  1. 2-D-3-D frequency registration using a low-dose radiographic system for knee motion estimation.

    PubMed

    Jerbi, Taha; Burdin, Valerie; Leboucher, Julien; Stindel, Eric; Roux, Christian

    2013-03-01

    In this paper, a new method is presented to study the feasibility of the pose and position estimation of bone structures using a low-dose radiographic system, the EOS system (designed by the EOS Imaging company). This method is based on a 2-D-3-D registration of EOS bi-planar X-ray images with an EOS 3-D reconstruction. This technique is relevant to such an application thanks to the EOS system's ability to simultaneously acquire frontal and sagittal radiographs, and also to produce a 3-D surface reconstruction with its attached software. In this paper, the pose and position of a bone in the radiographs are estimated through the link between 3-D and 2-D data. This relationship is established in the frequency domain using the Fourier central slice theorem. To estimate the pose and position of the bone, we define a distance between the 3-D data and the radiographs, and use an iterative optimization approach to converge toward the best estimation. In this paper, we give the mathematical details of the method. We also show the experimental protocol and the results, which validate our approach.

  2. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
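    A minimal sketch of the nearest-neighbour/SVD core of an ICP-style alignment on synthetic points; it omits the structured-light capture and the geometric-primitive pre-processing described above:

    ```python
    # Minimal sketch of an ICP-style loop (nearest neighbours + SVD rigid fit) on
    # synthetic points; illustrative only, not the paper's full pipeline.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, mu_d - R @ mu_s

    rng = np.random.default_rng(0)
    scene = rng.random((300, 3))           # points captured from the conveyor scene
    template = scene[:200] + 0.05          # crudely displaced template (toy example)

    tree = cKDTree(scene)
    for _ in range(20):                    # ICP iterations
        _, idx = tree.query(template)      # closest scene point for each template point
        R, t = best_rigid_transform(template, scene[idx])
        template = template @ R.T + t
    residual = np.linalg.norm(template - scene[tree.query(template)[1]], axis=1).mean()
    print("Mean residual after ICP:", residual)
    ```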

  3. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of the 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of the 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. We have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of the coordination number in 3-D pore networks. The formulas are then applied to five independent test samples to evaluate their reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the porous spaces with a determination coefficient of about 0.85, which seems acceptable considering the variety of the studied samples.

  4. The estimation of 3D SAR distributions in the human head from mobile phone compliance testing data for epidemiological studies.

    PubMed

    Wake, Kanako; Varsier, Nadège; Watanabe, Soichi; Taki, Masao; Wiart, Joe; Mann, Simon; Deltour, Isabelle; Cardis, Elisabeth

    2009-10-07

    A worldwide epidemiological study called ‘INTERPHONE’ has been conducted to estimate the hypothetical relationship between brain tumors and mobile phone use. In this study, we proposed a method to estimate the 3D distribution of the specific absorption rate (SAR) in the human head due to mobile phone use to provide the exposure gradient for epidemiological studies. 3D SAR distributions due to exposure to an electromagnetic field from mobile phones are estimated from mobile phone compliance testing data for actual devices. The data for compliance testing are measured only on the surface in the region near the device and in a small 3D region around the maximum on the surface in a homogeneous phantom with a specific shape. The method includes an interpolation/extrapolation and a head shape conversion. With the interpolation/extrapolation, SAR distributions in the whole head are estimated from the limited measured data. 3D SAR distributions in the numerical head models, where the tumor location is identified in the epidemiological studies, are obtained from the measured SAR data with the head shape conversion by projection. Validation of the proposed method was performed experimentally and numerically. It was confirmed that the proposed method provided good estimation of the 3D SAR distribution in the head, especially in the brain, which is the tissue of major interest in epidemiological studies. We conclude that it is possible to estimate 3D SAR distributions in a realistic head model from the data obtained by compliance testing measurements to provide a measure of the exposure gradient in specific locations of the brain for the purpose of exposure assessment in epidemiological studies. The proposed method has been used in several studies within INTERPHONE.

  5. Estimation of uncertainties in geological 3D raster layer models as integral part of modelling procedures

    NASA Astrophysics Data System (ADS)

    Maljers, Denise; den Dulk, Maryke; ten Veen, Johan; Hummelman, Jan; Gunnink, Jan; van Gessel, Serge

    2016-04-01

    The Geological Survey of the Netherlands (GSN) develops and maintains subsurface models with regional to national coverage. These models are paramount for petroleum exploration in conventional reservoirs, for understanding the distribution of unconventional reservoirs, for mapping geothermal aquifers, for the potential to store carbon, or for groundwater or aggregate resources. Depending on the application domain, these models differ in depth range, scale, data used, modelling software and modelling technique. Depth uncertainty information is available for the Geological Survey's 3D raster layer models DGM Deep and DGM Shallow. These models cover different depth intervals and are constructed using different data types and different modelling software. Quantifying the uncertainty of geological models that are constructed using multiple data types as well as geological expert knowledge is not straightforward. Examples of geological expert knowledge are trend surfaces displaying the regional thickness trends of basin fills, or steering points that are used to guide the pinching out of geological formations or the modelling of the complex stratal geometries associated with salt domes and salt ridges. This added a priori knowledge, combined with the assumptions underlying kriging (normality and second-order stationarity), makes the kriging standard error an incorrect measure of uncertainty for our geological models. Therefore, the methods described below were developed. For the DGM Deep model a workflow has been developed to assess uncertainty by combining precision (giving information on the reproducibility of the model results) and accuracy (reflecting the proximity of estimates to the true value). This was achieved by centering the resulting standard deviations around well-tied depth surfaces. The standard deviations are subsequently modified by three other possible error sources: data error, structural complexity and velocity model error. The uncertainty workflow

  6. A new estimate of the global 3D geostrophic ocean circulation based on satellite data and in-situ measurements

    NASA Astrophysics Data System (ADS)

    Mulet, S.; Rio, M.-H.; Mignot, A.; Guinehut, S.; Morrow, R.

    2012-11-01

    A new estimate of the Global Ocean 3D geostrophic circulation from the surface down to 1500 m depth (Surcouf3D) has been computed for the 1993-2008 period using an observation-based approach that combines altimetry with temperature and salinity through the thermal wind equation. The validity of this simple approach was tested using a consistent dataset from a model reanalysis. Away from the boundary layers, errors are less than 10% in most places, which indicates that the thermal wind equation is a robust approximation to reconstruct the 3D oceanic circulation in the ocean interior. The Surcouf3D current field was validated in the Atlantic Ocean against in-situ observations. We considered the ANDRO current velocities deduced at 1000 m depth from Argo float displacements as well as velocity measurements at 26.5°N from the RAPID-MOCHA current meter array. The Surcouf3D currents show similar skill to the 3D velocities from the GLORYS Mercator Ocean reanalysis in reproducing the amplitude and variability of the ANDRO currents. In the upper 1000 m, high correlations are also found with in-situ velocities measured by the RAPID-MOCHA current meters. The Surcouf3D current field was then used to compute estimates of the Atlantic Meridional Overturning Circulation (AMOC) through the 25°N section, showing good comparisons with hydrographic sections from 1998 and 2004. Monthly averaged AMOC time series are also consistent with the RAPID-MOCHA array and with the GLORYS Mercator Ocean reanalysis over the April 2004-September 2007 period. Finally, a 15-year-long time series of monthly AMOC estimates was computed. The AMOC strength has a mean value of 16 Sv with an annual (resp. monthly) standard deviation of 2.4 Sv (resp. 7.1 Sv) over the 1993-2008 period. The time series, characterized by a strong variability, shows no significant trend.
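    A minimal sketch of the thermal-wind step that underlies this kind of reconstruction (shearing an altimetry-derived surface velocity downward with a horizontal density gradient); the constants, grid, sign convention, and gradient are illustrative assumptions, not the Surcouf3D processing:

    ```python
    # Minimal sketch of the thermal-wind step: shear an altimetry-derived surface
    # geostrophic velocity downward using a horizontal density gradient.
    # Constants, grid, sign convention, and the gradient are illustrative only.
    import numpy as np

    g, f, rho0 = 9.81, 1.0e-4, 1025.0          # gravity, Coriolis parameter, reference density
    z = np.linspace(0.0, -1500.0, 31)          # depth levels [m], surface down to 1500 m
    drho_dy = np.full_like(z, 1.0e-6)          # hypothetical meridional density gradient [kg m^-4]

    u_surface = 0.30                           # surface zonal geostrophic velocity [m/s]
    du_dz = (g / (f * rho0)) * drho_dy         # one common form: du/dz = (g / (f * rho0)) * drho/dy
    dz = np.diff(z)
    u = u_surface + np.concatenate(([0.0], np.cumsum(0.5 * (du_dz[1:] + du_dz[:-1]) * dz)))
    print(np.round(u[::10], 3))                # zonal velocity at a few depth levels
    ```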

  7. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    The focus of most direction-of-arrival (DOA) estimation problems has been mainly on a two-dimensional (2D) scenario where only the azimuth angle needs to be estimated. However, in various practical situations we have to deal with a three-dimensional scenario, so being able to estimate both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and the practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, to perform field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.

  8. Improved 3D Look-Locker Acquisition Scheme and Angle Map Filtering Procedure for T1 Estimation

    PubMed Central

    Hui, CheukKai; Esparza-Coss, Emilio; Narayana, Ponnada A

    2013-01-01

    The 3D Look-Locker (LL) acquisition is a widely used fast and efficient T1 mapping method. However, the multi-shot approach of 3D LL acquisition can introduce reconstruction artifacts that result in intensity distortions. Traditional 3D LL acquisition generally utilizes a centric encoding scheme that is limited to a single phase encoding direction in k-space. To optimize the k-space segmentation, an elliptical scheme with two phase encoding directions is implemented for the LL acquisition. This elliptical segmentation can reduce the intensity errors in the reconstructed images and improve the final T1 estimation. One of the major sources of error in LL based T1 estimation is the lack of accurate knowledge of the actual flip angle. A multi-parameter curve fitting procedure can account for some of the variability in the flip angle. However, curve fitting can also introduce errors in the estimated flip angle that can result in incorrect T1 values. A filtering procedure based on goodness of fit (GOF) is proposed to reduce the effect of false flip angle estimates. Filtering based on the GOF weighting can remove likely incorrect angles that result in a bad curve fit. Simulation, phantom, and in-vivo studies have demonstrated that these techniques can improve the accuracy of 3D LL T1 estimation. PMID:23784967

  9. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  10. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  11. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
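    For contrast with the movement-based estimators described above, a minimal sketch of a plain (non-movement-based) 3D kernel density estimate over synthetic telemetry fixes:

    ```python
    # Minimal sketch (standard, not movement-based): a plain 3D Gaussian kernel
    # density estimate over synthetic x, y, z telemetry fixes. The movement-based
    # estimators in the paper additionally exploit the animal's trajectory.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    fixes = rng.normal(size=(3, 500))              # 500 fixes; rows are x, y, z
    kde = gaussian_kde(fixes)

    grid = np.mgrid[-3:3:20j, -3:3:20j, -3:3:20j]  # coarse 3D evaluation grid
    density = kde(grid.reshape(3, -1)).reshape(20, 20, 20)
    print(density.shape, float(density.max()))
    ```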

  12. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts. This would affect the electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time consuming and expensive. In this paper, we have proposed a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images, which were acquired by the Pleiades satellites. The 3D depth of vegetation has been measured near power transmission lines using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within the 100 km² area. We have compared the results on Pleiades satellite stereo images using dynamic programming and Graph-Cut algorithms, thereby comparing the satellite imaging sensors and depth-estimation algorithms. Our results show that the Graph-Cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
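    A minimal sketch of the parallax-to-height relation that stereo height estimation ultimately relies on; the ground sample distance, base-to-height ratio, and disparity difference are assumed values, not Pleiades metadata or the paper's algorithms:

    ```python
    # Minimal sketch of the parallax-to-height relation for a rectified stereo pair.
    # GSD, base-to-height ratio, and the disparity difference are assumed values.
    gsd = 0.5                 # ground sample distance [m/pixel]
    base_to_height = 0.3      # stereo base-to-height (B/H) ratio
    disparity_diff_px = 9.0   # disparity difference between tree top and ground [pixels]

    # delta_h ~= delta_parallax_on_ground / (B/H)
    vegetation_height = disparity_diff_px * gsd / base_to_height
    print(round(vegetation_height, 1), "m")   # -> 15.0 m
    ```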

  13. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.

  14. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    Photogrammetry is the process of determining 3-D coordinates through images. The mathematical underpinnings of photogrammetry are rooted in the 1480s with Leonardo da Vinci's study of perspectives [8, p. 1]. However, digital photogrammetry did not emerge

  15. 3D ultrasound estimation of the effective volume for popliteal block at the level of division.

    PubMed

    Sala-Blanch, X; Franco, J; Bergé, R; Marín, R; López, A M; Agustí, M

    2017-03-01

    Local anaesthetic injection between the tibial and common peroneal nerves within the connective tissue sheath results in predictable diffusion and allows for a reduction in the volume needed to achieve a consistent sciatic popliteal block. Using 3D ultrasound volumetric acquisition, we quantified the visible volume in contact with the nerve along a 5 cm segment.

  16. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging

    PubMed Central

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981
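    A toy sketch of the core pliODF idea, projecting a set of unit fiber-orientation vectors onto a low-order spherical-harmonic basis for one super-voxel, is given below; the band limit, the discrete-sum projection and the synthetic orientations are illustrative assumptions rather than the published pipeline.

      # Toy sketch: project unit fiber-orientation vectors onto a low-order
      # spherical-harmonic basis to obtain ODF coefficients for one super-voxel.
      # The band limit and the simple discrete-sum projection are assumptions.
      import numpy as np
      from scipy.special import sph_harm

      def odf_sh_coefficients(vectors, lmax=6):
          """vectors: (N, 3) array of unit fiber orientations."""
          v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
          polar = np.arccos(np.clip(v[:, 2], -1.0, 1.0))        # angle from +z, in [0, pi]
          azimuth = np.arctan2(v[:, 1], v[:, 0]) % (2 * np.pi)  # in [0, 2*pi)
          coeffs = {}
          for l in range(0, lmax + 1, 2):        # even degrees: antipodally symmetric ODF
              for m in range(-l, l + 1):
                  ylm = sph_harm(m, l, azimuth, polar)   # SciPy order: (m, l, azimuth, polar)
                  coeffs[(l, m)] = np.mean(np.conj(ylm)) # discrete projection onto Y_lm
          return coeffs

      # Example: 1000 orientations clustered around the z-axis
      rng = np.random.default_rng(0)
      vecs = rng.normal([0.0, 0.0, 1.0], 0.1, size=(1000, 3))
      print(odf_sh_coefficients(vecs)[(0, 0)])   # ~1/sqrt(4*pi) for any density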

  17. SU-E-J-135: An Investigation of Ultrasound Imaging for 3D Intra-Fraction Prostate Motion Estimation

    SciTech Connect

    O'Shea, T; Harris, E; Bamber, J; Evans, P

    2014-06-01

    Purpose: This study investigates the use of a mechanically swept 3D ultrasound (US) probe to estimate intra-fraction motion of the prostate during radiation therapy using an US phantom and simulated transperineal imaging. Methods: A 3D motion platform was used to translate an US speckle phantom while simulating transperineal US imaging. Motion patterns for five representative types of prostate motion, generated from patient data previously acquired with a Calypso system, were used to move the phantom in 3D. The phantom was also implanted with fiducial markers and subsequently tracked using the CyberKnife kV x-ray system for comparison. A normalised cross correlation block matching algorithm was used to track speckle patterns in 3D and 2D US data. Motion estimation results were compared with known phantom translations. Results: Transperineal 3D US could track superior-inferior (axial) and anterior-posterior (lateral) motion to better than 0.8 mm root-mean-square error (RMSE) at a volume rate of 1.7 Hz (comparable with kV x-ray tracking RMSE). Motion estimation accuracy was poorest along the US probe's swept axis (right-left; RL; RMSE < 4.2 mm), but simple regularisation methods could be used to improve the RMSE (< 2 mm). 2D US was found to be feasible for slowly varying motion (RMSE < 0.5 mm). 3D US could also allow accurate radiation beam gating, with displacement thresholds of 2 mm and 5 mm exhibiting a RMSE of less than 0.5 mm. Conclusion: 2D and 3D US speckle tracking is feasible for prostate motion estimation during radiation delivery. Since RL prostate motion is small in magnitude and frequency, 2D or a hybrid (2D/3D) US imaging approach which also accounts for potential prostate rotations could be used. Regularisation methods could be used to ensure the accuracy of tracking data, making US a feasible approach for gating or tracking in standard or hypo-fractionated prostate treatments.
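    The following minimal NumPy sketch illustrates normalised cross correlation block matching of the kind described above, in 2D for brevity (the same idea extends to 3D volumes); the window and search sizes and the synthetic speckle image are illustrative.

      # Minimal normalized-cross-correlation block matching in 2D (the same idea
      # extends to 3D volumes). Window and search sizes are illustrative only.
      import numpy as np

      def ncc(a, b):
          a = a - a.mean(); b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def track_block(ref, cur, center, half_win=8, search=5):
          """Return the integer (dy, dx) shift of the block around `center`."""
          cy, cx = center
          block = ref[cy - half_win:cy + half_win + 1, cx - half_win:cx + half_win + 1]
          best, best_shift = -2.0, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  cand = cur[cy + dy - half_win:cy + dy + half_win + 1,
                             cx + dx - half_win:cx + dx + half_win + 1]
                  score = ncc(block, cand)
                  if score > best:
                      best, best_shift = score, (dy, dx)
          return best_shift, best

      # Synthetic check: shift a random speckle image by (2, -3) pixels
      rng = np.random.default_rng(1)
      ref = rng.random((64, 64))
      cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
      print(track_block(ref, cur, center=(32, 32)))   # expected ((2, -3), ~1.0)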

  18. Calibration Method for ML Estimation of 3D Interaction Position in a Thick Gamma-Ray Detector

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    High-energy (> 100 keV) photon detectors are often made thick relative to their lateral resolution in order to improve their photon-detection efficiency. To avoid issues of parallax and increased signal variance that result from random interaction depth, we must determine the 3D interaction position in the imaging detector. With this goal in mind, we examine a method of calibrating response statistics of a thick-detector gamma camera to produce a maximum-likelihood estimate of 3D interaction position. We parameterize the mean detector response as a function of 3D position, and we estimate these parameters by maximizing their likelihood given prior knowledge of the pathlength distribution and a complete list of camera signals for an ensemble of gamma-ray interactions. Furthermore, we describe an iterative method for removing multiple-interaction events from our calibration data and for refining our calibration of the mean detector response to single interactions. We demonstrate this calibration method with simulated gamma-camera data. We then show that the resulting calibration is accurate and can be used to produce unbiased estimates of 3D interaction position. PMID:20191099
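    A toy version of such a maximum-likelihood position estimator is sketched below: an invented Gaussian light-spread model stands in for the calibrated mean detector response, sensor outputs are treated as Poisson, and candidate 3D positions are scanned for the highest log-likelihood. All geometry and numbers are illustrative assumptions.

      # Toy ML estimator of 3D interaction position: an invented Gaussian light-spread
      # model stands in for the calibrated mean detector response (MDRF), sensor
      # signals follow Poisson statistics, and the candidate position with the highest
      # log-likelihood is selected. Everything numeric below is illustrative.
      import numpy as np
      from itertools import product

      sensor_xy = np.array(list(product(np.linspace(-20, 20, 8), repeat=2)))  # 64 PMT-like sensors

      def mean_response(pos):
          """Invented MDRF: signal falls off with lateral distance, scaled by depth z."""
          x, y, z = pos
          lateral2 = (sensor_xy[:, 0] - x) ** 2 + (sensor_xy[:, 1] - y) ** 2
          sigma2 = 20.0 + 4.0 * z            # deeper interactions spread light more
          return 500.0 * np.exp(-lateral2 / (2 * sigma2)) / sigma2

      def log_likelihood(signals, pos):
          mu = mean_response(pos)
          return np.sum(signals * np.log(mu) - mu)   # Poisson log-likelihood (no factorial term)

      def ml_estimate(signals, grid):
          return max(grid, key=lambda p: log_likelihood(signals, p))

      # Simulate one event at (3, -5, 7) and estimate it on a coarse grid
      rng = np.random.default_rng(2)
      signals = rng.poisson(mean_response((3.0, -5.0, 7.0)))
      grid = list(product(np.linspace(-10, 10, 21), np.linspace(-10, 10, 21), np.linspace(1, 10, 10)))
      print(ml_estimate(signals, grid))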

  19. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g., tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We use dense 3D laser scanning data from an instrument based on the phase-shift measuring principle, which provides a geometric accuracy better than 3 mm. However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  20. Model-based 3D human shape estimation from silhouettes for virtual fitting

    NASA Astrophysics Data System (ADS)

    Saito, Shunta; Kouchi, Makiko; Mochimaru, Masaaki; Aoki, Yoshimitsu

    2014-03-01

    We propose a model-based 3D human shape reconstruction system that works from two silhouettes. First, we synthesize a deformable body model from a 3D human shape database consisting of a hundred whole-body mesh models. Each mesh model is homologous, so it has the same topology and the same number of vertices as all other models. We perform principal component analysis (PCA) on the database and synthesize an Active Shape Model (ASM). The ASM allows the body type of the model to be changed with a few parameters. Pose changes of our model are achieved by reconstructing the skeleton structure from joints implanted in the model. By applying pose changes after body-type deformation, our model can represent various body types in any pose. We apply the model to the problem of 3D human shape reconstruction from front and side silhouettes. Our approach simply compares the contours of the model and the input silhouettes; we then use only the torso contour of the model to reconstruct the whole shape. We optimize the model parameters by minimizing the difference between corresponding silhouettes using CMA-ES, a stochastic, derivative-free non-linear optimization method.
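    The two computational ingredients described above, a PCA shape model over homologous meshes and derivative-free fitting of its coefficients with CMA-ES, are sketched below; the mesh data, the silhouette renderer and the cost function are stand-ins, so this is a skeleton rather than the authors' system.

      # Skeleton of the two ingredients described above: (1) a PCA "active shape
      # model" built from a database of homologous body meshes and (2) derivative-
      # free fitting of its coefficients with CMA-ES against a silhouette cost.
      # The mesh data, the silhouette renderer and the cost are stand-ins.
      import numpy as np
      from sklearn.decomposition import PCA
      import cma   # pip install cma

      # Database: n_subjects meshes with identical vertex topology, flattened to vectors
      n_subjects, n_vertices = 100, 5000
      rng = np.random.default_rng(3)
      meshes = rng.normal(size=(n_subjects, n_vertices * 3))   # placeholder for real scans

      pca = PCA(n_components=10).fit(meshes)

      def synthesize(coeffs):
          """Reconstruct a body shape (flattened vertices) from PCA coefficients."""
          return pca.mean_ + np.asarray(coeffs) @ pca.components_

      def silhouette_cost(coeffs, front_sil, side_sil):
          """Placeholder: a real implementation would render the synthesized mesh to
          front/side silhouettes and return the contour difference to the inputs."""
          shape = synthesize(coeffs)
          return float(np.sum(shape[:10] ** 2))    # dummy cost so the skeleton runs

      front_sil = side_sil = None                  # stand-ins for the input silhouette images
      es = cma.CMAEvolutionStrategy(x0=np.zeros(10), sigma0=1.0)
      es.optimize(lambda c: silhouette_cost(c, front_sil, side_sil))
      best_coeffs = es.result.xbest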

  1. Estimation of the thermal conductivity of hemp based insulation material from 3D tomographic images

    NASA Astrophysics Data System (ADS)

    El-Sawalhi, R.; Lux, J.; Salagnac, P.

    2016-08-01

    In this work, we are interested in the structural and thermal characterization of natural fiber insulation materials. The thermal performance of these materials depends on the arrangement of fibers, which is a consequence of the manufacturing process. In order to optimize these materials, thermal conductivity models can be used to correlate relevant structural parameters with the effective thermal conductivity. However, only a few models are able to take into account the anisotropy of such materials related to the fiber orientation, and these models still need realistic input data (fiber orientation distribution, porosity, etc.). The structural characteristics are here directly measured on a 3D tomographic image using advanced image analysis techniques. Critical structural parameters like porosity, pore and fiber size distributions, as well as the local fiber orientation distribution, are measured. The results of the tested conductivity models are then compared with the conductivity tensor obtained by numerical simulation on the discretized 3D microstructure, as well as with available experimental measurements. We show that 1D analytical models are generally not suitable for assessing the thermal conductivity of such anisotropic media. Yet, a few anisotropic models can still be of interest to relate some structural parameters, like the fiber orientation distribution, to the thermal properties. Finally, our results emphasize that numerical simulation on a realistic 3D microstructure is a very interesting alternative to experimental measurements.

  2. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing.

  3. Atmospheric Nitrogen Trifluoride: Optimized emission estimates using 2-D and 3-D Chemical Transport Models from 1973-2008

    NASA Astrophysics Data System (ADS)

    Ivy, D. J.; Rigby, M. L.; Prinn, R. G.; Muhle, J.; Weiss, R. F.

    2009-12-01

    We present optimized annual global emissions from 1973-2008 of nitrogen trifluoride (NF3), a powerful greenhouse gas which is not currently regulated by the Kyoto Protocol. In the past few decades, NF3 production has dramatically increased due to its usage in the semiconductor industry. Emissions were estimated through the 'pulse-method' discrete Kalman filter using both a simple, flexible 2-D 12-box model used in the Advanced Global Atmospheric Gases Experiment (AGAGE) network and the Model for Ozone and Related Tracers (MOZART v4.5), a full 3-D atmospheric chemistry model. No official audited reports of industrial NF3 emissions are available, and with limited information on production, a priori emissions were estimated using both a bottom-up and top-down approach with two different spatial patterns based on semiconductor perfluorocarbon (PFC) emissions from the Emission Database for Global Atmospheric Research (EDGAR v3.2) and Semiconductor Industry Association sales information. Both spatial patterns used in the models gave consistent results, showing the robustness of the estimated global emissions. Differences between estimates using the 2-D and 3-D models can be attributed to transport rates and resolution differences. Additionally, new NF3 industry production and market information is presented. Emission estimates from both the 2-D and 3-D models suggest that either the assumed industry release rate of NF3 or industry production information is still underestimated.
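    A toy discrete Kalman filter in the spirit of such an inversion is sketched below: the state is an annual emission rate modelled as a random walk, and the observation is an atmospheric mole fraction related to emissions through an assumed linear one-box sensitivity; all numbers are illustrative, and the AGAGE 12-box and MOZART transport models are not represented.

      # Toy discrete Kalman filter for estimating an annual emission rate from
      # atmospheric mole-fraction observations, using an assumed linear one-box
      # sensitivity. All numbers and the trivial box model are illustrative.
      import numpy as np

      H = 0.8        # assumed sensitivity: ppt of mole fraction per Gg/yr emitted
      Q = 0.5 ** 2   # process noise variance on the emission random walk, (Gg/yr)^2
      R = 0.2 ** 2   # observation noise variance, ppt^2

      def kalman_emissions(obs):
          x, P = 0.0, 10.0 ** 2        # initial emission guess and its variance
          estimates = []
          for y in obs:
              # Predict: emissions modeled as a random walk from year to year
              P = P + Q
              # Update with this year's observed mole fraction
              K = P * H / (H * H * P + R)
              x = x + K * (y - H * x)
              P = (1 - K * H) * P
              estimates.append(x)
          return np.array(estimates)

      # Synthetic test: emissions ramp from 1 to 5 Gg/yr over 36 years
      rng = np.random.default_rng(4)
      true_emis = np.linspace(1.0, 5.0, 36)
      obs = H * true_emis + rng.normal(0, 0.2, size=36)
      print(kalman_emissions(obs).round(2))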

  4. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

    This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
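    As a minimal illustration of the bottom-up ICP ingredient mentioned above (not the MOCCD/ICP fusion itself), the sketch below aligns two 3D point clouds with nearest-neighbour matching and an SVD-based rigid fit; the synthetic point sets are placeholders.

      # Minimal point-to-point ICP sketch (nearest neighbours + SVD rigid fit).
      # It illustrates only the generic ICP idea, not the record's MOCCD/ICP fusion.
      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst."""
          cs, cd = src.mean(0), dst.mean(0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(source, target, iters=30):
          tree = cKDTree(target)
          src = source.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              _, idx = tree.query(src)                 # closest target point per source point
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total

      # Synthetic check: recover a known rotation and translation
      rng = np.random.default_rng(5)
      target = rng.random((500, 3))
      angle = 0.2
      Rz = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
      source = (target - [0.1, 0.0, 0.05]) @ Rz      # apply the inverse of (Rz, t) to the target
      R_est, t_est = icp(source, target)
      print(np.round(R_est, 3), np.round(t_est, 3))  # expect R_est ~ Rz, t_est ~ [0.1, 0, 0.05]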

  5. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    PubMed Central

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between estimated and measured values was small: the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D reconstruction of leaf and stem was highest for the 28-mm lens at the first and third camera positions, which also yielded the largest number of reconstructed fine-scale surface details for leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system captures high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  6. Transient Hydraulic Tomography in the Field: 3-D K Estimation and Validation in a Highly Heterogeneous Unconfined Aquifer

    NASA Astrophysics Data System (ADS)

    Hochstetler, D. L.; Barrash, W.; Kitanidis, P. K.

    2014-12-01

    Characterizing subsurface hydraulic properties is essential for predicting flow and transport, and thus for making informed decisions, such as the selection and execution of a groundwater remediation strategy; however, obtaining accurate estimates at the necessary resolution with quantified uncertainty is an ongoing challenge. For over a decade, the development of hydraulic tomography (HT) - i.e., conducting a series of discrete interval hydraulic tests, observing distributed pressure signals, and analyzing the data through inversion of all tests together - has shown promise as a subsurface imaging method. Numerical and laboratory 3-D HT studies have enhanced and validated such methodologies, but there have been far fewer 3-D field characterization studies. We use 3-D transient hydraulic tomography (3-D THT) to characterize a highly heterogeneous unconfined alluvial aquifer at an active industrial site near Assemini, Italy. With 26 pumping tests conducted from 13 isolated vertical locations, and pressure responses measured at 63 spatial locations through five clusters of continuous multichannel tubing, we recorded over 800 drawdown curves during the field testing. Selected measurements from each curve were inverted in order to obtain an estimate of the distributed hydraulic conductivity field K(x) as well as uniform ("effective") values of specific storage Ss and specific yield Sy. The estimated K values varied across seven orders of magnitude, suggesting that this is one of the most heterogeneous sites at which HT has ever been conducted. Furthermore, these results are validated using drawdown observations from seven independent tests with pumping performed at multiple locations other than the main pumping well. The validation results are encouraging, especially given the uncertain nature of the problem. Overall, this research demonstrates the ability of 3-D THT to provide high-resolution estimates of structure and local K at a non-research site at the scale of a contaminant

  7. Quantitative, nondestructive estimates of coarse root biomass in a temperate pine forest using 3-D ground-penetrating radar (GPR)

    NASA Astrophysics Data System (ADS)

    Molon, Michelle; Boyce, Joseph I.; Arain, M. Altaf

    2017-01-01

    Coarse root biomass was estimated in a temperate pine forest using high-resolution (1 GHz) 3-D ground-penetrating radar (GPR). GPR survey grids were acquired across a 400 m2 area with varying line spacing (12.5 and 25 cm). Root volume and biomass were estimated directly from the 3-D radar volume by using isometric surfaces calculated with the marching cubes algorithm. Empirical relations between GPR reflection amplitude and root diameter were determined for 14 root segments (0.1-10 cm diameter) reburied in a 6 m2 experimental test plot and surveyed at 5-25 cm line spacing under dry and wet soil conditions. Reburied roots >1.4 cm diameter were detectable as continuous root structures with 5 cm line separation. Reflection amplitudes were strongly controlled by soil moisture and decreased by 40% with a twofold increase in soil moisture. GPR line intervals of 12.5 and 25 cm produced discontinuous mapping of roots, and GPR coarse root biomass estimates (0.92 kgC m-2) were lower than those obtained previously with a site-specific allometric equation due to nondetection of vertical roots and roots <1.5 cm diameter. The results show that coarse root volume and biomass can be estimated directly from interpolated 3-D GPR volumes by using a marching cubes approach, but mapping of roots as continuous structures requires high inline sampling and line density (<5 cm). The results demonstrate that 3-D GPR is a viable approach for estimating belowground carbon and for mapping tree root architecture. This methodology can be applied more broadly in other disciplines (e.g., archaeology and civil engineering) for imaging buried structures.
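    The marching-cubes step can be sketched as follows: extract an isosurface from a thresholded 3-D amplitude volume with scikit-image and integrate the enclosed volume, which an assumed root density then converts to biomass; the synthetic sphere, voxel size and density are placeholders.

      # Sketch of the marching-cubes step: extract an isosurface from a thresholded
      # 3-D amplitude volume and integrate the enclosed volume, then convert it to
      # biomass with an assumed root density. The sphere, voxel size and density
      # are placeholders.
      import numpy as np
      from skimage import measure

      # Placeholder "GPR amplitude" volume: a sphere of radius 10 voxels
      z, y, x = np.mgrid[-16:16, -16:16, -16:16]
      volume = (np.sqrt(x**2 + y**2 + z**2) < 10).astype(float)

      verts, faces, _, _ = measure.marching_cubes(volume, level=0.5)

      def mesh_volume(verts, faces):
          """Enclosed volume via the divergence theorem (sum of signed tetrahedra)."""
          v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
          return abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

      voxel_volume_m3 = 0.05 ** 3            # assumed 5 cm voxels
      root_volume_m3 = mesh_volume(verts, faces) * voxel_volume_m3
      root_density_kg_m3 = 500.0             # assumed root wood density
      print("root volume (m^3):", root_volume_m3, "biomass (kg):", root_volume_m3 * root_density_kg_m3)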

  8. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
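    A hedged sketch of the underlying idea, recovering a projectile's 3D launch position and velocity from its 2D pixel track in a single calibrated camera by nonlinear least squares on the reprojection error, is given below; the pinhole intrinsics, gravity direction and simulated track are assumptions, not the paper's formulation.

      # Sketch: recover the 3D launch position and velocity of a projectile from its
      # 2D pixel track in one calibrated, stationary camera via nonlinear least squares
      # on the reprojection error. Intrinsics, gravity direction and the simulated
      # track are placeholder assumptions.
      import numpy as np
      from scipy.optimize import least_squares

      fx = fy = 800.0; cx = cy = 320.0; g = 9.81   # assumed pinhole intrinsics, gravity

      def project(points_3d):
          """Pinhole camera at the origin looking down +Z."""
          X, Y, Z = points_3d.T
          return np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)

      def trajectory(params, t):
          p0, v0 = params[:3], params[3:]
          pts = p0 + np.outer(t, v0)
          pts[:, 1] += 0.5 * g * t ** 2            # gravity along +Y (image "down")
          return pts

      def residuals(params, t, observed_px):
          return (project(trajectory(params, t)) - observed_px).ravel()

      # Simulate a noisy track and recover the state from pixels alone
      t = np.linspace(0, 1.0, 30)
      true_params = np.array([-2.0, -1.0, 20.0, 4.0, -6.0, 5.0])   # p0 (m), v0 (m/s)
      observed = project(trajectory(true_params, t)) + np.random.default_rng(6).normal(0, 0.5, (30, 2))

      fit = least_squares(residuals, x0=np.array([0, 0, 10, 1, 1, 1.0]), args=(t, observed))
      print(np.round(fit.x, 2))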

  9. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing.

  10. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
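    The convex-hulling step can be illustrated with a few lines of SciPy: given the 3D points of one manually separated segment, compute the hull volume and convert it to a mass with an assumed uniform tissue density; the density value and synthetic points are placeholders.

      # Minimal sketch of the convex-hull step: estimate the volume, mass and a rough
      # centre for one body segment from its 3D points. The uniform tissue density
      # and the synthetic points are assumed placeholders.
      import numpy as np
      from scipy.spatial import ConvexHull

      def segment_parameters(points, density_kg_m3=1050.0):
          """points: (N, 3) array in metres for one body segment."""
          hull = ConvexHull(points)
          volume = hull.volume                           # m^3
          mass = density_kg_m3 * volume                  # kg, assuming uniform density
          centroid = points[hull.vertices].mean(axis=0)  # rough centre estimate
          return volume, mass, centroid

      # Example with a synthetic 30 cm x 10 cm x 10 cm "segment"
      rng = np.random.default_rng(7)
      pts = rng.random((2000, 3)) * [0.3, 0.1, 0.1]
      print(segment_parameters(pts))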

  11. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2004-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts for characterizing disease from ultrasonic backscatter measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing randomly located spherical scatterers: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. 3DZM results were compared with estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.]

  12. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. At kidney-liver boundaries and in the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  13. Intensity of joints associated with an extensional fault zone: an estimation by Poly3D.

    NASA Astrophysics Data System (ADS)

    Minelli, G.

    2003-04-01

    The presence and frequency of joints in sedimentary rocks strongly affect the mechanical and fluid-flow properties of the host layers. Joint intensity is evaluated by spacing, S, the distance between neighbouring fractures, or by density, D = 1/S. Joint spacing in layered rocks is often linearly related to layer thickness T, with typical values of 0.5 T < S < 2.0 T. On the other hand, some field cases display very tight joints with S << T and nonlinear relations between spacing and thickness; most of these cases involve joint systems “genetically” related to a nearby fault zone. The present study uses the code Poly3D (Rock Fracture Project at Stanford) to numerically explore the effect of the stress distribution in the neighbourhood of an extensional fault zone on the mapped intensity of joints, both in its hanging wall and in its footwall (Willemse, 1997; Martel and Boger, 1998). Poly3D is a C-language computer program that calculates the displacements, strains and stresses induced in an elastic whole- or half-space by planar, polygonal-shaped elements of displacement discontinuity (Willemse and Pollard, 2000). Dislocations of varying shapes may be combined to yield complex three-dimensional surfaces well suited for modeling fractures, faults, and cavities in the earth's crust. The algebraic expressions for the elastic fields around a polygonal element are derived by superposing the solution for an angular dislocation in an elastic half-space. The field data were collected in a quarry located close to the town of Noci (Puglia) using the scan-line methodology. In this quarry, a platform limestone with regular bedding and very few shale or marly intercalations, displaced by a normal fault, is exposed. The comparison between the mapped joint intensity and the calculated stress around the fault shows good agreement. Nevertheless the intrinsic limitations (isotropic medium and elastic behaviour

  14. Effect of GIA models with 3D composite mantle viscosity on GRACE mass balance estimates for Antarctica

    NASA Astrophysics Data System (ADS)

    van der Wal, Wouter; Whitehouse, Pippa L.; Schrama, Ernst J. O.

    2015-03-01

    Seismic data indicate that there are large viscosity variations in the mantle beneath Antarctica. Consideration of such variations would affect predictions of models of Glacial Isostatic Adjustment (GIA), which are used to correct satellite measurements of ice mass change. However, most GIA models used for that purpose have assumed the mantle to be uniformly stratified in terms of viscosity. The goal of this study is to estimate the effect of lateral variations in viscosity on Antarctic mass balance estimates derived from the Gravity Recovery and Climate Experiment (GRACE) data. To this end, recently-developed global GIA models based on lateral variations in mantle temperature are tuned to fit constraints in the northern hemisphere and then compared to GPS-derived uplift rates in Antarctica. We find that these models can provide a better fit to GPS uplift rates in Antarctica than existing GIA models with a radially-varying (1D) rheology. When 3D viscosity models in combination with specific ice loading histories are used to correct GRACE measurements, mass loss in Antarctica is smaller than previously found for the same ice loading histories and their preferred 1D viscosity profiles. The variation in mass balance estimates arising from using different plausible realizations of 3D viscosity amounts to 20 Gt/yr for the ICE-5G ice model and 16 Gt/yr for the W12a ice model; these values are larger than the GRACE measurement error, but smaller than the variation arising from unknown ice history. While there exist 1D Earth models that can reproduce the total mass balance estimates derived using 3D Earth models, the spatial pattern of gravity rates can be significantly affected by 3D viscosity in a way that cannot be reproduced by GIA models with 1D viscosity. As an example, models with 1D viscosity always predict maximum gravity rates in the Ross Sea for the ICE-5G ice model, however, for one of the three preferred 3D models the maximum (for the same ice model) is found

  15. Application of optical 3D measurement on thin film buckling to estimate interfacial toughness

    NASA Astrophysics Data System (ADS)

    Jia, H. K.; Wang, S. B.; Li, L. A.; Wang, Z. Y.; Goudeau, P.

    2014-03-01

    The shape-from-focus (SFF) method has been widely studied as a passive depth recovery and 3D reconstruction method for digital images. An important step in SFF is the calculation of the focus level for different points in an image by using a focus measure. In this work, an image entropy-based focus measure is introduced into the SFF method to measure the 3D buckling morphology of an aluminum film on a polymethylmethacrylate (PMMA) substrate at a micro scale. Spontaneous film wrinkles and telephone-cord wrinkles are investigated after the deposition of a 300 nm thick aluminum film onto the PMMA substrate. Spontaneous buckling is driven by the highly compressive stress generated in the Al film during the deposition process. The interfacial toughness between metal films and substrates is an important parameter for the reliability of the film/substrate system. The height profiles of different sections across the telephone-cord wrinkle can be considered a straight-sided model with uniform width and height or a pinned circular model that has a delamination region characterized by a sequence of connected sectors. Furthermore, the telephone-cord geometry of the thin film can be used to calculate interfacial toughness. The instability of the finite element model is introduced to fit the buckling morphology obtained by SFF. The interfacial toughness is determined to be 0.203 J/m2 at a 70.4° phase angle from the straight-sided model and 0.105 J/m2 at 76.9° from the pinned circular model.
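    A minimal shape-from-focus sketch is shown below: a per-pixel focus measure (local variance here, standing in for the entropy-based measure used in the record) is computed for every image of a focal stack, and the argmax over the stack gives the depth index; the synthetic stack and step size are placeholders.

      # Minimal shape-from-focus sketch: compute a per-pixel focus measure (local
      # variance here, standing in for the entropy-based measure in the record) for
      # each image of a focal stack and take the argmax over the stack as the depth.
      # The synthetic stack and step size are placeholders.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def focus_measure(img, win=9):
          """Local variance: E[x^2] - E[x]^2 over a win x win window."""
          mean = uniform_filter(img, win)
          mean_sq = uniform_filter(img * img, win)
          return mean_sq - mean * mean

      def depth_from_focus(stack, step_um=1.0):
          """stack: (n_slices, H, W) focal stack; returns a depth map in micrometres."""
          measures = np.stack([focus_measure(img) for img in stack])
          return np.argmax(measures, axis=0) * step_um

      # Placeholder focal stack: low-contrast noise everywhere, strong texture on slice 5
      rng = np.random.default_rng(8)
      stack = rng.random((10, 64, 64)) * 0.05
      stack[5] += rng.random((64, 64))            # slice 5 carries most texture
      print(np.bincount(depth_from_focus(stack).astype(int).ravel()))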

  16. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299

  17. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  18. CO2 mass estimation visible in time-lapse 3D seismic data from a saline aquifer and uncertainties

    NASA Astrophysics Data System (ADS)

    Ivanova, A.; Lueth, S.; Bergmann, P.; Ivandic, M.

    2014-12-01

    At Ketzin (Germany), the first European onshore pilot-scale project for geological storage of CO2 was initiated in 2004. This project is multidisciplinary and includes 3D time-lapse seismic monitoring. A 3D pre-injection seismic survey was acquired in 2005. CO2 injection into a sandstone saline aquifer then started at a depth of 650 m in 2008. A first 3D seismic repeat survey was acquired in 2009 after 22 kilotons had been injected; the imaged CO2 signature was concentrated around the injection well (200-300 m). A second 3D seismic repeat survey was acquired in 2012 after 61 kilotons had been injected; the imaged CO2 signature extended further (100-200 m). The injection was terminated in 2013, with a total of 67 kilotons of CO2 injected. Time-lapse seismic processing, petrophysical data and geophysical logging of CO2 saturation have allowed an estimate of the amount of CO2 visible in the seismic data. This estimate depends on the choice of a number of parameters and contains several uncertainties, the main ones being the following. The constant reservoir porosity and CO2 density used for the estimation are probably an over-simplification, since the reservoir is quite heterogeneous. Velocity dispersion may be present in the Ketzin reservoir rocks, but we do not consider it large enough to affect the estimated mass of CO2. There are only a small number of direct petrophysical observations, providing a weak statistical basis for the determination of seismic velocities as a function of CO2 saturation, and we have assumed that the petrophysical experiments were carried out on samples that are representative of the average properties of the whole reservoir. Finally, most of the time-delay values within the amplitude anomaly in both 3D seismic repeat surveys are near the noise level of 1-2 ms; however, a change of 1 ms in the time delay significantly affects the mass estimate, so the choice of the time-delay cutoff is crucial. In spite

  19. Dynamics of errors in 3D motion estimation and implications for strain-tensor imaging in acoustic elastography

    NASA Astrophysics Data System (ADS)

    Bilgen, Mehmet

    2000-06-01

    For the purpose of quantifying the noise in acoustic elastography, a displacement covariance matrix is derived analytically for the cross-correlation based 3D motion estimator. Static deformation induced in tissue from an external mechanical source is represented by a second-order strain tensor. A generalized 3D model is introduced for the ultrasonic echo signals. The components of the covariance matrix are related to the variances of the displacement errors and the errors made in estimating the elements of the strain tensor. The results are combined to investigate the dependences of these errors on the experimental and signal-processing parameters as well as to determine the effects of one strain component on the estimation of the other. The expressions are evaluated for special cases of axial strain estimation in the presence of axial, axial-shear and lateral-shear type deformations in 2D. The signals are shown to decorrelate with any of these deformations, with strengths depending on the reorganization and interaction of tissue scatterers with the ultrasonic point spread function following the deformation. Conditions that favour the improvements in motion estimation performance are discussed, and advantages gained by signal companding and pulse compression are illustrated.

  20. Estimating the composition of hydrates from a 3D seismic dataset near Penghu Canyon on Chinese passive margin offshore Taiwan

    NASA Astrophysics Data System (ADS)

    Chi, Wu-Cheng

    2016-04-01

    A bottom-simulating reflector (BSR), representing the base of the gas hydrate stability zone, can be used to estimate geothermal gradients under the seafloor. However, to derive temperature estimates at the BSR, the correct hydrate composition is needed to calculate the phase boundary. Here we applied the method of Minshull and Keddie to constrain the hydrate composition and the pore fluid salinity. We used a 3D seismic dataset offshore SW Taiwan to test the method. Unlike previous studies, we have considered 3D topographic effects using finite element modelling as well as depth-dependent thermal conductivity. Using a pore water salinity of 2% at the BSR depth, as found in nearby core samples, we successfully used a 99% methane and 1% ethane gas hydrate phase boundary to derive a sub-bottom depth versus temperature plot which is consistent with the seafloor temperature from in-situ measurements. The results are also consistent with geochemical analyses of the pore fluids. The derived regional geothermal gradient is 40.1 °C/km, which is similar to the 40 °C/km used in the 3D finite element modelling in this study. This study is among the first documented successful uses of Minshull and Keddie's method to constrain seafloor gas hydrate composition.

  1. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    SciTech Connect

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-06-01

    Purpose: 3D motion models derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT images were reconstructed from the anatomy as seen at treatment time. In case 1), the tumor localization errors and intensity differences for the ten cases were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2), the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT

  2. Sparse array 3-D ISAR imaging based on maximum likelihood estimation and CLEAN technique.

    PubMed

    Ma, Changzheng; Yeo, Tat Soon; Tan, Chee Seng; Tan, Hwee Siang

    2010-08-01

    A large 2-D sparse array provides high-angular-resolution microwave images, but artifacts induced by the high sidelobes of the beam pattern limit its dynamic range. The CLEAN technique has been used in the literature to extract strong scatterers for use in subsequent signal cancelation (artifact removal). However, the performance of DFT-based parameter estimation in the CLEAN algorithm is known to be poor for signal amplitudes, and this affects the signal cancelation. In this paper, the DFT is used only to provide initial estimates, and a maximum-likelihood parameter estimation method with a steepest-descent implementation is then used to improve the precision of the calculated scatterer positions and amplitudes. Time-domain information is also used to reduce the sidelobe levels. As a result, clear, artifact-free images can be obtained. The effects of multiple reflections and of rotation speed estimation error are also discussed. The proposed method has been verified using numerical simulations and has been shown to be effective.
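    The image-domain CLEAN loop itself can be sketched in a few lines: iteratively find the strongest residual pixel, record it as a scatterer, and subtract a scaled, shifted copy of the point spread function; the synthetic dirty image and PSF below are placeholders, and the maximum-likelihood amplitude refinement of the record is omitted.

      # Minimal image-domain CLEAN loop: find the strongest residual pixel, record a
      # point scatterer, subtract a scaled, shifted PSF, and repeat. The dirty image
      # and PSF are synthetic placeholders; the ML amplitude refinement is omitted.
      import numpy as np

      def clean(dirty, psf, gain=0.5, n_iter=200, threshold=1e-3):
          residual = dirty.copy()
          components = []                              # (row, col, amplitude)
          cy, cx = np.array(psf.shape) // 2
          for _ in range(n_iter):
              r, c = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
              amp = residual[r, c]
              if abs(amp) < threshold:
                  break
              components.append((r, c, gain * amp))
              # Subtract the PSF centred on (r, c), clipped to the image borders
              r0, c0 = max(0, r - cy), max(0, c - cx)
              r1 = min(residual.shape[0], r - cy + psf.shape[0])
              c1 = min(residual.shape[1], c - cx + psf.shape[1])
              pr0, pc0 = r0 - (r - cy), c0 - (c - cx)
              residual[r0:r1, c0:c1] -= gain * amp * psf[pr0:pr0 + (r1 - r0), pc0:pc0 + (c1 - c0)]
          return components, residual

      # Synthetic test: two scatterers observed through a sinc-like PSF
      x = np.linspace(-8, 8, 33)
      psf = np.outer(np.sinc(x), np.sinc(x))
      dirty = np.zeros((64, 64))
      for (r, c, a) in [(20, 30, 1.0), (40, 45, 0.6)]:
          dirty[r - 16:r + 17, c - 16:c + 17] += a * psf
      comps, res = clean(dirty, psf)
      print(comps[:4])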

  3. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres, without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of the measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). The actual 2D size distribution of the radii of submicron-sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average mean 2D/3D ratio is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (the reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
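    In practice the recommended correction is a single multiplication, as in the snippet below; the example mean radius is arbitrary.

      # The stereological correction from the record in one step: the mean of the
      # measured 2D section radii underestimates the true mean 3D sphere radius by a
      # factor of pi/4, so multiply by 4/pi (about 1.273) to recover it.
      # The example measured value is arbitrary.
      import numpy as np

      mean_2d_radius_um = 0.42                       # example measured mean of circular sections
      mean_3d_radius_um = mean_2d_radius_um * 4 / np.pi
      print(round(mean_3d_radius_um, 3))             # 0.535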

  4. Suspect Height Estimation Using the Faro Focus3D Laser Scanner.

    PubMed

    Johnson, Monique; Liscio, Eugene

    2015-11-01

    At present, very little research has been devoted to investigating the ability of laser scanning technology to accurately measure height from surveillance video. The goal of this study was to test the accuracy of one particular laser scanner for estimating suspect height from video footage. The known heights of 10 individuals were measured using an anthropometer. The individuals were then recorded on video walking along a predetermined path in a simulated crime scene environment, both with and without headwear. The differences between the known heights and the estimated heights obtained from the laser scanner software were compared using a one-way t-test. The height estimates obtained from the software were not significantly different from the known heights whether individuals were wearing headwear (p = 0.186) or not (p = 0.707). Thus, laser scanning is one technique that could potentially be used by investigators to determine suspect height from video footage.

  5. Growth trajectories of the human fetal brain tissues estimated from 3D reconstructed in utero MRI

    PubMed Central

    Scott, Julia A.; Habas, Piotr A.; Kim, Kio; Rajagopalan, Vidya; Hamzelou, Kia S.; Corbett-Detig, James M.; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin

    2012-01-01

    In the latter half of gestation (20 to 40 gestational weeks), human brain growth accelerates in conjunction with cortical folding and the deceleration of ventricular zone progenitor cell proliferation. These processes are reflected in changes in the volume of respective fetal tissue zones. Thus far, growth trajectories of the fetal tissue zones have been extracted primarily from 2D measurements on histological sections and magnetic resonance imaging (MRI). In this study, the volumes of major fetal zones—cortical plate (CP), subplate and intermediate zone (SP+IZ), germinal matrix (GMAT), deep gray nuclei (DG), and ventricles (VENT)—are calculated from automatic segmentation of motion-corrected, 3D reconstructed MRI. We analyzed 48 T2-weighted MRI scans from 39 normally developing fetuses in utero between 20.57 and 31.14 gestational weeks (GW). The supratentorial volume (STV) increased linearly at a rate of 15.22% per week. The SP+IZ (14.75% per week) and DG (15.56% per week) volumes increased at similar rates. The CP increased at a greater relative rate (18.00% per week), while the VENT (9.18% per week) changed more slowly. Therefore, CP increased as a fraction of STV and the VENT fraction declined. The total GMAT volume slightly increased then decreased after 25 GW. We did not detect volumetric sexual dimorphisms or total hemispheric volume asymmetries, which may emerge later in gestation. Further application of the automated fetal brain segmentation to later gestational ages will bridge the gap between volumetric studies of premature brain development and normal brain development in utero. PMID:21530634

  6. Growth trajectories of the human fetal brain tissues estimated from 3D reconstructed in utero MRI.

    PubMed

    Scott, Julia A; Habas, Piotr A; Kim, Kio; Rajagopalan, Vidya; Hamzelou, Kia S; Corbett-Detig, James M; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-08-01

    In the latter half of gestation (20-40 gestational weeks), human brain growth accelerates in conjunction with cortical folding and the deceleration of ventricular zone progenitor cell proliferation. These processes are reflected in changes in the volume of respective fetal tissue zones. Thus far, growth trajectories of the fetal tissue zones have been extracted primarily from 2D measurements on histological sections and magnetic resonance imaging (MRI). In this study, the volumes of major fetal zones-cortical plate (CP), subplate and intermediate zone (SP+IZ), germinal matrix (GMAT), deep gray nuclei (DG), and ventricles (VENT)--are calculated from automatic segmentation of motion-corrected, 3D reconstructed MRI. We analyzed 48 T2-weighted MRI scans from 39 normally developing fetuses in utero between 20.57 and 31.14 gestational weeks (GW). The supratentorial volume (STV) increased linearly at a rate of 15.22% per week. The SP+IZ (14.75% per week) and DG (15.56% per week) volumes increased at similar rates. The CP increased at a greater relative rate (18.00% per week), while the VENT (9.18% per week) changed more slowly. Therefore, CP increased as a fraction of STV and the VENT fraction declined. The total GMAT volume slightly increased then decreased after 25 GW. We did not detect volumetric sexual dimorphisms or total hemispheric volume asymmetries, which may emerge later in gestation. Further application of the automated fetal brain segmentation to later gestational ages will bridge the gap between volumetric studies of premature brain development and normal brain development in utero.

  7. Novel methods for estimating 3D distributions of radioactive isotopes in materials

    NASA Astrophysics Data System (ADS)

    Iwamoto, Y.; Kataoka, J.; Kishimoto, A.; Nishiyama, T.; Taya, T.; Okochi, H.; Ogata, H.; Yamamoto, S.

    2016-09-01

    In recent years, various gamma-ray visualization techniques, or gamma cameras, have been proposed. These techniques are extremely effective for identifying "hot spots" or regions where radioactive isotopes are accumulated. Examples of such would be nuclear-disaster-affected areas such as Fukushima or the vicinity of nuclear reactors. However, the images acquired with a gamma camera do not include distance information between radioactive isotopes and the camera, and hence are "degenerated" in the direction of the isotopes. Moreover, depth information in the images is lost when the isotopes are embedded in materials, such as water, sand, and concrete. Here, we propose two methods of obtaining depth information of radioactive isotopes embedded in materials by comparing (1) their spectra and (2) images of incident gamma rays scattered by the materials and direct gamma rays. In the first method, the spectra of radioactive isotopes and the ratios of scattered to direct gamma rays are obtained. We verify experimentally that the ratio increases with increasing depth, as predicted by simulations. Although the method using energy spectra has been studied for a long time, an advantage of our method is the use of low-energy (50-150 keV) photons as scattered gamma rays. In the second method, the spatial extent of images obtained for direct and scattered gamma rays is compared. By performing detailed Monte Carlo simulations using Geant4, we verify that the spatial extent of the position where gamma rays are scattered increases with increasing depth. To demonstrate this, we are developing various gamma cameras to compare low-energy (scattered) gamma-ray images with fully photo-absorbed gamma-ray images. We also demonstrate that the 3D reconstruction of isotopes/hotspots is possible with our proposed methods. These methods have potential applications in the medical fields, and in severe environments such as the nuclear-disaster-affected areas in Fukushima.

  8. Leaf Area Index Estimation in Vineyards from Uav Hyperspectral Data, 2d Image Mosaics and 3d Canopy Surface Models

    NASA Astrophysics Data System (ADS)

    Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.

    2015-08-01

    The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels from the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, unhealthy plants and canopy.
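    A minimal sketch of the kind of relationship-building described above: an ordinary least-squares fit of measured LAI against a UAV-derived canopy greenness level, reporting the coefficient of determination. All numbers are hypothetical; the paper's actual canopy metrics and regression details are not reproduced here.

```python
import numpy as np

# Hypothetical per-vine canopy greenness fractions derived from UAV data
# and the corresponding in-situ LAI measurements (illustrative values only).
canopy_greenness = np.array([0.12, 0.18, 0.25, 0.31, 0.36, 0.44, 0.52])
lai_ground_truth = np.array([0.40, 0.70, 1.10, 1.30, 1.60, 2.00, 2.40])

# Ordinary least-squares fit: LAI = a * greenness + b.
a, b = np.polyfit(canopy_greenness, lai_ground_truth, deg=1)
lai_predicted = a * canopy_greenness + b

# Coefficient of determination (r^2) of the linear relationship.
r = np.corrcoef(canopy_greenness, lai_ground_truth)[0, 1]
print(f"LAI = {a:.2f} * greenness + {b:.2f},  r^2 = {r**2:.2f}")
```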

  9. Guided wave-based J-integral estimation for dynamic stress intensity factors using 3D scanning laser Doppler vibrometry

    NASA Astrophysics Data System (ADS)

    Ayers, J.; Owens, C. T.; Liu, K. C.; Swenson, E.; Ghoshal, A.; Weiss, V.

    2013-01-01

    The application of guided waves to interrogate remote areas of structural components has been researched extensively in characterizing damage. However, there has been little work on using piezoelectric transducer-generated guided waves as a method of assessing stress intensity factors (SIF). This quantitative information enables accurate estimation of the remaining life of metallic structures exhibiting cracks, such as military and commercial transport vehicles. The proposed full wavefield approach, based on 3D laser vibrometry and piezoelectric transducer-generated guided waves, provides a practical means for estimation of dynamic stress intensity factors (DSIF) through local strain energy mapping via the J-integral. Strain energies and traction vectors can be conveniently estimated from wavefield data recorded using 3D laser vibrometry, through interpolation and subsequent spatial differentiation of the response field. Upon estimation of the J-integral, it is possible to obtain the corresponding DSIF terms. For this study, the experimental test matrix consists of aluminum plates with manufactured defects representing canonical elliptical crack geometries under uniaxial tension that are excited by surface-mounted piezoelectric actuators. The defects' major to minor axes ratios vary from unity to approximately 133. Finite element simulations are compared to experimental results and the relative magnitudes of the J-integrals are examined.
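    The following sketch illustrates the intermediate step described above: spatial differentiation of an interpolated displacement snapshot to obtain strains and the local strain energy density, here under a plane-stress assumption. The J-integral would then be evaluated along a contour around the defect using this energy density together with the traction terms. The material constants and the toy displacement field are assumptions for illustration, not the paper's data or exact procedure.

```python
import numpy as np

# Illustrative material constants for aluminium (assumed, plane stress).
E, nu = 70e9, 0.33

def strain_energy_density(ux, uy, dx, dy):
    """Strain energy density W from an in-plane displacement snapshot.

    ux, uy : 2-D arrays of in-plane displacements sampled on a regular grid
             (e.g. interpolated 3-D laser vibrometer data), indexed [y, x].
    dx, dy : grid spacings in metres.
    """
    # Spatial differentiation of the displacement field.
    dux_dy, dux_dx = np.gradient(ux, dy, dx)
    duy_dy, duy_dx = np.gradient(uy, dy, dx)
    exx, eyy = dux_dx, duy_dy
    exy = 0.5 * (dux_dy + duy_dx)

    # Plane-stress Hooke's law.
    sxx = E / (1 - nu**2) * (exx + nu * eyy)
    syy = E / (1 - nu**2) * (eyy + nu * exx)
    sxy = E / (1 + nu) * exy  # = 2 * G * exy

    # W = 1/2 * sigma : epsilon
    return 0.5 * (sxx * exx + syy * eyy + 2 * sxy * exy)

# Toy displacement snapshot on a 1 mm grid over a 100 mm x 100 mm patch.
x = np.linspace(0.0, 0.1, 101)
y = np.linspace(0.0, 0.1, 101)
X, Y = np.meshgrid(x, y)
ux = 1e-6 * np.sin(200 * X)
uy = 1e-6 * np.cos(200 * Y)
W = strain_energy_density(ux, uy, x[1] - x[0], y[1] - y[0])
print("peak strain energy density [J/m^3]:", W.max())
```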

  10. Evaluation of Scalar Value Estimation Techniques for 3D Medical Imaging

    DTIC Science & Technology

    1991-12-01

    Figure 5.3 (fragment): cell subdivision, factor 2, with tricubic interpolation estimating the voxel value, and marching cubes extraction of the hyperboloid surface.

  11. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate their positions with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidate object matches. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, here exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  12. Estimation of regeneration coverage in a temperate forest by 3D segmentation using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Amiri, Nina; Yao, Wei; Heurich, Marco; Krzystek, Peter; Skidmore, Andrew K.

    2016-10-01

    Forest understory and regeneration are important factors in sustainable forest management. However, understanding their spatial distribution in multilayered forests requires accurate and continuously updated field data, which are difficult and time-consuming to obtain. Therefore, cost-efficient inventory methods are required, and airborne laser scanning (ALS) is a promising tool for obtaining such information. In this study, we examine a clustering-based 3D segmentation in combination with ALS data for regeneration coverage estimation in a multilayered temperate forest. The core of our method is a two-tiered segmentation of the 3D point clouds into segments associated with regeneration trees. First, small parts of trees (super-voxels) are constructed through mean shift clustering, a nonparametric procedure for finding the local maxima of a density function. In the second step, we form a graph based on the mean shift clusters and merge them into larger segments using the normalized cut algorithm. These segments are used to obtain regeneration coverage of the target plot. Results show that, based on validation data from field inventory and terrestrial laser scanning (TLS), our approach correctly estimates up to 70% of regeneration coverage across the plots with different properties, such as tree height and tree species. The proposed method is negatively impacted by the density of the overstory because of decreasing ground point density. In addition, the estimated coverage has a strong relationship with the overstory tree species composition.
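    A minimal sketch of the first tier of the segmentation described above (mean shift clustering of ALS returns into super-voxels), using scikit-learn. The point cloud, bandwidth and cluster layout are invented for illustration; the normalized-cut merging of the second tier is only indicated in a comment.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Hypothetical ALS returns below the canopy: columns are x, y, z in metres.
rng = np.random.default_rng(1)
points = np.vstack([
    rng.normal(loc=[5.0, 5.0, 1.5], scale=0.4, size=(200, 3)),     # regeneration tree 1
    rng.normal(loc=[9.0, 4.0, 1.2], scale=0.4, size=(150, 3)),     # regeneration tree 2
    rng.uniform(low=[0, 0, 0], high=[12, 10, 0.3], size=(100, 3)),  # ground clutter
])

# First tier: mean shift finds local maxima of the point density and groups
# returns into "super-voxels" (small tree parts). The bandwidth controls the
# spatial scale of the clusters and is an assumed value here.
ms = MeanShift(bandwidth=0.8, bin_seeding=True)
labels = ms.fit_predict(points)

print("number of super-voxels:", labels.max() + 1)
print("cluster centres:\n", np.round(ms.cluster_centers_, 2))
# A second tier (not shown) would build a graph over these centres and merge
# them into whole-tree segments, e.g. with a normalized-cut algorithm.
```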

  13. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component of robotic navigation and mapping applications.

  14. ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation

    NASA Technical Reports Server (NTRS)

    Richardson, A. O.

    1996-01-01

    This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one-to-three keywords) and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman Filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated comprising the six deviations and their time rates of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
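    A minimal sketch of such a filter, assuming a constant-velocity model: the 12-element state holds the six deviations (X, Y, Z, pitch, yaw, roll) and their rates, and only the six deviations are observed. The time step and all covariance values below are illustrative assumptions, not those determined in the report.

```python
import numpy as np

dt = 1.0 / 30.0          # assumed frame interval of the image sequence
n, m = 12, 6             # state: 6 deviations + 6 rates; measurement: 6 deviations

# Constant-velocity state transition: deviation_(k+1) = deviation_k + dt * rate_k.
F = np.eye(n)
F[:6, 6:] = dt * np.eye(6)

# Observation matrix: only the six deviations (X, Y, Z, pitch, yaw, roll) are measured.
H = np.hstack([np.eye(6), np.zeros((6, 6))])

Q = 1e-4 * np.eye(n)     # system (process) noise covariance, assumed
R = 1e-2 * np.eye(m)     # observation noise covariance, assumed
P = 1e2 * np.eye(n)      # simple initialization of the error covariance
x = np.zeros(n)          # initial state estimate

def kalman_step(x, P, z):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new

# Feed a few noisy 6-DOF measurements through the filter.
rng = np.random.default_rng(0)
for k in range(5):
    z = np.array([0.1 * k, 0.0, 0.0, 0.0, 0.0, 0.01 * k]) + 0.05 * rng.standard_normal(6)
    x, P = kalman_step(x, P, z)
print("estimated deviations:", np.round(x[:6], 3))
print("estimated rates:     ", np.round(x[6:], 3))
```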

  15. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures, such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428
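    A small sketch of how a bent profile can be recovered from sparse curvature estimates, assuming small deflections and a curvature that varies linearly along the shaft between the two sensor stations. The sensor locations, curvature values and strain-to-curvature conversion noted in the comments are assumptions for illustration; the paper's actual reconstruction procedure is not reproduced here.

```python
import numpy as np

L = 0.15                      # needle length [m] (18 ga x 15 cm)
s = np.linspace(0.0, L, 151)  # arc length along the needle

# Curvatures inferred at the two sensor stations (illustrative values).
# In practice kappa = measured bending strain / distance of the fibre from
# the neutral axis, after temperature compensation.
s_sensors = np.array([0.03, 0.09])     # sensor locations along the shaft [m]
kappa_sensors = np.array([0.8, 0.3])   # curvature [1/m]

# Assume the curvature varies linearly along the shaft and extrapolate
# from the two stations (a modelling assumption, not the paper's method).
coeff = np.polyfit(s_sensors, kappa_sensors, deg=1)
kappa = np.polyval(coeff, s)

# Small-deflection beam kinematics with a clamped base:
# slope(s) = integral of kappa, deflection(s) = integral of slope.
slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))))
deflection = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(s))))

print(f"estimated tip deflection: {deflection[-1] * 1000:.2f} mm")
```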

  16. Instantaneous helical axis estimation from 3-D video data in neck kinematics for whiplash diagnostics.

    PubMed

    Woltring, H J; Long, K; Osterbauer, P J; Fuhr, A W

    1994-12-01

    To date, the diagnosis of whiplash injuries has been very difficult and largely based on subjective, clinical assessment. The work by Winters and Peles (Multiple Muscle Systems: Biomechanics and Movement Organization, Springer, New York, 1990) suggests that the use of finite helical axes (FHAs) in the neck may provide an objective assessment tool for neck mobility. Thus, the position of the FHA describing head-trunk motion may allow discrimination between normal and pathological cases such as decreased mobility in particular cervical joints. For noisy, unsmoothed data, the FHAs must be taken over rather large angular intervals if the FHAs are to be reconstructed with sufficient accuracy; in the Winters and Peles study, these intervals were approximately 10 degrees. In order to study the movements' microstructure, the present investigation uses instantaneous helical axes (IHAs) estimated from low-pass smoothed video data. Here, the small-step noise sensitivity of the FHA no longer applies, and proper low-pass filtering allows estimation of the IHA even for small rotation velocity omega of the moving neck. For marker clusters mounted on the head and trunk, technical system validation showed that the IHA direction dispersions were on the order of one degree, while their position dispersions were on the order of 1 mm, for low-pass cut-off frequencies of a few Hz (the dispersions were calculated from omega-weighted errors, in order to account for the adverse effects of vanishing omega). Various simple, planar models relating the instantaneous, 2-D centre of rotation with the geometry and kinematics of a multi-joint neck model are derived, in order to gauge the utility of the FHA and IHA approaches. Some preliminary results on asymptomatic and pathological subjects are provided, in terms of the 'ruled surface' formed by sampled IHAs and of their piercing points through the mid-sagittal plane during a prescribed flexion-extension movement of the neck.
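    For reference, the IHA itself follows from standard rigid-body kinematics: given the angular velocity of the head cluster and the velocity of any tracked point on it, the axis direction is the unit angular velocity and a point on the axis is obtained from the cross product of the angular velocity and the point velocity. The sketch below implements exactly that; the omega-weighting used for the dispersion statistics in the paper, and the low-pass filtering of the video data, are not reproduced.

```python
import numpy as np

def instantaneous_helical_axis(omega, p, v, eps=1e-9):
    """Instantaneous helical axis of a rigid body.

    omega : angular velocity vector of the body [rad/s]
    p, v  : position and velocity of any tracked point of the body
    Returns the unit axis direction, a point on the axis, and the
    translation velocity along the axis.
    """
    w2 = float(np.dot(omega, omega))
    if w2 < eps:
        raise ValueError("rotation velocity too small for a reliable IHA")
    n = omega / np.sqrt(w2)                 # axis direction
    s = p + np.cross(omega, v) / w2         # point on the axis (pierce point)
    v_axial = float(np.dot(v, n))           # sliding velocity along the axis
    return n, s, v_axial

# Toy example: head marker cluster rotating about the z-axis through (0.1, 0, 0).
omega = np.array([0.0, 0.0, 2.0])                    # rad/s
p = np.array([0.2, 0.0, 0.0])                        # tracked point position [m]
v = np.cross(omega, p - np.array([0.1, 0.0, 0.0]))   # its velocity
n, s, v_ax = instantaneous_helical_axis(omega, p, v)
print("axis direction:", n, " point on axis:", s, " axial velocity:", v_ax)
```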

  17. Assessment of intraoperative 3D imaging alternatives for IOERT dose estimation.

    PubMed

    García-Vázquez, Verónica; Marinetto, Eugenio; Guerra, Pedro; Valdivieso-Casique, Manlio Fabio; Calvo, Felipe Ángel; Alvarado-Vásquez, Eduardo; Sole, Claudio Vicente; Vosburgh, Kirby Gannett; Desco, Manuel; Pascau, Javier

    2016-08-23

    Intraoperative electron radiation therapy (IOERT) involves irradiation of an unresected tumour or a post-resection tumour bed. The dose distribution is calculated from a preoperative computed tomography (CT) study acquired using a CT simulator. However, differences between the actual IOERT field and that calculated from the preoperative study arise as a result of patient position, surgical access, tumour resection and the IOERT set-up. Intraoperative CT imaging may then enable a more accurate estimation of dose distribution. In this study, we evaluated three kilovoltage (kV) CT scanners with the ability to acquire intraoperative images. Our findings indicate that current IOERT plans may be improved using data based on actual anatomical conditions during radiation. The systems studied were two portable systems ("O-arm", a cone-beam CT [CBCT] system, and "BodyTom", a multislice CT [MSCT] system) and one CBCT integrated in a conventional linear accelerator (LINAC) ("TrueBeam"). TrueBeam and BodyTom showed good results, as the gamma pass rates of their dose distributions compared to the gold standard (dose distributions calculated from images acquired with a CT simulator) were above 97% in most cases. The O-arm yielded a lower percentage of voxels fulfilling gamma criteria owing to its reduced field of view (which left it prone to truncation artefacts). Our results show that the images acquired using a portable CT or even a LINAC with on-board kV CBCT could be used to estimate the dose of IOERT and improve the ability to evaluate and record the treatment administered to the patient.
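    For context, the comparison metric reported above is the gamma pass rate. A simplified global 2-D gamma analysis (3%/3 mm by default) is sketched below; this brute-force implementation on a regular grid is illustrative only and is not the clinical software or the exact criteria used in the study.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_diff_pct=3.0, dta_mm=3.0, low_dose_cutoff=0.1):
    """Brute-force global 2-D gamma analysis (3%/3 mm by default).

    dose_ref, dose_eval : 2-D dose arrays on the same regular grid
    spacing_mm          : grid spacing in mm
    """
    dd = dose_diff_pct / 100.0 * dose_ref.max()      # global dose-difference criterion
    search = int(np.ceil(dta_mm / spacing_mm)) + 1   # half-width of the search window
    ny, nx = dose_ref.shape
    gamma = np.full_like(dose_ref, np.inf, dtype=float)

    for j in range(ny):
        for i in range(nx):
            if dose_ref[j, i] < low_dose_cutoff * dose_ref.max():
                gamma[j, i] = np.nan                 # ignore very low-dose points
                continue
            j0, j1 = max(0, j - search), min(ny, j + search + 1)
            i0, i1 = max(0, i - search), min(nx, i + search + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
            ddiff2 = (dose_eval[j0:j1, i0:i1] - dose_ref[j, i]) ** 2
            gamma[j, i] = np.sqrt(np.min(dist2 / dta_mm ** 2 + ddiff2 / dd ** 2))

    valid = ~np.isnan(gamma)
    return 100.0 * np.mean(gamma[valid] <= 1.0)

# Toy comparison: the "evaluated" distribution is a slightly shifted reference.
x = np.linspace(-50, 50, 101)
X, Y = np.meshgrid(x, x)
ref = np.exp(-(X ** 2 + Y ** 2) / (2 * 20.0 ** 2))
ev = np.exp(-((X - 1.0) ** 2 + Y ** 2) / (2 * 20.0 ** 2))
print(f"gamma pass rate: {gamma_pass_rate(ref, ev, spacing_mm=1.0):.1f}%")
```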

  18. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations applied to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. Accuracy and agreement were assessed on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  19. Reproducing electric field observations during magnetic storms by means of rigorous 3-D modelling and distortion matrix co-estimation

    NASA Astrophysics Data System (ADS)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2014-12-01

    Electric fields induced in the conducting Earth by geomagnetic disturbances drive currents in power transmission grids, telecommunication lines or buried pipelines, which can cause service disruptions. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we revisit a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a magnetospheric source model described by low-degree spherical harmonics from observatory magnetic data. The actual electric field, however, is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and modelled electric fields. Using data of six magnetic storms that occurred between 2000 and 2003, we estimate distortion matrices for observatory sites onshore and on the ocean bottom. Reliable estimates are obtained, and the modellings are found to explain up to 90% of the measurements. We further find that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of the shape of electric field time series during magnetic storms. Since the method relies on precomputed responses of a 3-D Earth to geomagnetic disturbances, which can be recycled for each storm, the required computational resources are negligible. Our approach is thus suitable for real-time prediction of geomagnetically induced currents by combining it with reliable forecasts of the source field.
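    A minimal sketch of the distortion-matrix step, under the stated assumption that a real-valued 2 x 2 matrix A linearly relates the measured and modelled horizontal electric fields, E_meas(t) = A E_mod(t): A can be estimated by ordinary least squares over a storm's time series. The synthetic fields, the noise level and the variance-explained diagnostic below are invented for illustration and do not reproduce the paper's processing.

```python
import numpy as np

def estimate_distortion_matrix(E_model, E_measured):
    """Least-squares estimate of the real 2x2 distortion matrix A
    in E_measured(t) ~= A @ E_model(t).

    E_model, E_measured : arrays of shape (n_times, 2) holding the
    horizontal (e.g. north, east) electric field components.
    """
    # Solve E_model @ A.T ~= E_measured for A.T, column by column.
    At, *_ = np.linalg.lstsq(E_model, E_measured, rcond=None)
    return At.T

# Synthetic example: a known distortion plus noise.
rng = np.random.default_rng(42)
A_true = np.array([[1.2, -0.3], [0.1, 0.8]])
E_mod = rng.standard_normal((500, 2))
E_meas = E_mod @ A_true.T + 0.05 * rng.standard_normal((500, 2))

A_est = estimate_distortion_matrix(E_mod, E_meas)
print("estimated distortion matrix:\n", np.round(A_est, 3))

# Fraction of the measured-field variance explained by the distorted model field.
residual = E_meas - E_mod @ A_est.T
explained = 1.0 - residual.var() / E_meas.var()
print(f"variance explained: {100 * explained:.1f}%")
```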

  20. A computational model for estimating tumor margins in complementary tactile and 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Shamsil, Arefin; Escoto, Abelardo; Naish, Michael D.; Patel, Rajni V.

    2016-03-01

    Conventional surgical methods are effective for treating lung tumors; however, they impose high trauma and pain on patients. Minimally invasive surgery is a safer alternative as smaller incisions are required to reach the lung; however, it is challenging due to inadequate intraoperative tumor localization. To address this issue, a mechatronic palpation device was developed that incorporates tactile and ultrasound sensors capable of acquiring surface and cross-sectional images of palpated tissue. Initial work focused on tactile image segmentation and fusion of position-tracked tactile images, resulting in a reconstruction of the palpated surface to compute the spatial locations of underlying tumors. This paper presents a computational model capable of analyzing orthogonally-paired tactile and ultrasound images to compute the surface circumference and depth margins of a tumor. The framework also integrates an error compensation technique and an algebraic model to align all of the image pairs and to estimate the tumor depths within the tracked thickness of a palpated tissue. For validation, an ex vivo experimental study was conducted involving the complete palpation of 11 porcine liver tissues injected with iodine-agar tumors of varying sizes and shapes. The resulting tactile and ultrasound images were then processed using the proposed model to compute the tumor margins and compare them to fluoroscopy-based physical measurements. The results show a good negative correlation (r = -0.783, p = 0.004) for the tumor surface margins and a good positive correlation (r = 0.743, p = 0.009) for the tumor depth margins relative to the fluoroscopy-based measurements.

  1. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    NASA Astrophysics Data System (ADS)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.

  2. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    NASA Astrophysics Data System (ADS)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape, represented by triangle meshes, from 3D cardiac CT images. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  3. Estimation of Pulmonary Motion in Healthy Subjects and Patients with Intrathoracic Tumors Using 3D-Dynamic MRI: Initial Results

    PubMed Central

    Schoebinger, Max; Herth, Felix; Tuengerthal, Siegfried; Meinzer, Heinz-Peter; Kauczor, Hans-Ulrich

    2009-01-01

    Objective To evaluate a new technique for quantifying regional lung motion using 3D-MRI in healthy volunteers and to apply the technique in patients with intra- or extrapulmonary tumors. Materials and Methods Intraparenchymal lung motion during a whole breathing cycle was quantified in 30 healthy volunteers using 3D-dynamic MRI (FLASH [fast low angle shot] 3D, TRICKS [time-resolved interpolated contrast kinetics]). Qualitative and quantitative vector color maps and cumulative histograms were generated using a newly introduced semiautomatic algorithm. An analysis of lung motion was performed and correlated with an established 2D-MRI technique for verification. As a proof of concept, the technique was applied in five patients with non-small cell lung cancer (NSCLC) and 5 patients with malignant pleural mesothelioma (MPM). Results The correlation between intraparenchymal lung motion of the basal lung parts and the 2D-MRI technique was significant (r = 0.89, p < 0.05). Also, the vector color maps quantitatively illustrated regional lung motion in all healthy volunteers. No differences were observed between both hemithoraces, which was verified by cumulative histograms. The patients with NSCLC showed a local lack of lung motion in the area of the tumor. In the patients with MPM, there was globally diminished motion of the tumor-bearing hemithorax, which improved significantly after chemotherapy (CHT) (assessed by the 2D- and 3D-techniques) (p < 0.01). Using global spirometry, an improvement could also be shown (vital capacity 2.9 ± 0.5 versus 3.4 ± 0.6 L, FEV1 0.9 ± 0.2 versus 1.4 ± 0.2 L) after CHT, but this improvement was not significant. Conclusion 3D-dynamic MRI is able to quantify intraparenchymal lung motion. Local and global parenchymal pathologies can be precisely located, and the technique might become a new tool to quantify even slight changes in lung motion (e.g. in therapy monitoring, follow-up studies or even benign lung diseases). PMID:19885311

  4. On the Estimation Accuracy of the 3D Body Center of Mass Trajectory during Human Locomotion: Inverse vs. Forward Dynamics

    PubMed Central

    Pavei, Gaspare; Seminati, Elena; Cazzola, Dario; Minetti, Alberto E.

    2017-01-01

    The dynamics of body center of mass (BCoM) 3D trajectory during locomotion is crucial to the mechanical understanding of the different gaits. Forward Dynamics (FD) obtains BCoM motion from ground reaction forces while Inverse Dynamics (ID) estimates BCoM position and speed from motion capture of body segments. These two techniques are widely used in the literature on the estimation of BCoM. Despite the specific pros and cons of both methods, FD is less biased and considered the gold standard, while ID estimates strongly depend on the segmental model adopted to schematically represent the moving body. In these experiments a single subject walked, ran, (uni- and bi-laterally) skipped, and race-walked at a wide range of speeds on a treadmill with force sensors underneath. In all conditions a simultaneous motion capture (8 cameras, 36 markers) took place. 3D BCoM trajectories computed according to five marker set models of ID have been compared to the one obtained by FD on the same (about 2,700) strides. Such a comparison aims to check the validity of the investigated models to capture the “true” dynamics of gaits in terms of distance between paths, mechanical external work and energy recovery. Results allow us to conclude that: (1) among gaits, race walking is the most critical in being described by ID, (2) among the investigated segmental models, those capturing the motion of four limbs and trunk more closely reproduce the subtle temporal and spatial changes of BCoM trajectory within the strides of most gaits, (3) FD-ID discrepancy in external work is speed dependent within a gait in the most unsuccessful models, and (4) the internal work is not affected by the difference in BCoM estimates. PMID:28337148
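    A minimal sketch of the forward-dynamics idea, assuming the BCoM acceleration is obtained from the resultant ground reaction force and body weight and then integrated twice. The handling of the integration constants here (zero-mean velocity and position) is a crude simplification of published per-stride procedures, and the toy force signal is invented for illustration.

```python
import numpy as np

def bcom_trajectory_from_grf(F, mass, dt, g=(0.0, 0.0, 9.81)):
    """Forward-dynamics estimate of the BCoM trajectory from ground reaction forces.

    F    : (n_samples, 3) ground reaction force [N], lab axes x, y, z (z up)
    mass : body mass [kg]
    dt   : sampling interval [s]
    The integration constants are handled crudely here (zero-mean velocity,
    zero-mean displacement); published methods use per-stride constraints.
    """
    a = F / mass - np.asarray(g)                 # BCoM acceleration
    v = np.cumsum(a, axis=0) * dt                # first integration
    v -= v.mean(axis=0)                          # remove drift (assumes cyclic gait)
    p = np.cumsum(v, axis=0) * dt                # second integration
    return p - p.mean(axis=0)

# Toy example: vertical GRF oscillating about body weight during walking.
mass, dt = 70.0, 1.0 / 1000.0
t = np.arange(0.0, 2.0, dt)
Fz = mass * 9.81 * (1.0 + 0.2 * np.sin(2 * np.pi * 2.0 * t))   # 2 Hz oscillation
F = np.column_stack([np.zeros_like(t), np.zeros_like(t), Fz])
bcom = bcom_trajectory_from_grf(F, mass, dt)
print(f"vertical BCoM excursion: {np.ptp(bcom[:, 2]) * 100:.1f} cm")
```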

  5. 3D Wind Reconstruction and Turbulence Estimation in the Boundary Layer from Doppler Lidar Measurements using Particle Method

    NASA Astrophysics Data System (ADS)

    Rottner, L.; Baehr, C.

    2014-12-01

    Turbulent phenomena in the atmospheric boundary layer (ABL) are characterized by small spatial and temporal scales, which make them difficult to observe and to model. New remote sensing instruments, like Doppler Lidar, give access to fine, high-frequency observations of wind in the ABL. This study proposes using a method of nonlinear estimation based on these observations to reconstruct 3D wind in a hemispheric volume and to estimate atmospheric turbulent parameters. The wind observations are associated with particle systems which are driven by a local turbulence model. The particles have both fluid and stochastic properties; therefore, spatial averages and covariances may be deduced from the particles. Among the innovative aspects, we point out the absence of the common hypothesis of stationary-ergodic turbulence and the non-use of a particle model closure hypothesis. Every time observations are available, the 3D wind is reconstructed and turbulent parameters such as turbulent kinetic energy, dissipation rate, and turbulence intensity (TI) are provided. This study presents some results obtained using real wind measurements provided by a five-line-of-sight Lidar. Compared with classical methods (e.g. eddy covariance), our technique yields equivalent long-term results. Moreover, it provides finer, real-time turbulence estimates. To assess this new method, we suggest computing TI independently from different observation types. First, anemometer data are used to provide a TI reference. Then, raw and filtered Lidar observations are compared. The TI obtained from raw data is significantly higher than the reference one, whereas the TI estimated with the new algorithm is of the same order. In this study we have presented a new class of algorithm to reconstruct local random media. It offers a new way to understand turbulence in the ABL, in both stable and convective conditions. Later, it could be used to refine turbulence parametrization in meteorological meso-scale models.

  6. On the Estimation Accuracy of the 3D Body Center of Mass Trajectory during Human Locomotion: Inverse vs. Forward Dynamics.

    PubMed

    Pavei, Gaspare; Seminati, Elena; Cazzola, Dario; Minetti, Alberto E

    2017-01-01

    The dynamics of body center of mass (BCoM) 3D trajectory during locomotion is crucial to the mechanical understanding of the different gaits. Forward Dynamics (FD) obtains BCoM motion from ground reaction forces while Inverse Dynamics (ID) estimates BCoM position and speed from motion capture of body segments. These two techniques are widely used in the literature on the estimation of BCoM. Despite the specific pros and cons of both methods, FD is less biased and considered the gold standard, while ID estimates strongly depend on the segmental model adopted to schematically represent the moving body. In these experiments a single subject walked, ran, (uni- and bi-laterally) skipped, and race-walked at a wide range of speeds on a treadmill with force sensors underneath. In all conditions a simultaneous motion capture (8 cameras, 36 markers) took place. 3D BCoM trajectories computed according to five marker set models of ID have been compared to the one obtained by FD on the same (about 2,700) strides. Such a comparison aims to check the validity of the investigated models to capture the "true" dynamics of gaits in terms of distance between paths, mechanical external work and energy recovery. Results allow us to conclude that: (1) among gaits, race walking is the most critical in being described by ID, (2) among the investigated segmental models, those capturing the motion of four limbs and trunk more closely reproduce the subtle temporal and spatial changes of BCoM trajectory within the strides of most gaits, (3) FD-ID discrepancy in external work is speed dependent within a gait in the most unsuccessful models, and (4) the internal work is not affected by the difference in BCoM estimates.

  7. Influence of the Alveolar Cleft Type on Preoperative Estimation Using 3D CT Assessment for Alveolar Cleft

    PubMed Central

    Choi, Hang Suk; Choi, Hyun Gon; Kim, Soon Heum; Park, Hyung Jun; Shin, Dong Hyeok; Jo, Dong In; Kim, Cheol Keun

    2012-01-01

    Background The bone graft for the alveolar cleft has been accepted as one of the essential treatments for cleft lip patients. Precise preoperative measurement of the architecture and size of the bone defect in alveolar cleft has been considered helpful for increasing the success rate of bone grafting because those features may vary with the cleft type. Recently, some studies have reported on the usefulness of three-dimensional (3D) computed tomography (CT) assessment of alveolar bone defect; however, no study on the possible implication of the cleft type on the difference between the presumed and actual value has been conducted yet. We aimed to evaluate the clinical predictability of such measurement using 3D CT assessment according to the cleft type. Methods The study consisted of 47 pediatric patients. The subjects were divided according to the cleft type. CT was performed before the graft operation and assessed using image analysis software. The statistical significance of the difference between the preoperative estimation and intraoperative measurement was analyzed. Results The difference between the preoperative and intraoperative values was -0.1±0.3 cm3 (P=0.084). There was no significant intergroup difference, but the groups with a cleft palate showed a significant difference of -0.2±0.3 cm3 (P<0.05). Conclusions Assessment of the alveolar cleft volume using 3D CT scan data and image analysis software can help in selecting the optimal graft procedure and extracting the correct volume of cancellous bone for grafting. Considering the cleft type, it would be helpful to extract an additional volume of 0.2 cm3 in the presence of a cleft palate. PMID:23094242

  8. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    NASA Astrophysics Data System (ADS)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in calculations of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.

  9. Effects of computing parameters and measurement locations on the estimation of 3D NPS in non-stationary MDCT images.

    PubMed

    Miéville, Frédéric A; Bolard, Gregory; Bulling, Shelley; Gudinchet, François; Bochud, François O; Verdun, François R

    2013-11-01

    The goal of this study was to investigate the impact of computing parameters and the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS) in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm3 VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
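    For reference, a common way to compute a 3-D NPS from an ensemble of noise-only VOIs is sketched below: each VOI is detrended with a first-order (planar) fit to suppress structured noise, Fourier transformed, and the squared magnitudes are ensemble averaged and normalised by the voxel volume over the VOI size. The VOI dimensions, voxel sizes and white-noise test data are illustrative assumptions, not the study's acquisition settings.

```python
import numpy as np

def nps_3d(vois, voxel_size):
    """3-D noise power spectrum from an ensemble of noise-only VOIs.

    vois       : array of shape (n_voi, Lz, Ly, Lx), noise volumes in HU
    voxel_size : (bz, by, bx) voxel dimensions in mm
    Each VOI is detrended with a first-order (planar) fit to suppress
    structured noise before the Fourier transform.
    """
    n_voi, Lz, Ly, Lx = vois.shape
    bz, by, bx = voxel_size

    # Design matrix for a first-order 3-D polynomial (1, z, y, x).
    zz, yy, xx = np.meshgrid(np.arange(Lz), np.arange(Ly), np.arange(Lx), indexing="ij")
    A = np.column_stack([np.ones(zz.size), zz.ravel(), yy.ravel(), xx.ravel()])

    nps = np.zeros((Lz, Ly, Lx))
    for voi in vois:
        coef, *_ = np.linalg.lstsq(A, voi.ravel(), rcond=None)
        detrended = voi - (A @ coef).reshape(Lz, Ly, Lx)
        nps += np.abs(np.fft.fftn(detrended)) ** 2
    nps /= n_voi

    # Normalisation: NPS(f) = (bz*by*bx / (Lz*Ly*Lx)) * <|DFT|^2>
    return nps * (bz * by * bx) / (Lz * Ly * Lx)

# Toy ensemble of white-noise VOIs (sigma = 10 HU) on 0.391 x 0.391 x 0.625 mm voxels.
rng = np.random.default_rng(0)
vois = 10.0 * rng.standard_normal((20, 40, 64, 64))
nps = nps_3d(vois, voxel_size=(0.625, 0.391, 0.391))

# Sanity check: integrating the NPS over frequency recovers the noise variance.
df = 1.0 / (np.array([40, 64, 64]) * np.array([0.625, 0.391, 0.391]))
print("variance recovered from NPS [HU^2]:", round(nps.sum() * np.prod(df), 1))
```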

  10. Reproducing Electric Field Observations during Magnetic Storms by means of Rigorous 3-D Modelling and Distortion Matrix Co-estimation

    NASA Astrophysics Data System (ADS)

    Püthe, Christoph; Manoj, Chandrasekharan; Kuvshinov, Alexey

    2015-04-01

    Electric fields induced in the conducting Earth during magnetic storms drive currents in power transmission grids, telecommunication lines or buried pipelines. These geomagnetically induced currents (GIC) can cause severe service disruptions. The prediction of GIC is thus of great importance for public and industry. A key step in the prediction of the hazard to technological systems during magnetic storms is the calculation of the geoelectric field. To address this issue for mid-latitude regions, we developed a method that involves 3-D modelling of induction processes in a heterogeneous Earth and the construction of a model of the magnetospheric source. The latter is described by low-degree spherical harmonics; its temporal evolution is derived from observatory magnetic data. Time series of the electric field can be computed for every location on Earth's surface. The actual electric field however is known to be perturbed by galvanic effects, arising from very local near-surface heterogeneities or topography, which cannot be included in the conductivity model. Galvanic effects are commonly accounted for with a real-valued time-independent distortion matrix, which linearly relates measured and computed electric fields. Using data of various magnetic storms that occurred between 2000 and 2003, we estimated distortion matrices for observatory sites onshore and on the ocean bottom. Strong correlations between modellings and measurements validate our method. The distortion matrix estimates prove to be reliable, as they are accurately reproduced for different magnetic storms. We further show that 3-D modelling is crucial for a correct separation of galvanic and inductive effects and a precise prediction of electric field time series during magnetic storms. Since the required computational resources are negligible, our approach is suitable for a real-time prediction of GIC. For this purpose, a reliable forecast of the source field, e.g. based on data from satellites

  11. A hybrid 3D-Var data assimilation scheme for joint state and parameter estimation: application to morphodynamic modelling

    NASA Astrophysics Data System (ADS)

    Smith, P.; Nichols, N. K.; Dance, S.

    2011-12-01

    Data assimilation is typically used to provide initial conditions for state estimation, combining model predictions with observational data to produce an updated model state that most accurately characterises the true system state whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. However, even with perfect initial data, inaccurate representation of model parameters will lead to the growth of model error and therefore affect the ability of our model to accurately predict the true system state. A key question in model development is how to estimate parameters a priori. In most cases, parameter estimation is addressed as a separate issue from state estimation, and model calibration is performed offline in a separate calculation. Here we demonstrate how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state as part of the assimilation process. We present a novel hybrid data assimilation algorithm developed for application to parameter estimation in morphodynamic models. The new approach is based on a computationally inexpensive 3D-Var scheme, where the specification of the covariance matrices is crucial for success. For combined state-parameter estimation, it is particularly important that the cross-covariances between the parameters and the state are given a good a priori specification. Early experiments indicated that in order to yield reliable estimates of the true parameters, a flow-dependent representation of the state-parameter cross covariances is required. By combining ideas from 3D-Var and the extended Kalman filter we have developed a novel hybrid assimilation scheme that captures the flow-dependent nature of the state-parameter cross covariances without the computational expense of explicitly propagating the full system covariance matrix. We will give details of the formulation of this
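    A minimal sketch of state augmentation in a 3D-Var-style cost function, using a toy one-variable model with one uncertain decay parameter: the augmented vector [initial state, parameter] is adjusted to fit both the background and the observations. The static background covariance used here does not reproduce the flow-dependent state-parameter cross covariances that the hybrid scheme is designed to capture; the model, observations and covariance values are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy forecast model with one state variable (e.g. a bed level h) and one
# uncertain parameter p: dh/dt = -p * h, solved exactly as h(t) = h0 * exp(-p t).
def model(h0, p, t_obs):
    return h0 * np.exp(-p * np.asarray(t_obs))

# Synthetic truth and noisy observations.
rng = np.random.default_rng(3)
t_obs = np.array([1.0, 2.0, 3.0, 4.0])
h_true, p_true = 2.0, 0.30
y = model(h_true, p_true, t_obs) + 0.02 * rng.standard_normal(t_obs.size)

# Augmented background state z_b = [h0_b, p_b] with background and observation
# error covariances B and R (all values illustrative).
z_b = np.array([1.5, 0.20])
B = np.diag([0.5**2, 0.1**2])
R = 0.02**2 * np.eye(t_obs.size)
B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(z):
    """3D-Var-style cost on the augmented state-parameter vector."""
    h0, p = z
    d_b = z - z_b
    d_o = y - model(h0, p, t_obs)
    return 0.5 * d_b @ B_inv @ d_b + 0.5 * d_o @ R_inv @ d_o

result = minimize(cost, z_b, method="L-BFGS-B")
print("analysed state h0 and parameter p:", np.round(result.x, 3))
```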

  12. Towards patient-specific modeling of mitral valve repair: 3D transesophageal echocardiography-derived parameter estimation.

    PubMed

    Zhang, Fan; Kanik, Jingjing; Mansi, Tommaso; Voigt, Ingmar; Sharma, Puneet; Ionasec, Razvan Ioan; Subrahmanyan, Lakshman; Lin, Ben A; Sugeng, Lissa; Yuh, David; Comaniciu, Dorin; Duncan, James

    2017-01-01

    Transesophageal echocardiography (TEE) is routinely used to provide important qualitative and quantitative information regarding mitral regurgitation. Contemporary planning of surgical mitral valve repair, however, still relies heavily upon subjective predictions based on experience and intuition. While patient-specific mitral valve modeling holds promise, its effectiveness is limited by assumptions that must be made about constitutive material properties. In this paper, we propose and develop a semi-automated framework that combines machine learning image analysis with geometrical and biomechanical models to build a patient-specific mitral valve representation that incorporates image-derived material properties. We use our computational framework, along with 3D TEE images of the open and closed mitral valve, to estimate values for chordae rest lengths and leaflet material properties. These parameters are initialized using generic values and optimized to match the visualized deformation of mitral valve geometry between the open and closed states. Optimization is achieved by minimizing the summed Euclidean distances between the estimated and image-derived closed mitral valve geometry. The spatially varying material parameters of the mitral leaflets are estimated using an extended Kalman filter to take advantage of the temporal information available from TEE. This semi-automated and patient-specific modeling framework was tested on 15 TEE image acquisitions from 14 patients. Simulated mitral valve closures yielded average errors (measured by point-to-point Euclidean distances) of 1.86 ± 1.24 mm. The estimated material parameters suggest that the anterior leaflet is stiffer than the posterior leaflet and that these properties vary between individuals, consistent with experimental observations described in the literature.

  13. Extension of the Optimized Virtual Fields Method to estimate viscoelastic material parameters from 3D dynamic displacement fields

    PubMed Central

    Connesson, N.; Clayton, E.H.; Bayly, P.V.; Pierron, F.

    2015-01-01

    In-vivo measurement of the mechanical properties of soft tissues is essential to provide necessary data in biomechanics and medicine (early cancer diagnosis, study of traumatic brain injuries, etc.). Imaging techniques such as Magnetic Resonance Elastography (MRE) can provide 3D displacement maps in the bulk and in vivo, from which, using inverse methods, it is then possible to identify some mechanical parameters of the tissues (stiffness, damping, etc.). The main difficulties in these inverse identification procedures consist in dealing with the pressure waves contained in the data and with the experimental noise perturbing the spatial derivatives required during the processing. The Optimized Virtual Fields Method (OVFM) [1], designed to be robust to noise, presents a natural and rigorous solution to these problems. The OVFM has been adapted to identify material parameter maps from Magnetic Resonance Elastography (MRE) data consisting of 3-dimensional displacement fields in harmonically loaded soft materials. In this work, the method has been developed to identify elastic and viscoelastic models. The OVFM sensitivity to spatial resolution and to noise has been studied by analyzing 3D analytically simulated displacement data. This study evaluates and describes the OVFM identification performance: the spatial resolution and experimental noise induce different biases on the identified parameters. The well-known identification problems in the case of quasi-incompressible materials also find a natural solution in the OVFM. Moreover, an a posteriori criterion to estimate the local identification quality is proposed. The identification results obtained on actual experiments are briefly presented. PMID:26146416

  14. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    NASA Astrophysics Data System (ADS)

    McCurdy, B. M. C.

    2013-06-01

    An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possesses many characteristics that make them much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, real-time capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well-suited to dosimetric applications. This review will discuss clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations. Methods for using EPIDs in dosimetry applications will be discussed. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment will be reviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (i.e. hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  15. Neural network system for 3-D object recognition and pose estimation from a single arbitrary 2-D view

    NASA Astrophysics Data System (ADS)

    Khotanzad, Alireza R.; Liou, James H.

    1992-09-01

    In this paper, a robust and fast system for recognition as well as pose estimation of a 3-D object from a single 2-D perspective of it taken from an arbitrary viewpoint is developed. The approach is invariant to location, orientation, and scale of the object in the perspective. The silhouette of the object in the 2-D perspective is first normalized with respect to location and scale. A set of rotation-invariant features derived from complex and orthogonal pseudo-Zernike moments of the image is then extracted. The next stage includes a bank of multilayer feed-forward neural networks (NNs), each of which classifies the extracted features. The training set for these nets consists of perspective views of each object taken from several different viewing angles. The NNs in the bank differ in the size of their hidden layers as well as their initial conditions but receive the same input. The classification decisions of all the nets are combined through a majority voting scheme. It is shown that this collective decision making yields better results compared to a single NN operating alone. After the object is classified, two of its pose parameters, namely elevation and aspect angles, are estimated by another module of NNs in a two-stage process. The first stage identifies the likely region of the space that the object is being viewed from. In the second stage, an NN estimator for the identified region is used to compute the pose angles. Extensive experimental studies involving clean and noisy images of seven military ground vehicles are carried out. The performance is compared to two other traditional methods, namely a nearest neighbor rule and a binary decision tree classifier, and it is shown that our approach has major advantages over them.

  16. Intracranial haemodynamics during vasomotor stress test in unilateral internal carotid artery occlusion estimated by 3-D transcranial Doppler scanner.

    PubMed

    Zbornikova, V; Lassvik, C; Hillman, J

    1995-04-01

    Seventeen patients, 14 males and 3 females, mean age 64 years (range 45-77 years), with longstanding unilateral occlusion of the internal carotid artery and minimal neurological deficit, were evaluated in order to find criteria for potential benefit of extracranial-intracranial by-pass surgery. 3-D transcranial Doppler was used for estimation of mean velocities and pulsatility index in the middle cerebral artery, anterior cerebral artery and posterior cerebral artery before and after iv injection of 1 g acetazolamide. The anterior cerebral artery was the supplying vessel to the occluded side in 16 patients, and mean velocities were significantly (p < 0.001) faster on the occluded (59.3 +/- 14.5 cm sec-1) and nonoccluded (91.6 +/- 29.6 cm sec-1, p < 0.05) side than those found in the middle cerebral artery (39.2 +/- 13.7 and 50.9 +/- 8.5 cm sec-1). In two patients a decrease of mean velocity after acetazolamide was noted in the middle cerebral artery, indicating a 'steal' effect. In another 4 patients, a poor vasomotor response was seen, with less than an 11% increase in mean velocity in the middle cerebral artery. Differences between the posterior cerebral artery on the occluded and nonoccluded sides were insignificant, as were those between the middle and posterior cerebral arteries on the occluded side. Resting values of pulsatility index differed significantly (p < 0.01) only between the anterior and posterior cerebral artery on the nonoccluded side.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced using a computer, have been published for hobby and entertainment. It is said that benefits such as brain activation, improved visual acuity, reduced mental stress, and healing effects can be expected when a CGS is properly appreciated as a stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGS. Generally, CGS is classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram type has advantages and disadvantages when viewed directly with two eyes, which takes some training and a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of an irregular shift of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation with the parallel viewing method.

  18. Rapid review: Estimates of incremental breast cancer detection from tomosynthesis (3D-mammography) screening in women with dense breasts.

    PubMed

    Houssami, Nehmat; Turner, Robin M

    2016-12-01

    High breast tissue density increases breast cancer (BC) risk, and the risk of an interval BC in mammography screening. Density-tailored screening has mostly used adjunct imaging to screen women with dense breasts, however, the emergence of tomosynthesis (3D-mammography) provides an opportunity to steer density-tailored screening in new directions potentially obviating the need for adjunct imaging. A rapid review (a streamlined evidence synthesis) was performed to summarise data on tomosynthesis screening in women with heterogeneously dense or extremely dense breasts, with the aim of estimating incremental (additional) BC detection attributed to tomosynthesis in comparison with standard 2D-mammography. Meta-analysed data from prospective trials comparing these mammography modalities in the same women (N = 10,188) in predominantly biennial screening showed significant incremental BC detection of 3.9/1000 screens attributable to tomosynthesis (P < 0.001). Studies comparing different groups of women screened with tomosynthesis (N = 103,230) or with 2D-mammography (N = 177,814) yielded a pooled difference in BC detection of 1.4/1000 screens representing significantly higher BC detection in tomosynthesis-screened women (P < 0.001), and a pooled difference for recall of -23.3/1000 screens representing significantly lower recall in tomosynthesis-screened groups (P < 0.001), than for 2D-mammography. These estimates can inform planning of future trials of density-tailored screening and may guide discussion of screening women with dense breasts.

  19. A variational Data Assimilation algorithm to better estimate the salinity for the Berre lagoon with Telemac3D

    NASA Astrophysics Data System (ADS)

    Ricci, S. M.; Piacentini, A.; Riadh, A.; Goutal, N.; Razafindrakoto, E.; Zaoui, F.; Gant, M.; Morel, T.; Duchaine, F.; Thual, O.

    2012-12-01

    The Berre lagoon is a receptacle of 1000 Mm3 where salty sea water meets fresh water discharged by the hydroelectric plant at Saint-Chamas and by natural tributaries (the Arc and Touloubre rivers). By improving the quality of the simulation of the lagoon hydrodynamics with TELEMAC 3D, EDF R&D at LNHE aims at optimizing the operation of the hydroelectric production while preserving the lagoon ecosystem. To do so, and in a collaborative framework with CERFACS, a data assimilation (DA) algorithm is being implemented, using the Open-Palm coupler, to make the most of continuous (every 15 min), in-situ salinity measurements at 4 locations in the lagoon. Preliminary studies were carried out to quantify the difference between a reference simulation and the observations over a test period. It was shown that the model is able to represent relatively well the evolution of the salinity field at the observation stations, given some adjustments to the forcing near Caronte. Still, discrepancies of up to several g/l remain and could be corrected with the DA algorithm. Additionally, some numerical features should be fixed to ensure the robustness of the code with respect to compilation platforms and parallel computing. Similarly to the meteorological and oceanographic approaches, the observations are used sequentially to update the hydrodynamical state. More specifically, a 3D-FGAT algorithm is used to correct the salinity state at the beginning of an assimilation window. This variational algorithm relies on the hypothesis that the tangent linear physics can be approximated by a persistence model over a chosen time window. Sensitivity tests on a reference run showed that, in order to cope with this constraint, the analysis time window should be at most 3 h. For instance, it was shown that a local positive salinity increment of 0.5 g/l introduced at -5 m is dissipated by the numerical model over 1 day (physical and numerical diffusion mostly) (Figure a). Using an average estimate of the

  20. Advances in 3D soil mapping and water content estimation using multi-channel ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.

    2011-12-01

    Multi-channel ground-penetrating radar systems have recently become widely available, thereby opening new possibilities for shallow imaging of the subsurface. One advantage of these systems is that they can significantly reduce survey times by simultaneously collecting multiple lines of GPR reflection data. As a result, it is becoming more practical to complete 3D surveys - particularly in situations where the subsurface undergoes rapid changes, e.g., when monitoring infiltration and redistribution of water in soils. While 3D and 4D surveys can provide a degree of clarity that significantly improves interpretation of the subsurface, an even more powerful feature of the new multi-channel systems for hydrologists is their ability to collect data using multiple antenna offsets. Common mid-point (CMP) surveys have been widely used to estimate radar wave velocities, which can be related to water contents, by sequentially increasing the distance, i.e., offset, between the source and receiver antennas. This process is highly labor intensive using single-channel systems and therefore such surveys are often only performed at a few locations at any given site. In contrast, with multi-channel GPR systems it is possible to physically arrange an array of antennas at different offsets, such that a CMP-style survey is performed at every point along a radar transect. It is then possible to process these data to obtain detailed maps of wave velocity with a horizontal resolution on the order of centimeters. In this talk I review concepts underlying multi-channel GPR imaging with an emphasis on multi-offset profiling for water content estimation. Numerical simulations are used to provide examples that illustrate situations where multi-offset GPR profiling is likely to be successful, with an emphasis on considering how issues like noise, soil heterogeneity, vertical variations in water content and weak reflection returns affect algorithms for automated analysis of the data. Overall
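
    To illustrate the velocity-to-water-content chain behind a CMP-style analysis, here is a minimal sketch: a hyperbolic moveout fit t(x)^2 = t0^2 + x^2/v^2 over picked reflection traveltimes, conversion of the velocity to an apparent relative permittivity, and a volumetric water content via the widely used Topp et al. (1980) relation. The offsets and traveltimes are synthetic, not data from the talk.

```python
import numpy as np

c = 0.3  # free-space EM velocity, m/ns

offsets = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])         # antenna separations (m)
t_picked = np.array([13.4, 13.9, 14.6, 15.6, 16.8, 18.2])  # picked two-way traveltimes (ns)

# Linear fit of t^2 vs x^2: slope = 1/v^2, intercept = t0^2
slope, intercept = np.polyfit(offsets**2, t_picked**2, 1)
v = 1.0 / np.sqrt(slope)          # radar wave velocity, m/ns
eps_r = (c / v) ** 2              # apparent relative permittivity

# Topp's empirical relation for volumetric water content
theta = -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3
print(f"v = {v:.3f} m/ns, eps_r = {eps_r:.1f}, theta = {theta:.3f}")
```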

  1. Estimation of environmental capacity of phosphorus in Gorgan Bay, Iran, via a 3D ecological-hydrodynamic model.

    PubMed

    Ranjbar, Mohammad Hassan; Hadjizadeh Zaker, Nasser

    2016-11-01

    Gorgan Bay is a semi-enclosed basin located in the southeast of the Caspian Sea in Iran and is an important marine habitat for fish and seabirds. In the present study, the environmental capacity of phosphorus in Gorgan Bay was estimated using a 3D ecological-hydrodynamic numerical model and a linear programming model. The distribution of phosphorus, simulated by the numerical model, was used as an index for the occurrence of eutrophication and to determine the water quality response field of each of the pollution sources. The linear programming model was used to calculate and allocate the total maximum allowable loads of phosphorus to each of the pollution sources such that eutrophication is prevented while the maximum environmental capacity is achieved. In addition, the effect of an artificial inlet on the environmental capacity of the bay was investigated. Observations of surface currents in Gorgan Bay were made by GPS-tracked surface drifters to provide data for calibration and verification of numerical modeling. Drifters were deployed at five different points across the bay over a period of 5 days. The results indicated that the annual environmental capacity of phosphorus is approximately 141 t if a concentration of 0.0477 mg/l for phosphorus is set as the water quality criterion. Creating an artificial inlet with a width of 1 km in the western part of the bay would result in a threefold increase in the environmental capacity of the study area.
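
    As a rough illustration of the load-allocation step, the sketch below maximises the total phosphorus load subject to the water-quality criterion at a few control points with a linear program. The unit response matrix (concentration increase per unit load, which in the study comes from the ecological-hydrodynamic model) and the background concentration are hypothetical numbers, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

criterion = 0.0477          # mg/l, water quality criterion for phosphorus
background = 0.010          # mg/l, assumed ambient concentration (hypothetical)
A = np.array([              # response at 3 control points to 4 sources (mg/l per t/yr), made up
    [4.0e-4, 1.0e-4, 0.5e-4, 0.2e-4],
    [1.0e-4, 3.5e-4, 1.0e-4, 0.5e-4],
    [0.3e-4, 0.8e-4, 3.0e-4, 1.5e-4],
])

n_sources = A.shape[1]
res = linprog(
    c=-np.ones(n_sources),                                  # maximize total allowable load
    A_ub=A, b_ub=np.full(A.shape[0], criterion - background),
    bounds=[(0, None)] * n_sources,
    method="highs",
)
print("allowable loads (t/yr):", np.round(res.x, 1), "total:", round(res.x.sum(), 1))
```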

  2. Estimation of bisphenol A-Human toxicity by 3D cell culture arrays, high throughput alternatives to animal tests.

    PubMed

    Lee, Dong Woo; Oh, Woo-Yeon; Yi, Sang Hyun; Ku, Bosung; Lee, Moo-Yeal; Cho, Yoon Hee; Yang, Mihi

    2016-09-30

    Bisphenol A (BPA) has been widely used for manufacturing polycarbonate plastics and epoxy resins and has been extensively tested in animals to predict human toxicity. In order to reduce the use of animals for toxicity assessment and provide more accurate information on BPA toxicity in humans, we encapsulated Hep3B human hepatoma cells in alginate and cultured them in three dimensions (3D) on a micropillar chip coupled to a panel of metabolic enzymes on a microwell chip. As a result, we were able to assess the toxicity of BPA under various metabolic enzyme conditions using a high-throughput microscale assay; sample volumes were nearly 2,000 times smaller than those required for a 96-well plate. We applied a total of 28 different enzymes to each chip, including 10 cytochrome P450s (CYP450s), 10 UDP-glycosyltransferases (UGTs), 3 sulfotransferases (SULTs), alcohol dehydrogenase (ADH), and aldehyde dehydrogenase 2 (ALDH2). Phase I enzyme mixtures, phase II enzyme mixtures, and a combination of phase I and phase II enzymes were also applied to the chip. BPA toxicity was higher in samples containing CYP2E1 than controls, which contained no enzymes (IC50, 184±16μM and 270±25.8μM, respectively, p<0.01). However, BPA-induced toxicity was alleviated in the presence of ADH (IC50, 337±17.9μM), ALDH2 (335±13.9μM), and SULT1E1 (318±17.7μM) (p<0.05). CYP2E1-mediated cytotoxicity was confirmed by quantifying unmetabolized BPA using HPLC/FD. Therefore, we suggest the present micropillar/microwell chip platform as an effective alternative to animal testing for estimating BPA toxicity via human metabolic systems.
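
    For readers unfamiliar with how IC50 values like those above are extracted, the following is a minimal sketch of fitting a simple Hill-type dose-response model to normalized viability data. The concentrations and viabilities are made up and the paper's actual curve-fitting procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n):
    """Two-parameter Hill model: viability falls from 1 toward 0 around the IC50."""
    return 1.0 / (1.0 + (conc / ic50) ** n)

conc = np.array([10, 30, 100, 300, 1000], dtype=float)      # BPA concentration, uM (synthetic)
viability = np.array([0.98, 0.93, 0.72, 0.35, 0.08])         # normalized to no-treatment control

(ic50, n_hill), _ = curve_fit(hill, conc, viability, p0=[200.0, 1.0])
print(f"IC50 = {ic50:.0f} uM, Hill slope = {n_hill:.2f}")
```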

  3. Potential Geophysical Field Transformations and Combined 3D Modelling for Estimation the Seismic Site Effects on Example of Israel

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev; Meirova, Tatiana

    2015-04-01

  4. Comparison of 2D and 3D modeled tumor motion estimation/prediction for dynamic tumor tracking during arc radiotherapy.

    PubMed

    Liu, Wu; Ma, Xiangyu; Yan, Huagang; Chen, Zhe; Nath, Ravinder; Li, Haiyun

    2017-03-06

    Many real-time imaging techniques have been developed to localize the target in 3D space or in the 2D beam's eye view (BEV) plane for intrafraction motion tracking in radiation therapy. With tracking system latency, the 3D-modeled method is expected to be more accurate even in terms of 2D BEV tracking error. No quantitative analysis, however, has been reported. In this study, we simulated co-planar arc deliveries using respiratory motion data acquired from 42 patients to quantitatively compare the accuracy between 2D BEV and 3D-modeled tracking in arc therapy and determine whether 3D information is needed for motion tracking. We used our previously developed low kV dose adaptive MV-kV imaging and motion compensation framework as a representative of 3D-modeled methods. It optimizes the balance between additional kV imaging dose and 3D tracking accuracy and solves the MLC blockage issue. With simulated Gaussian marker detection errors (zero mean and 0.39 mm standard deviation) and ~155/310/460 ms tracking system latencies, the mean percentages of time that the target moved >2 mm from the predicted 2D BEV position are 1.1%/4.0%/7.8% and 1.3%/5.8%/11.6% for 3D-modeled and 2D-only tracking, respectively. The corresponding average BEV RMS errors are 0.67/0.90/1.13 mm and 0.79/1.10/1.37 mm. Compared to the 2D method, the 3D method reduced the average RMS unresolved motion along the beam direction from ~3 mm to ~1 mm, resulting in an average dosimetric advantage of only <1% in the depth direction. Only for a small fraction of patients, when the tracking latency is long, did the 3D-modeled method show significant improvement in BEV tracking accuracy, indicating a potential dosimetric advantage. However, if the tracking latency is short (~150 ms or less), those improvements are limited. Therefore, 2D BEV tracking has sufficient targeting accuracy for most clinical cases. The 3D technique is, however, still important in solving the MLC blockage problem during 2D BEV tracking.

  5. Estimating 3D variation in active-layer thickness beneath arctic streams using ground-penetrating radar

    USGS Publications Warehouse

    Brosten, T.R.; Bradford, J.H.; McNamara, J.P.; Gooseff, M.N.; Zarnetske, J.P.; Bowden, W.B.; Johnston, M.E.

    2009-01-01

    We acquired three-dimensional (3D) ground-penetrating radar (GPR) data across three stream sites on the North Slope, AK, in August 2005, to investigate the dependence of thaw depth on channel morphology. Data were migrated with mean velocities derived from multi-offset GPR profiles collected across a stream section within each of the 3D survey areas. GPR data interpretations from the alluvial-lined stream site illustrate greater thaw depths beneath riffle and gravel bar features relative to neighboring pool features. The peat-lined stream sites indicate the opposite; greater thaw depths beneath pools and shallower thaw beneath the connecting runs. Results provide detailed 3D geometry of active-layer thaw depths that can support hydrological studies seeking to quantify transport and biogeochemical processes that occur within the hyporheic zone.

  6. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    DTIC Science & Technology

    2013-10-18

    area of 3D point estimation of flapping-wing UASs. The benefit of designing and developing such a system is instrumental in researching various...are many benefits to using SIFT in tracking. It detects features that are invariant to image scale and rotation, and are shown to provide robust...provided to estimate background motion for optical flow background subtraction. The experiments with the static background showed minute benefit in

  7. Mechanistic and quantitative studies of bystander response in 3D tissues for low-dose radiation risk estimations

    SciTech Connect

    Amundson, Sally A.

    2013-06-12

    We have used the MatTek 3-dimensional human skin model to study the gene expression response of a 3D model to low and high dose low LET radiation, and to study the radiation bystander effect as a function of distance from the site of irradiation with either alpha particles or low LET protons. We have found response pathways that appear to be specific for low dose exposures, which could not have been predicted from high dose studies. We also report the time- and distance-dependent expression of a large number of genes in bystander tissue. The bystander response in 3D tissues showed many similarities to that described previously in 2D cultured cells, but also showed some differences.

  8. Evaluation of 1D, 2D and 3D nodule size estimation by radiologists for spherical and non-spherical nodules through CT thoracic phantom imaging

    NASA Astrophysics Data System (ADS)

    Petrick, Nicholas; Kim, Hyun J. Grace; Clunie, David; Borradaile, Kristin; Ford, Robert; Zeng, Rongping; Gavrielides, Marios A.; McNitt-Gray, Michael F.; Fenimore, Charles; Lu, Z. Q. John; Zhao, Binsheng; Buckler, Andrew J.

    2011-03-01

    The purpose of this work was to estimate bias in measuring the size of spherical and non-spherical lesions by radiologists using three sizing techniques under a variety of simulated lesion and reconstruction slice thickness conditions. We designed a reader study in which six radiologists estimated the size of 10 synthetic nodules of various sizes, shapes and densities embedded within a realistic anthropomorphic thorax phantom from CT scan data. In this manuscript we report preliminary results for the first four readers (Readers 1-4). Two repeat CT scans of the phantom containing each nodule were acquired using a Philips 16-slice scanner at 0.8 and 5 mm slice thicknesses. The readers measured the sizes of all nodules for each of the 40 resulting scans (10 nodules x 2 slice thicknesses x 2 repeat scans) using three sizing techniques (1D longest in-slice dimension; 2D area from longest in-slice dimension and corresponding longest perpendicular dimension; 3D semi-automated volume) in each of 2 reading sessions. The normalized size was estimated for each sizing method and an inter-comparison of bias among methods was performed. The overall relative biases (standard deviation) of the 1D, 2D and 3D methods for the four readers subset (Readers 1-4) were -13.4 (20.3), -15.3 (28.4) and 4.8 (21.2) percentage points, respectively. The relative bias for the 3D volume sizing method was statistically lower than that for either the 1D or 2D method (p<0.001 for 1D vs. 3D and 2D vs. 3D).
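
    For clarity on the bias metric, a minimal worked sketch follows: each measurement is normalized to the known nodule size, and the relative bias and its spread are reported in percentage points, mirroring the "-13.4 (20.3)" style of summary above. The numbers are made up, not reader data from the study.

```python
import numpy as np

true_size = 10.0                                       # known nodule size (same units as measured)
measured = np.array([8.3, 9.1, 8.7, 8.9, 9.4, 8.1])    # hypothetical readings for one reader/method

relative_bias = 100.0 * (measured - true_size) / true_size   # percentage points
print(f"relative bias = {relative_bias.mean():.1f} "
      f"({relative_bias.std(ddof=1):.1f}) percentage points")
```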

  9. Modification of Eccentric Gaze-Holding

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Paloski, W. H.; Somers, J. T.; Leigh, R. J.; Wood, S. J.; Kornilova, L.

    2006-01-01

    effects of acceleration on gaze stability were examined during centrifugation (+2 Gx and +2 Gz) using a total of 23 subjects. In all of our investigations eccentric gaze-holding was established by having the subjects acquire an eccentric target (+/-30 degrees horizontal, +/- 15 degrees vertical) that was flashed for 750 msec in an otherwise dark room. Subjects were instructed to hold gaze on the remembered position of the flashed target for 20 sec. Immediately following the 20 sec period, subjects were cued to return to the remembered center position and to hold gaze there for an additional 20 sec. Following this 20 sec period the center target was briefly flashed and the subject made any corrective eye movement back to the true center position. Conventionally, the ability to hold eccentric gaze is estimated by fitting the natural log of centripetal eye drifts by linear regression and calculating the time constant (τc) of these slow phases of "gaze-evoked nystagmus". However, because our normative subjects sometimes showed essentially no drift (τc effectively infinite), statistical estimation and inference on the effect of target direction was performed on values of the decay constant θ = 1/τc, which we found was well modeled by a gamma distribution. Subjects showed substantial variance of their eye drifts, which were centrifugal in approximately 20% of cases, and > 40% for down gaze. Using the ensuing estimated gamma distributions, we were able to conclude that rightward and leftward gaze holding were not significantly different, but that upward gaze holding was significantly worse than downward (p<0.05). We also concluded that vertical gaze holding was significantly worse than horizontal (p<0.05). In the case of left and right roll, we found that both had a similar improvement to horizontal gaze holding (p<0.05), but did not have a significant effect on vertical gaze holding. For pitch tilts, both tilt angles significantly decreased gaze-holding ability
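
    The conventional analysis described above can be illustrated with a minimal sketch: slow-phase eye position drifting back toward centre is modelled as e(t) = e0 * exp(-t/τ), a linear regression of ln(e) against time gives the time constant τ, and the decay constant θ = 1/τ is the quantity used for inference. The drift trace below is synthetic, not subject data.

```python
import numpy as np

t = np.linspace(0, 20, 200)                        # s, duration of the gaze-holding epoch
tau_true = 25.0                                    # s, assumed true time constant
eye_pos = 30.0 * np.exp(-t / tau_true)             # deg, drift of eye position toward centre
eye_pos += np.random.default_rng(0).normal(0, 0.2, t.size)   # measurement noise

# Fit ln(e) = ln(e0) - t/tau by linear regression; slope = -1/tau
slope, _ = np.polyfit(t, np.log(np.clip(eye_pos, 1e-3, None)), 1)
tau_hat = -1.0 / slope
theta_hat = 1.0 / tau_hat                          # decay constant modelled with a gamma distribution
print(f"tau = {tau_hat:.1f} s, theta = {theta_hat:.3f} 1/s")
```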

  10. The 2011 Eco3D Flight Campaign: Vegetation Structure and Biomass Estimation from Simultaneous SAR, Lidar and Radiometer Measurements

    NASA Technical Reports Server (NTRS)

    Fatoyinbo, Temilola; Rincon, Rafael; Harding, David; Gatebe, Charles; Ranson, Kenneth Jon; Sun, Guoqing; Dabney, Phillip; Roman, Miguel

    2012-01-01

    The Eco3D campaign was conducted in the Summer of 2011. As part of the campaign three unique and innovative NASA Goddard Space Flight Center airborne sensors were flown simultaneously: The Digital Beamforming Synthetic Aperture Radar (DBSAR), the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) and the Cloud Absorption Radiometer (CAR). The campaign covered sites from Quebec to Southern Florida and thereby acquired data over forests ranging from Boreal to tropical wetlands. This paper describes the instruments and sites covered and presents the first images resulting from the campaign.

  11. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited in cases where the user is wearing glasses because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: First, we construct a new control device for the illuminator, which includes four illuminators that are positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses or not in the initial stage by counting the number of white pixels in an image that is captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape that is defined by the three SR positions and the gaze position is calculated. Experimental results showed that the average gaze detection error with 20 persons was about 0.70° and the processing time was 63.72 ms per frame. PMID:24473283
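
    The parallelogram step mentioned above reduces to simple vector arithmetic: with three of the four corneal specular reflections visible, the missing corner of the parallelogram they form is recovered from the other three. Which reflection counts as "opposite" depends on which illuminator was switched off; the pixel coordinates in this sketch are made up.

```python
import numpy as np

def missing_corner(p_adj1, p_opposite, p_adj2):
    """Fourth corner of a parallelogram from two adjacent corners and the opposite one."""
    return p_adj1 + p_adj2 - p_opposite

sr_top_left  = np.array([312.0, 204.0])   # detected corneal SRs (pixel coordinates, hypothetical)
sr_top_right = np.array([330.0, 205.0])
sr_bot_right = np.array([331.0, 221.0])

sr_bot_left = missing_corner(sr_top_left, sr_top_right, sr_bot_right)
print(sr_bot_left)   # -> [313. 220.], the estimated unseen SR position
```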

  12. Reconstruction of the ionospheric 3D electron density distribution by assimilation of ionosonde measurements and operational TEC estimations

    NASA Astrophysics Data System (ADS)

    Gerzen, Tatjana; Wilken, Volker; Jakowski, Norbert; Hoque, Mainul M.

    2013-04-01

    New methods to generate maps of the F2 layer peak electron density of the ionosphere (NmF2) and to reconstruct the ionospheric 3D electron density distribution will be presented. For validation, reconstructed NmF2 maps will be compared with peak electron density measurements from independent ionosonde stations. The ionosphere is the ionized part of the upper Earth's atmosphere lying between about 50 km and 1000 km above the Earth's surface. From the applications perspective, the electron density, Ne, is certainly one of the most important parameters of the ionosphere because of its strong impact on radio signal propagation. Especially the critical frequency, foF2, which is related to the F2 layer peak electron density, NmF2, according to the equation NmF2 [m^-3] = 1.24 × 10^10 × (foF2 [MHz])^2 and forms the lower limit for the maximum usable frequency (MUF), is of particular interest with regard to HF radio communication applications. In a first order approximation, the ionospheric delay of transionospheric radio waves of frequency f is proportional to 1/f^2 and to the integral of the electron density (total electron content - TEC) along the ray path. Thus, the information about the total electron content along the receiver-to-satellite ray path can be obtained from the dual frequency measurements permanently transmitted by GNSS satellites. As the data basis for our reconstruction approaches, we use the vertical sounding measurements of the ionosonde stations providing foF2 and routinely generated TEC maps in SWACI (http://swaciweb.dlr.de) at DLR Neustrelitz. The basic concept of our approach is as follows: to reconstruct NmF2 maps, we assimilate the ionosonde data into the global Neustrelitz F2 layer Peak electron Density Model (NPDM) by means of a successive corrections method. The TEC maps are produced by assimilating actual ground-based GPS measurements providing TEC into an operational version of the Neustrelitz TEC Model (NTCM). Finally, the derived NmF2 and TEC maps in
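
    A short worked example of the NmF2/foF2 relation quoted above, for an illustrative (assumed) critical frequency:

```python
# NmF2 [m^-3] = 1.24e10 * (foF2 [MHz])^2
foF2 = 7.0                         # MHz, a typical mid-latitude daytime value (assumed)
NmF2 = 1.24e10 * foF2 ** 2
print(f"NmF2 = {NmF2:.2e} m^-3")   # ~6.1e11 m^-3
```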

  13. Simultaneous estimation of size, radial and angular locations of a malignant tumor in a 3-D human breast - A numerical study.

    PubMed

    Das, Koushik; Mishra, Subhash C

    2015-08-01

    This article reports a numerical study pertaining to simultaneous estimation of the size, radial location and angular location of a malignant tumor in a 3-D human breast. The breast skin surface temperature profile is specific to a tumor of specific size and location. The temperature profiles are always Gaussian in shape, though their peak magnitudes and areas differ according to the size and location of the tumor. The temperature profiles are obtained by solving the Pennes bioheat equation using the finite-element-based solver COMSOL 4.3a. With the temperature profiles known, simultaneous estimation of the size, radial location and angular location of the tumor is done using the curve fitting method. The effect of measurement errors is also included in the study. Estimates are accurate, and since the curve fitting method does not require solution of the governing bioheat equation in the inverse analysis, the estimation is very fast.
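
    As a minimal sketch of the curve-fitting idea (not the paper's inverse procedure), the skin-surface temperature rise over the tumour can be fitted with a Gaussian whose amplitude, centre and width are then mapped back to tumour size and location. The surface profile below is synthetic, not COMSOL output.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, baseline):
    return baseline + amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

x = np.linspace(-60, 60, 121)                                  # mm along the skin surface
profile = gaussian(x, 0.8, 12.0, 15.0, 33.0)                   # deg C, synthetic "measured" profile
profile += np.random.default_rng(1).normal(0, 0.02, x.size)    # measurement noise

popt, _ = curve_fit(gaussian, x, profile, p0=[1.0, 0.0, 20.0, 33.0])
amp, mu, sigma, base = popt
print(f"peak rise {amp:.2f} C at {mu:.1f} mm, width sigma = {sigma:.1f} mm")
```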

  14. The 2D versus 3D imaging trade-off: The impact of over- or under-estimating small throats for simulating permeability in porous media

    NASA Astrophysics Data System (ADS)

    Peters, C. A.; Crandell, L. E.; Um, W.; Jones, K. W.; Lindquist, W. B.

    2011-12-01

    Geochemical reactions in the subsurface can alter the porosity and permeability of a porous medium through mineral precipitation and dissolution. While effects on porosity are relatively well understood, changes in permeability are more difficult to estimate. In this work, pore-network modeling is used to estimate the permeability of a porous medium using pore and throat size distributions. These distributions can be determined from 2D Scanning Electron Microscopy (SEM) images of thin sections or from 3D X-ray Computed Tomography (CT) images of small cores. Each method has unique advantages as well as unique sources of error. 3D CT imaging has the advantage of reconstructing a 3D pore network without the inherent geometry-based biases of 2D images but is limited by resolutions around 1 μm. 2D SEM imaging has the advantage of higher resolution, and the ability to examine sub-grain scale variations in porosity and mineralogy, but is limited by the small size of the sample of pores that are quantified. A pore network model was created to estimate flow permeability in a sand-packed experimental column investigating reaction of sediments with caustic radioactive tank wastes in the context of the Hanford, WA site. Before, periodically during, and after reaction, 3D images of the porous medium in the column were produced using the X2B beam line facility at the National Synchrotron Light Source (NSLS) at Brookhaven National Lab. These images were interpreted using 3DMA-Rock to characterize the pore and throat size distributions. After completion of the experiment, the column was sectioned and imaged using 2D SEM in backscattered electron mode. The 2D images were interpreted using erosion-dilation to estimate the pore and throat size distributions. A bias correction was determined by comparison with the 3D image data. A special image processing method was developed to infer the pore space before reaction by digitally removing the precipitate. The different sets of pore

  15. Estimation of local stresses and elastic properties of a mortar sample by FFT computation of fields on a 3D image

    SciTech Connect

    Escoda, J.; Willot, F.; Jeulin, D.; Sanahuja, J.; Toulemonde, C.

    2011-05-15

    This study concerns the prediction of the elastic properties of a 3D mortar image, obtained by micro-tomography, using a combined image segmentation and numerical homogenization approach. The microstructure is obtained by segmentation of the 3D image into aggregates, voids and cement paste. Full-field computations of the elastic response of mortar are undertaken using the Fast Fourier Transform method. Emphasis is placed on highly-contrasted properties between aggregates and matrix, to anticipate needs for creep or damage computation. The representative volume element, i.e. the volume size necessary to compute the effective properties with a prescribed accuracy, is given. Overall, the volumes used in this work were sufficient to estimate the effective response of mortar with a precision of 5%, 6% and 10% for contrast ratios of 100, 1000 and 10,000, respectively. Finally, a statistical and local characterization of the component of the stress field parallel to the applied loading is carried out.

  16. Quantitative estimation of 3-D fiber course in gross histological sections of the human brain using polarized light.

    PubMed

    Axer, H; Axer, M; Krings, T; Keyserlingk, D G

    2001-02-15

    Series of polarized light images can be used to obtain quantitative estimates of the angles of inclination (z-direction) and direction (in the xy-plane) of central nervous fibers in histological sections of the human brain. (1) The corpus callosum of a formalin-fixed human brain was sectioned at different angles of inclination of nerve fibers and at different thicknesses of the samples. The minimum and maximum intensities, and their differences, revealed a linear relationship to the angle of inclination of fibers. It was demonstrated that sections with a thickness of 80-120 microm are best suited for estimating the angle of inclination. (2) Afterwards, the optic tracts of eight formalin-fixed human brains were sliced at different angles of fiber inclination at 100 microm. Measurements of intensity in 30 pixels in each section were used to calculate a linear calibration function. The maximum intensities and the differences between maximum and minimum values measured with two polars only were best suited for estimation of fiber inclination. (3) Gross histological brain slices of formalin-fixed human brains were digitized under azimuths from 0 to 80 degrees using two polars only. These sequences were used to estimate the inclination of fibers (in the z-direction). The same slices were digitized under azimuths from 0 to 160 degrees in steps of 20 degrees, additionally using a quarter wave plate. These sequences were used to estimate the direction of the fibers in the xy-direction. The method can be used to produce maps of fiber orientation in gross histological sections of the human brain similar to the fiber orientation maps derived by diffusion weighted magnetic resonance imaging.

  17. 3d morphometric analysis of lunar impact craters: a tool for degradation estimates and interpretation of maria stratigraphy

    NASA Astrophysics Data System (ADS)

    Vivaldi, Valerio; Massironi, Matteo; Ninfo, Andrea; Cremonese, Gabriele

    2015-04-01

    In this study we have applied 3D morphometric analysis to impact craters on the Moon by means of high resolution DTMs derived from LROC (Lunar Reconnaissance Orbiter Camera) NAC (Narrow Angle Camera) (0.5 to 1.5 m/pixel). The objective is twofold: i) evaluating crater degradation and ii) exploring the potential of this approach for Maria stratigraphic interpretation. In relation to the first objective we have considered several craters with different diameters representative of the four classes of degradation, with C1 being the freshest and C4 the most degraded (Arthur et al., 1963; Wilhelms, 1987). DTMs of these craters were elaborated according to a multiscalar approach (Wood, 1996) by testing different ranges of kernel sizes (e.g. 15-35-50-75-100), in order to retrieve morphometric variables such as slope, curvatures and openness. In particular, curvatures were calculated along different planes (e.g. profile curvature and plan curvature) and used to characterize the different sectors of a crater (rim crest, floor, internal slope and related boundaries), enabling us to evaluate its degradation. The gradient of the internal slope of different craters representative of the four classes shows a decrease of the slope mean value from C1 to C4 in relation to crater age and diameter. Indeed, degradation is influenced by gravitational processes (landslides, dry flows), as well as space weathering that induces both smoothing effects on the morphologies and infilling processes within the crater, the main results being a lowering and enlargement of the rim crest and a shallowing of the crater depth. As far as the stratigraphic application is concerned, morphometric analysis was applied to recognize morphologic features within some simple craters, in order to understand the stratigraphic relationships among different lava layers within Mare Serenitatis. A clear-cut rheological boundary at a depth of 200 m within the small fresh Linnè crater (diameter: 2.22 km), firstly hypothesized
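
    As a minimal illustration of the basic morphometric step, the sketch below derives slope (in degrees) from a gridded DTM with finite differences. The multiscalar kernel analysis, curvatures and openness used in the study are not reproduced; the DTM here is a synthetic bowl-shaped depression standing in for a crater.

```python
import numpy as np

cell = 1.0                                   # m, DTM grid spacing (illustrative)
x = np.arange(-100, 101) * cell
X, Y = np.meshgrid(x, x)
dtm = -40.0 * np.exp(-(X**2 + Y**2) / (2 * 40.0**2))   # synthetic crater-like depression (m)

# Slope from the gradient magnitude of the elevation surface
dz_dy, dz_dx = np.gradient(dtm, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(f"max slope: {slope_deg.max():.1f} deg")
```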

  18. Dental wear estimation using a digital intra-oral optical scanner and an automated 3D computer vision method.

    PubMed

    Meireles, Agnes Batista; Vieira, Antonio Wilson; Corpas, Livia; Vandenberghe, Bart; Bastos, Flavia Souza; Lambrechts, Paul; Campos, Mario Montenegro; Las Casas, Estevam Barbosa de

    2016-01-01

    The objective of this work was to propose an automated and direct process to grade tooth wear intra-orally. Eight extracted teeth were etched with acid for different times to produce wear and scanned with an intra-oral optical scanner. Computer vision algorithms were used for alignment and comparison among models. Wear volume was estimated, and visual scoring was performed to assess reliability. Results demonstrated that it is possible to directly detect submillimeter differences in tooth surfaces with an automated method, with results similar to those obtained by direct visual inspection. The investigated method proved to be reliable for comparison of measurements over time.

  19. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    SciTech Connect

    Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-15

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  20. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yun, G. S.; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C.; Sabbagh, S. A.; Park, Y. S.; Lee, S. G.; Bak, J. G.

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  1. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system.

    PubMed

    Lee, J; Yun, G S; Lee, J E; Kim, M; Choi, M J; Lee, W; Park, H K; Domier, C W; Luhmann, N C; Sabbagh, S A; Park, Y S; Lee, S G; Bak, J G

    2014-06-01

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.

  2. Virtual forensic entomology: improving estimates of minimum post-mortem interval with 3D micro-computed tomography.

    PubMed

    Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina

    2012-07-10

    We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarter of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. In order to find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5 M aqueous iodine for 1 day; batch two for 7 days; batch three was tagged with a radiopaque dye; batch four was left unstained (control). Pupae stained in iodine for 7 days yielded the micro-CT scans with the best contrast. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than using external morphological characters alone. Age specific developmental characters are described. The technique could be used as a measure to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stages of insect evidence collected.

  3. 3D dosimetry estimation for selective internal radiation therapy (SIRT) using SPECT/CT images: a phantom study

    NASA Astrophysics Data System (ADS)

    Debebe, Senait A.; Franquiz, Juan; McGoron, Anthony J.

    2015-03-01

    Selective Internal Radiation Therapy (SIRT) is a common way to treat liver cancer that cannot be treated surgically. SIRT involves administration of Yttrium-90 (90Y) microspheres via the hepatic artery after a diagnostic procedure using technetium-99m (99mTc) macroaggregated albumin (MAA) to detect extrahepatic shunting to the lung or the gastrointestinal tract. Accurate quantification of the radionuclide administered to patients and the radiation dose absorbed by different organs is of importance in SIRT. Accurate dosimetry for SIRT allows optimization of dose delivery to the target tumor and may enable assessment of the efficacy of the treatment. In this study, we proposed a method that can efficiently estimate radiation absorbed dose from 90Y bremsstrahlung SPECT/CT images of the liver and the surrounding organs. Bremsstrahlung radiation from 90Y was simulated using the Compton window of 99mTc (78 keV at 57%). 99mTc images acquired at the photopeak energy window were used as a standard to examine the accuracy of dosimetry prediction by the simulated bremsstrahlung images. A Liqui-Phil abdominal phantom with liver, stomach and two tumor inserts was imaged using a Philips SPECT/CT scanner. The Dose Point Kernel convolution method was used to find the radiation absorbed dose at the voxel level for a three-dimensional dose distribution. This method will allow for a complete estimate of the distribution of radiation absorbed dose by tumors, liver, stomach and other surrounding organs at the voxel level. The method provides a quantitative means of predicting SIRT treatment outcome and administered-dose response for patients who undergo the treatment.
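
    The dose point kernel (DPK) convolution step can be sketched in a few lines: a voxelised cumulated-activity map is convolved with a radially symmetric kernel describing dose deposited per decay as a function of distance. The kernel below is a made-up radial falloff, not a real 90Y DPK, and the activity map is synthetic.

```python
import numpy as np
from scipy.signal import fftconvolve

voxel = 0.4                                   # cm, voxel size (illustrative)
activity = np.zeros((40, 40, 40))             # cumulated activity map (arbitrary units)
activity[18:23, 18:23, 18:23] = 1.0           # a small "hot" tumour region

# Build a radially symmetric kernel on a small grid around the origin
r = np.arange(-7, 8) * voxel
X, Y, Z = np.meshgrid(r, r, r, indexing="ij")
dist = np.sqrt(X**2 + Y**2 + Z**2)
kernel = np.exp(-dist / 0.5) / (dist**2 + voxel**2)   # hypothetical DPK-like falloff

dose = fftconvolve(activity, kernel, mode="same")      # voxel-level 3D dose distribution
print(dose.shape, float(dose.max()))
```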

  4. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMS errors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMS errors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMS errors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load.
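
    To make the correspondence metrics concrete, here is a minimal sketch: the RMS error between an ambulatory estimate and a reference moment time series, plus agreement of per-trial absolute peaks. The paper reports ICCs for the peaks; a simple Pearson correlation is used here as a stand-in, and all time series below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 3, 300)
moments_ref = 150 * np.sin(np.pi * t / 3) ** 2            # Nm, "laboratory" L5/S1 moment (synthetic)
moments_imc = moments_ref + rng.normal(0, 8, t.size)      # Nm, "ambulatory" estimate (synthetic)

rmse = np.sqrt(np.mean((moments_imc - moments_ref) ** 2))

peaks_ref = np.array([148, 132, 160, 141, 155], dtype=float)   # per-trial peak moments
peaks_imc = peaks_ref + rng.normal(0, 5, peaks_ref.size)
r = np.corrcoef(peaks_ref, peaks_imc)[0, 1]
print(f"RMSE = {rmse:.1f} Nm, peak correlation r = {r:.3f}")
```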

  5. Eye Gaze Tracking using Correlation Filters

    SciTech Connect

    Karakaya, Mahmut; Boehnen, Chris Bensing; Bolme, David S

    2014-01-01

    In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by the correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within an angle of approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This corresponds approximately to a circle of diameter 2 inches on a screen that is at arm's length from the subject. At this accuracy it is possible to determine what regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.
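
    A quick worked check of the geometry behind the 2-inch figure, assuming "arm's length" is roughly 25 inches (an assumption, not a value from the paper):

```python
import math

viewing_distance = 25.0          # inches, assumed arm's-length viewing distance
angular_error = 4.5              # degrees, mid-point of the 4-5 degree range

# Diameter of the on-screen circle subtended by the angular error
diameter = 2 * viewing_distance * math.tan(math.radians(angular_error / 2))
print(f"{diameter:.1f} inches")   # ~2.0 inches
```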

  6. High-resolution 3D seismic reflection imaging across active faults and its impact on seismic hazard estimation in the Tokyo metropolitan area

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tatsuya; Sato, Hiroshi; Abe, Susumu; Kawasaki, Shinji; Kato, Naoko

    2016-10-01

    We collected and interpreted high-resolution 3D seismic reflection data across a hypothesized fault scarp, along the largest active fault that could generate hazardous earthquakes in the Tokyo metropolitan area. The processed and interpreted 3D seismic cube, linked with nearby borehole stratigraphy, suggests that a monocline that deforms lower Pleistocene units is unconformably overlain by middle Pleistocene conglomerates. Judging from structural patterns and vertical separation on the lower-middle Pleistocene units and the ground surface, the hypothesized scarp was interpreted as a terrace riser rather than as a manifestation of late Pleistocene structural growth resulting from repeated fault activity. Devastating earthquake scenarios had been predicted along the fault in question based on its proximity to the metropolitan area; however, our new results lead to a significant decrease in estimated fault length and consequently in the estimated magnitude of future earthquakes associated with reactivation. This suggests a greatly reduced seismic hazard in the Tokyo metropolitan area from earthquakes generated by active intraplate crustal faults.

  7. Estimation of gas-hydrate distribution from 3-D seismic data in a small area of the Ulleung Basin, East Sea

    NASA Astrophysics Data System (ADS)

    Yi, Bo-Yeon; Kang, Nyeon-Keon; Yoo, Dong-Geun; Lee, Gwang-Hoon

    2014-05-01

    We estimated the gas-hydrate resource in a small (5 km x 5 km) area of the Ulleung Basin, East Sea, from 3-D seismic and well-log data together with core measurement data, using seismic inversion and multi-attribute transform techniques. The multi-attribute transform technique finds the relationship between measured logs and a combination of seismic attributes and various post-stack and pre-stack attributes computed from inversion. First, the gas-hydrate saturation and S-wave velocity at the wells were estimated from the simplified three-phase Biot-type equation (STPBE). The core X-ray diffraction data were used to compute the elastic properties of the solid components of the sediment, which are the key input parameters to the STPBE. Next, simultaneous pre-stack inversion was carried out to obtain P-wave impedance, S-wave impedance, density and lambda-mu-rho attributes. Then, the porosity and gas-hydrate saturation of the 3-D seismic volume were predicted from the multi-attribute transform. Finally, the gas-hydrate resource was computed by multiplying the porosity and gas-hydrate saturation volumes.

  8. Estimation of Effective Transmission Loss Due to Subtropical Hydrometeor Scatters using a 3D Rain Cell Model for Centimeter and Millimeter Wave Applications

    NASA Astrophysics Data System (ADS)

    Ojo, J. S.; Owolawi, P. A.

    2014-12-01

    The problem of hydrometeor scattering on microwave radio communication downlinks continues to be of interest as the number of ground and Earth-space terminals continually grows. The interference resulting from hydrometeor scattering usually leads to a reduction in the signal-to-noise ratio (SNR) at the affected terminal and at worst can even result in total link outage. In this paper, an attempt has been made to compute the effective transmission loss due to subtropical hydrometeors on vertically polarized signals in Earth-satellite propagation paths at Ku, Ka and V band frequencies based on the modified Capsoni 3D rain cell model. The 3D rain cell model has been adopted and modified using the subtropical log-normal distributions of raindrop sizes and introducing the equivalent path length through rain in the estimation of the attenuation, instead of the usual specific attenuation, in order to account for the attenuation of both wanted and unwanted paths to the receiver. Co-channel interference at the same frequency is particularly prone to a higher amount of unwanted signal at the elevation considered. The importance of joint transmission is also considered.

  9. Three-dimensional (3D) coseismic deformation map produced by the 2014 South Napa Earthquake estimated and modeled by SAR and GPS data integration

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Albano, Matteo; Fernández, José; Palano, Mimmo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2016-04-01

    In this work we present a 3D map of coseismic displacements due to the 2014 Mw 6.0 South Napa earthquake, California, obtained by integrating displacement data from SAR Interferometry (InSAR), Multiple Aperture Interferometry (MAI), Pixel Offset Tracking (POT) and GPS data acquired by both permanent stations and campaign sites. This seismic event produced significant surface deformation along the 3D components, causing damage to vineyards, roads and houses. The remote sensing results, i.e. InSAR, MAI and POT, were obtained from the pair of SAR images provided by the Sentinel-1 satellite, launched on April 3rd, 2014. They were acquired on August 7th and 31st along descending orbits with an incidence angle of about 23°. The GPS dataset includes measurements from 32 stations belonging to the Bay Area Regional Deformation Network (BARDN), 301 continuous stations available from the UNAVCO and the CDDIS archives, and 13 additional campaign sites from Barnhart et al., 2014 [1]. These data constrain the horizontal and vertical displacement components, proving to be helpful for the adopted integration method. We exploit Bayesian theory to search for the 3D coseismic displacement components. In particular, for each point, we construct an energy function and solve the problem to find a global minimum. Experimental results are consistent with a strike-slip fault mechanism with an approximately NW-SE fault plane. Indeed, the 3D displacement map shows a strong North-South (NS) component, peaking at about 15 cm, a few kilometers from the epicenter. The East-West (EW) displacement component reaches its maximum (~10 cm) south of the city of Napa, whereas the vertical one (UP) is smaller, although subsidence on the order of 8 cm on the east side of the fault can be observed. Source modelling was performed by inverting the estimated displacement components. The best fitting model is given by a ~N330° E-oriented and ~70° dipping fault with a prevailing

  10. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue always impedes the development of 3D technology. In this paper we propose some factors affecting human perception of depth as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene movement characteristics. They play important roles in the viewer's visual perception. If there are many objects with a certain velocity and the scene changes fast, viewers will feel uncomfortable. We also propose a new algorithm to calculate the weight values of these factors and analyse their effect on visual fatigue. The MSE (Mean Square Error) of different blocks is considered within each frame and between frames for 3D stereoscopic videos. The depth frame is divided into a number of blocks that overlap and share pixels (by half a block) in the horizontal and vertical directions, which avoids ignoring edge information of objects in the image. The distribution of these data is then characterized by kurtosis, with regard to the regions at which the human eye mainly gazes. Weight values are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained; when it is applied between the current and previous frames, the temporal variation and scene movement variation are obtained. The three factors above are linearly combined to give an objective assessment value for 3D videos directly. The coefficients of the three factors can be estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
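
    The general mechanics of the weighting step can be sketched as follows: block-wise MSE over a depth frame with half-block overlap, kurtosis of the resulting distribution, then normalisation across the factors. The depth data are synthetic, the third factor is only a placeholder, and the paper's exact recipe is not reproduced.

```python
import numpy as np
from scipy.stats import kurtosis

def block_mse(depth_a, depth_b, block=16):
    """MSE between two depth frames on overlapping blocks (stride = block / 2)."""
    step = block // 2
    h, w = depth_a.shape
    vals = []
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            d = depth_a[i:i + block, j:j + block] - depth_b[i:i + block, j:j + block]
            vals.append(np.mean(d ** 2))
    return np.array(vals)

rng = np.random.default_rng(3)
depth_curr = rng.random((144, 176))                       # synthetic depth frame
depth_prev = depth_curr + rng.normal(0, 0.05, depth_curr.shape)

# Kurtosis (Pearson definition, so values stay positive) of the block-MSE distributions
spatial_k = kurtosis(block_mse(depth_curr, np.full_like(depth_curr, depth_curr.mean())), fisher=False)
temporal_k = kurtosis(block_mse(depth_curr, depth_prev), fisher=False)
factors = np.array([spatial_k, temporal_k, 0.5 * temporal_k])   # third entry: scene-movement placeholder
weights = factors / factors.sum()                               # normalised kurtosis weights
print(np.round(weights, 3))
```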

  11. Gaze as a biometric

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2014-01-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing different still images with different spatial relationships. Specifically, we created 5 visual dot-pattern tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
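
    As a minimal sketch of the feature step described above, raw eye-tracking samples (screen coordinates at a fixed sampling rate) are converted into gaze velocities, the sequences that are then modelled with Hidden Markov Models. The sample trace below is synthetic and the HMM training itself is not shown.

```python
import numpy as np

fs = 120.0                                                   # Hz, assumed tracker sampling rate
rng = np.random.default_rng(4)
gaze_xy = np.cumsum(rng.normal(0, 2.0, (600, 2)), axis=0)    # px, synthetic gaze trace

# Gaze velocity = distance between consecutive samples times the sampling rate
velocity = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs   # px/s per sample
print(f"mean gaze velocity: {velocity.mean():.0f} px/s over {velocity.size} samples")
```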

  12. Gaze as a biometric

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dotpattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.

  13. Robust extrapolation scheme for fast estimation of 3D ising field partition functions: application to within-subject fMRI data analysis.

    PubMed

    Risser, Laurent; Vincent, Thomas; Ciuciu, Philippe; Idier, Jérôme

    2009-01-01

    In this paper, we present a fast numerical scheme to estimate Partition Functions (PF) of 3D Ising fields. Our strategy is applied to the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to prespecified regions. Then, the proposed extrapolation method allows us to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole brain fMRI datasets feasible.

  14. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
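
    A minimal sketch of the keypoint-based idea, not the paper's full algorithm (no frame rejection and no roll/pitch/yaw/scale estimation): ORB features are matched between the left and right frames and the residual vertical disparity of the matches is the quantity to monitor against the tolerable margin. The image paths are placeholders.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical stereo pair from the device
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints in both frames
orb = cv2.ORB_create(nfeatures=1000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Brute-force Hamming matching with cross-checking, keep the best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:200]

# Vertical disparity of each match (y_right - y_left)
dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches])
print(f"median vertical disparity: {np.median(dy):.2f} px over {len(dy)} matches")
```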

  15. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  16. Volume estimation of rift-related magmatic features using seismic interpretation and 3D inversion of gravity data on the Guinea Plateau, West Africa

    NASA Astrophysics Data System (ADS)

    Kardell, Dominik A.

    The two end-member concept of mantle plume-driven versus far field stress-driven continental rifting anticipates high volumes of magma emplaced close to the rift-initiating plume, whereas relatively low magmatic volumes are predicted at large distances from the plume where the rifting is thought to be driven by far field stresses. We test this concept at the Guinea Plateau, which represents the last area of separation between Africa and South America, by investigating rift-related volumes of magmatism, using borehole, 3D seismic, and gravity data to run structural 3D inversions in two different data areas. Despite our interpretation of igneous rocks spanning large areas of continental shelf covered by the available seismic surveys, the calculated volumes in the Guinea Plateau barely match the magmatic volumes of other magma-poor margins and thus endorse the aforementioned concept. While the volcanic units on the shelf seem to be characterized more dominantly by horizontally deposited extrusive volcanic flows distributed over larger areas, numerous paleo-seamounts pierce complexly deformed pre- and syn-rift sedimentary units on the slope. As non-uniqueness is an omnipresent issue when using potential field data to model geologic features, our method faced some challenges in the areas exhibiting complicated geology. In this situation, less rigid constraints were applied in the modeling process. The misfit issues were successfully addressed by filtering the frequency content of the gravity data according to the depth of the investigated geology. In this work, we classify and compare our volume estimates for rift-related magmatism between the Guinea Fracture Zone (FZ) and the Saint Paul's FZ while presenting the refinements applied to our modeling technique.

  17. Application of the H/V and SPAC Method to Estimate a 3D Shear Wave Velocity Model, in the City of Coatzacoalcos, Veracruz.

    NASA Astrophysics Data System (ADS)

    Morales, L. E. A. P.; Aguirre, J.; Vazquez Rosas, R.; Suarez, G.; Contreras Ruiz-Esparza, M. G.; Farraz, I.

    2014-12-01

    Methods that use seismic noise or microtremors have become very useful tools worldwide due to their low cost, the relative simplicity of data collection, the fact that they are non-invasive (there is no need to alter or perforate the study site), and their relatively simple analysis procedures. Nevertheless, the geological structures estimated by these methods are assumed to consist of parallel, isotropic and homogeneous layers, so the precision of the estimated structure is lower than that of conventional seismic methods. In light of these facts, this study aimed at finding a new way to interpret the results obtained from seismic noise methods. Seven triangular SPAC (Aki, 1957) arrays, varying in size from 10 to 100 meters, were deployed in the city of Coatzacoalcos, Veracruz. From the autocorrelation between the stations of each array, a Rayleigh wave phase velocity dispersion curve was calculated. This dispersion curve was used to obtain a parallel-layered S-wave velocity (VS) structure for the study site. Subsequently, the horizontal-to-vertical spectral ratio of microtremors, H/V (Nogoshi and Igarashi, 1971; Nakamura, 1989, 2000), was calculated for each vertex of the SPAC triangular arrays, and the fundamental frequency was estimated at each vertex from the H/V spectrum. Interpreting the H/V spectral ratio curves as a proxy for the Rayleigh wave ellipticity curve, a series of VS structures was inverted for each vertex of the SPAC array. Lastly, the VS structures were combined into a 3D velocity model with an exploration depth of approximately 100 meters and velocities ranging from 206 m/s to 920 m/s. The 3D model revealed a thinning of the low-velocity layers, in good agreement with the variation of the fundamental frequencies observed at each vertex. With the previous kind of analysis a preliminary model can be obtained as a first
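
    A minimal sketch of the H/V computation described above, under common but assumed choices (Welch spectra, 60 s windows, quadratic mean of the horizontal components); the SPAC autocorrelation analysis and the inversion for VS structures are beyond this fragment.

```python
# Minimal sketch: horizontal-to-vertical (H/V) spectral ratio from a
# three-component microtremor record, with the fundamental frequency taken
# at the H/V peak.  Window length, frequency band and the quadratic-mean
# definition of the horizontal spectrum are assumed, common choices.
import numpy as np
from scipy.signal import welch

def hv_ratio(north, east, vertical, fs):
    """Return (frequencies, H/V curve, peak frequency) for one record."""
    nperseg = int(60 * fs)                       # 60 s windows (assumption)
    f, Pn = welch(north, fs, nperseg=nperseg)
    _, Pe = welch(east, fs, nperseg=nperseg)
    _, Pz = welch(vertical, fs, nperseg=nperseg)
    h = np.sqrt((Pn + Pe) / 2.0)                 # quadratic mean of horizontals
    v = np.sqrt(Pz)
    hv = h / np.maximum(v, 1e-20)
    band = (f > 0.2) & (f < 20.0)                # band of interest (assumption)
    f0 = f[band][np.argmax(hv[band])]            # fundamental frequency estimate
    return f, hv, f0
```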

  18. A 3D lower limb musculoskeletal model for simultaneous estimation of musculo-tendon, joint contact, ligament and bone forces during gait.

    PubMed

    Moissenet, Florent; Chèze, Laurence; Dumas, Raphaël

    2014-01-03

    Musculo-tendon forces and joint reaction forces are typically estimated using a two-step method, first computing the musculo-tendon forces by a static optimization procedure and then deducing the joint reaction forces from the force equilibrium. However, this method does not allow the interactions between musculo-tendon forces and joint reaction forces in establishing this equilibrium to be studied, and the joint reaction forces are usually overestimated. This study introduces a new 3D lower limb musculoskeletal model based on a one-step static optimization procedure allowing simultaneous estimation of musculo-tendon, joint contact, ligament and bone forces during gait. It is postulated that this approach, by giving access to the forces transmitted by these musculoskeletal structures at the hip, tibiofemoral, patellofemoral and ankle joints, modeled using anatomically consistent kinematic models, should ease the validation of the model using joint contact forces measured with instrumented prostheses. A blinded validation based on four datasets was made under two different minimization conditions (i.e., C1 - only musculo-tendon forces are minimized, and C2 - musculo-tendon, joint contact, ligament and bone forces are minimized while focusing more specifically on tibiofemoral joint contacts). The results show that the model is able, in most cases, to estimate the correct timing of musculo-tendon forces during normal gait (i.e., the mean coefficient of active/inactive state concordance between estimated musculo-tendon force and measured EMG envelopes was C1: 65.87% and C2: 60.46%). The results also showed that the model is potentially able to estimate joint contact, ligament and bone forces well, and more specifically medial (i.e., the mean RMSE between estimated joint contact force and in vivo measurement was C1: 1.14BW and C2: 0.39BW) and lateral (i.e., C1: 0.65BW and C2: 0.28BW) tibiofemoral contact forces during normal gait. However, the results remain highly influenced by the
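
    A toy sketch of the static-optimization idea underlying such models: minimize a muscle-effort criterion subject to a net joint-moment constraint. The two-muscle geometry, moment arms and maximal forces are invented for illustration; the paper's one-step formulation additionally includes joint contact, ligament and bone forces and anatomically consistent joint models.

```python
# Toy sketch of static optimization for musculo-tendon forces: minimise the
# sum of squared activations subject to a net joint-moment constraint.
# The two-muscle geometry (moment arms, maximal forces) is invented for
# illustration only.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, -0.03])        # moment arms (m), flexor/extensor (assumption)
F_max = np.array([1500.0, 2000.0]) # maximal isometric forces (N) (assumption)
M_joint = 40.0                     # net joint moment to reproduce (N.m)

cost = lambda a: np.sum(a ** 2)                       # "effort" criterion
moment_eq = {"type": "eq",
             "fun": lambda a: r @ (a * F_max) - M_joint}
bounds = [(0.0, 1.0)] * 2                             # activations in [0, 1]

res = minimize(cost, x0=[0.1, 0.1], bounds=bounds, constraints=[moment_eq])
forces = res.x * F_max
print("activations:", res.x, "musculo-tendon forces (N):", forces)
```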

  19. 3D Transient Hydraulic Tomography (3DTHT): An Efficient Field and Modeling Method for High-Resolution Estimation of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.

    2012-12-01

    The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software takes as input the observed temporal drawdown behavior from multiple isolated zones in several observation wells during a series of pumping tests conducted from isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m3
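
    As a rough illustration of the geostatistical inversion step (not the full quasi-linear iteration with MODFLOW forward runs and adjoint sensitivities), the sketch below applies a single linearized Bayesian/kriging-type update to a synthetic 1-D log-K field; all matrices are placeholders.

```python
# Minimal sketch of one linearised geostatistical update of the kind iterated
# in quasi-linear inversion: given a sensitivity matrix H (drawdowns vs. log-K),
# a prior covariance Q and noise covariance R, update the log-K field.
# Everything here is synthetic and illustrative, not BHRS data.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_obs = 200, 30
x = np.linspace(0.0, 100.0, n_cells)                 # 1-D cell centres (m)

Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)  # exponential prior covariance
R = 1e-4 * np.eye(n_obs)                             # observation-noise covariance
H = rng.normal(size=(n_obs, n_cells)) / n_cells      # placeholder sensitivities

m_true = rng.multivariate_normal(np.zeros(n_cells), Q)   # synthetic log-K anomaly
d = H @ m_true + rng.normal(scale=1e-2, size=n_obs)      # synthetic drawdown data

m_prior = np.zeros(n_cells)
update = Q @ H.T @ np.linalg.solve(H @ Q @ H.T + R, d - H @ m_prior)
m_hat = m_prior + update
print("posterior RMS error:", np.sqrt(np.mean((m_hat - m_true) ** 2)))
```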

  20. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  1. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  2. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. This is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capture is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. Thus the constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate competent depth perception quality with the proposed system.

  3. Different scenarios for inverse estimation of soil hydraulic parameters from double-ring infiltrometer data using HYDRUS-2D/3D

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Parisa; Ghorbani-Dashtaki, Shoja; Mosaddeghi, Mohammad Reza; Shirani, Hossein; Nodoushan, Ali Reza Mohammadi

    2016-04-01

    In this study, HYDRUS-2D/3D was used to simulate ponded infiltration through double-ring infiltrometers into a hypothetical loamy soil profile. Twelve scenarios of inverse modelling (divided into three groups) were considered for estimation of Mualem-van Genuchten hydraulic parameters. In the first group, simulation was carried out solely using cumulative infiltration data. In the second group, cumulative infiltration data plus water content at h = -330 cm (field capacity) were used as inputs. In the third group, cumulative infiltration data plus water contents at h = -330 cm (field capacity) and h = -15 000 cm (permanent wilting point) were used simultaneously as predictors. The results showed that numerical inverse modelling of the double-ring infiltrometer data provided a reliable alternative method for determining soil hydraulic parameters. The results also indicated that by reducing the number of hydraulic parameters involved in the optimization process, the simulation error is reduced. The best infiltration simulation was the one in which the parameters α, n, and Ks were optimized using the infiltration data and field capacity as inputs. Including field capacity as additional data was important for better optimization/definition of soil hydraulic functions, but using field capacity and permanent wilting point simultaneously as additional data increased the simulation error.

  4. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatments with non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), although the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both under ideal (e.g. spherical objects with uniform radioactivity concentration) and non-ideal (e.g. non-spherical objects with non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available anthropomorphic phantoms and irregular molds generated using 3D-printing technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
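
    A minimal sketch of the two ingredients named above, i.e. a k-means estimate of the local background combined with an adaptive threshold between background and lesion maximum; the 50% threshold fraction and the two-cluster background model are illustrative assumptions, not the values calibrated on the NEMA IQ phantom.

```python
# Minimal sketch: k-means background estimate plus an adaptive threshold between
# background and lesion maximum.  Threshold fraction and cluster count are
# illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def segment_mtv(suv_patch, threshold_fraction=0.5):
    """suv_patch: 3-D array of SUV values around a lesion. Returns a boolean mask."""
    voxels = suv_patch.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(voxels)
    background = km.cluster_centers_.min()            # low-uptake cluster ~ background
    suv_max = suv_patch.max()
    thr = background + threshold_fraction * (suv_max - background)
    return suv_patch >= thr

# usage: mask = segment_mtv(patch); mtv_ml = mask.sum() * voxel_volume_ml
```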

  5. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  6. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656

  7. Mobile gaze tracking system for outdoor walking behavioral studies

    PubMed Central

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R.; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking along straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested in real cases of natural walking, and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments. PMID:26894511
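
    A minimal sketch of the IMU-pair idea: head-on-trunk orientation is the difference between the head- and trunk-mounted IMU outputs (so common environmental drift largely cancels), and gaze relative to the heading direction adds the eye-in-head angle. The angle convention and the assumption of upstream drift correction are simplifications of this note.

```python
# Minimal sketch: gaze azimuth relative to heading from an eye tracker plus a
# head/trunk IMU pair.  Angles are assumed to be yaw values in degrees.
import numpy as np

def gaze_relative_to_heading(eye_in_head_deg, head_imu_yaw_deg, trunk_imu_yaw_deg):
    head_re_trunk = head_imu_yaw_deg - trunk_imu_yaw_deg   # common drift cancels
    gaze = eye_in_head_deg + head_re_trunk
    return (gaze + 180.0) % 360.0 - 180.0                  # wrap to [-180, 180)

# usage: gaze_relative_to_heading(5.0, 32.0, 30.0) -> 7.0 deg to the right
```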

  8. The yearly rate of Relative Thalamic Atrophy (yrRTA): a simple 2D/3D method for estimating deep gray matter atrophy in Multiple Sclerosis

    PubMed Central

    Menéndez-González, Manuel; Salas-Pacheco, José M.; Arias-Carrión, Oscar

    2014-01-01

    Despite a strong correlation with outcome, the measurement of gray matter (GM) atrophy is not being used in daily clinical practice as a prognostic factor or to monitor the effect of treatments in Multiple Sclerosis (MS). This is mainly because the volumetric methods available to date are sophisticated and difficult to implement for routine use in most hospitals. In addition, the meanings of raw results from volumetric studies on regions of interest are not always easy to understand. Thus, there is a great need for a methodology that can be applied in daily clinical practice to estimate GM atrophy in a convenient and comprehensible way. Given that the thalamus is the brain structure most consistently implicated in MS, both in terms of extent of atrophy and in terms of prognostic value, we propose a solution based on this structure. In particular, we propose to compare the extent of thalamic atrophy with the extent of unspecific, global brain atrophy, represented by ventricular enlargement. We name this ratio the "yearly rate of Relative Thalamic Atrophy" (yrRTA). In this report we describe the concept of yrRTA and the guidelines for computing it under 2D and 3D approaches, and explain the rationale behind this method. We have also conducted a very short cross-sectional retrospective study to prove the concept of yrRTA. However, we do not seek to describe here the validity of this parameter since that research is currently being conducted and the results will be addressed in future publications. PMID:25206331

  9. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  10. The Expressive Gaze Model: Using Gaze to Express Emotion

    DTIC Science & Technology

    2010-07-01

    ...and integrate them with the GWT-based movement. Eye movement by itself is highly stereotypical and contains little emotional content. It also func... highly stereotyped eye movements toward a specific target, and vestibular and optokinetic movements, which maintain visual fixation during head... keyframes, we aligned that shift with a "stereotypical" gaze shift. We found the stereotypical gaze shift by averaging 33 emotionally neutral gaze shifts

  11. Saliency-based gaze prediction based on head direction.

    PubMed

    Nakashima, Ryoichi; Fang, Yu; Hatori, Yasuhiro; Hiratani, Akinori; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2015-12-01

    Despite decades of attempts to create a model for predicting gaze locations by using saliency maps, a highly accurate gaze prediction model for general conditions has yet to be devised. In this study, we propose a gaze prediction method based on head direction that can improve the accuracy of any model. We used a probability distribution of eye position based on head direction (static eye-head coordination) and added this information to a model of saliency-based visual attention. Using empirical data on eye and head directions while observers were viewing natural scenes, we estimated a probability distribution of eye position. We then combined the relationship between eye position and head direction with visual saliency to predict gaze locations. The model showed that information on head direction improved the prediction accuracy. Further, there was no difference in the gaze prediction accuracy between the two models using information on head direction with and without eye-head coordination. Therefore, information on head direction is useful for predicting gaze location when it is available. Furthermore, this gaze prediction model can be applied relatively easily to many daily situations such as during walking.
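
    A minimal sketch of the combination described above: the eye-position-given-head-direction distribution is approximated here by a 2D Gaussian centred on the head direction and multiplied pixel-wise with a saliency map. The Gaussian spread and the simple product rule are assumptions; the paper's empirically estimated eye-head distribution is not reproduced.

```python
# Minimal sketch: combine a saliency map with a head-direction prior on eye
# position (here a Gaussian centred on the head direction; spread is assumed).
import numpy as np

def gaze_prediction_map(saliency, head_dir_px, sigma_px=60.0):
    """saliency: 2-D array; head_dir_px: (row, col) of the head direction on the image."""
    rows, cols = np.indices(saliency.shape)
    d2 = (rows - head_dir_px[0]) ** 2 + (cols - head_dir_px[1]) ** 2
    prior = np.exp(-d2 / (2.0 * sigma_px ** 2))          # eye-position prior
    combined = saliency * prior
    return combined / (combined.sum() + 1e-12)           # normalised prediction map

# predicted gaze location: np.unravel_index(np.argmax(pmap), pmap.shape)
```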

  12. Why does gaze enhance mimicry? Placing gaze-mimicry effects in relation to other gaze phenomena.

    PubMed

    Wang, Yin; Hamilton, Antonia F de C

    2014-01-01

    Eye gaze is a powerful signal, which exerts a mixture of arousal, attentional, and social effects on the observer. We recently found a behavioural interaction between eye contact and mimicry where direct gaze rapidly enhanced mimicry of hand movements. Here, we report two detailed investigations of this effect. In Experiment 1, we compared the effects of "direct gaze", "averted gaze", and "gaze to the acting hand" on mimicry and manipulated the sequence of gaze events within a trial. Only direct gaze immediately before the hand action enhanced mimicry. In Experiment 2, we examined the enhancement of mimicry when direct gaze is followed by a "blink" or by "shut eyes", or by "occluded eyes". Enhanced mimicry relative to baseline was seen only in the blink condition. Together, these results suggest that ongoing social engagement is necessary for enhanced mimicry. These findings allow us to place the gaze-enhancement effect in the context of other reported gaze phenomena. We suggest that this effect is similar to previously reported audience effects, but is less similar to ostensive cueing effects. This has important implications for our theories of the relationships between social cues and imitation.

  13. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model and sound wave coupling effects are not currently included.

  14. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gazes in front of large public displays. The aim of our work is to estimate at which area of a display one or more people are looking at a time, independently of the distance and angle to the display as well as of the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The scope of the present paper is on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead-mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.

  15. Estimating a structural bottle neck for eye-brain transfer of visual information from 3D-volumes of the optic nerve head from a commercial OCT device

    NASA Astrophysics Data System (ADS)

    Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.

    2016-03-01

    The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information captured with a Topcon OCT 2000 device for detection of the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness and measurement of this distance is interesting for follow-up of glaucoma.

  16. Spatial distribution of hydrocarbon reservoirs in the West Korea Bay Basin in the northern part of the Yellow Sea, estimated by 3-D gravity forward modelling

    NASA Astrophysics Data System (ADS)

    Choi, Sungchan; Ryu, In-Chang; Götze, H.-J.; Chae, Y.

    2017-01-01

    Although hydrocarbons have been discovered in the West Korea Bay Basin (WKBB), located in the North Korean offshore area, geophysical investigations associated with these hydrocarbon reservoirs are not permitted because of the current geopolitical situation. Interpretation of satellite-derived potential field data can alternatively be used to image the 3-D density distribution in the sedimentary basin associated with hydrocarbon deposits. We interpreted the TRIDENT satellite-derived gravity field data to provide detailed insights into the spatial distribution of sedimentary density structures in the WKBB. We used 3-D forward density modelling for the interpretation that incorporated constraints from existing geological and geophysical information. The gravity data interpretation and the 3-D forward modelling showed that there are two modelled areas in the central subbasin that are characterized by very low density structures, with a maximum density of about 2000 kg m-3, indicating some type of hydrocarbon reservoir. One of the anticipated hydrocarbon reservoirs is located in the southern part of the central subbasin with a volume of about 250 km3 at a depth of about 3000 m in the Cretaceous/Jurassic layer. The other hydrocarbon reservoir should exist in the northern part of the central subbasin, with an average volume of about 300 km3 at a depth of about 2500 m.
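
    For orientation, a minimal forward-modelling sketch with the simplest possible body, a buried sphere of negative density contrast; basin-scale modelling as in the study uses assemblies of prisms or polyhedra with geological constraints, so the geometry and contrast below are purely illustrative.

```python
# Minimal sketch of 3-D gravity forward modelling with a buried sphere:
# g_z = G * M * z / r^3 with M = (4/3) pi R^3 * drho.  Geometry and density
# contrast are illustrative assumptions only.
import numpy as np

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz(x, y, x0, y0, depth, radius, drho):
    """Vertical gravity anomaly (in mGal) of a buried sphere on a surface grid."""
    mass = (4.0 / 3.0) * np.pi * radius ** 3 * drho
    r2 = (x - x0) ** 2 + (y - y0) ** 2 + depth ** 2
    gz = G * mass * depth / r2 ** 1.5       # m/s^2
    return gz * 1e5                         # 1 m/s^2 = 1e5 mGal

x, y = np.meshgrid(np.linspace(-20e3, 20e3, 81), np.linspace(-20e3, 20e3, 81))
# low-density body (-650 kg/m^3 contrast) at 3 km depth, 4 km radius (illustrative)
anomaly = sphere_gz(x, y, 0.0, 0.0, 3000.0, 4000.0, -650.0)
print("peak (most negative) anomaly, mGal:", anomaly.min())
```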

  17. Spatial distribution of Hydrocarbon Reservoirs in the West Korea Bay Basin in the northern part of the Yellow Sea, estimated by 3D gravity forward modeling

    NASA Astrophysics Data System (ADS)

    Choi, Sungchan; Ryu, In-Chang; Götze, H.-J.; Chae, Y.

    2016-10-01

    Although hydrocarbons have been discovered in the West Korea Bay Basin (WKBB), located in the North Korean offshore area, geophysical investigations associated with these hydrocarbon reservoirs are not permitted because of the current geopolitical situation. Interpretation of satellite-derived potential field data can alternatively be used to image the three-dimensional (3D) density distribution in the sedimentary basin associated with hydrocarbon deposits. We interpreted the TRIDENT satellite-derived gravity field data to provide detailed insights into the spatial distribution of sedimentary density structures in the WKBB. We used 3D forward density modeling for the interpretation that incorporated constraints from existing geological and geophysical information. The gravity data interpretation and the 3D forward modeling showed that there are two modeled areas in the central subbasin that are characterized by very low density structures, with a maximum density of about 2000 kg/m3, indicating some type of hydrocarbon reservoir. One of the anticipated hydrocarbon reservoirs is located in the southern part of the central subbasin with a volume of about 250 km3 at a depth of about 3000 m in the Cretaceous/Jurassic layer. The other hydrocarbon reservoir should exist in the northern part of the central subbasin, with an average volume of about 300 km3 at a depth of about 2500 m.

  18. An estimate of the PH3, CH3D, and GeH4 abundances on Jupiter from the Voyager IRIS data at 4.5 microns

    NASA Technical Reports Server (NTRS)

    Drossart, P.; Encrenaz, T.; Combes, M.; Kunde, V.; Hanel, R.

    1982-01-01

    No evidence is found for large-scale phosphine abundance variations over Jovian latitudes between -30 and +30 deg in the PH3, CH3D, and GeH4 abundances derived from the 2100-2250 cm^-1 region of the Voyager 1 IRIS spectra. The PH3/H2 value of (4.5 ± 1.5) × 10^-7 derived from atmospheric regions corresponding to 170-200 K is 0.75 ± 0.25 times the solar value, and suggests that the PH3/H2 ratio on Jupiter decreases with atmospheric pressure upon comparison with other PH3 determinations at 10 microns. In the 200-250 K region, CH3D/H2 and GeH4/H2 ratios of 2.0 × 10^-7 and 1.0 × 10^-9, respectively, are derived to within a factor of 2.0. Assuming a C/H value of 0.001, as derived from Voyager, the CH3D/H2 ratio obtained in this study implies a D/H ratio of 1.8 × 10^-5. This is in agreement with the interstellar medium value.

  19. "Beloved" as an Oppositional Gaze

    ERIC Educational Resources Information Center

    Mao, Weiqiang; Zhang, Mingquan

    2009-01-01

    This paper studies the strategy Morrison adopts in "Beloved" to give voice to black Americans long silenced by the dominant white American culture. Instead of being objects passively accepting their aphasia, black Americans become speaking subjects that are able to cast an oppositional gaze to avert the objectifying gaze of white…

  20. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    PubMed Central

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and it is thought to capture attention. A very early-differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces, as opposed to high-pass or low-pass filtered faces, were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct gaze than to the averted gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  1. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  2. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  3. Getting a Grip on Social Gaze: Control over Others' Gaze Helps Gaze Detection in High-Functioning Autism

    ERIC Educational Resources Information Center

    Dratsch, Thomas; Schwartz, Caroline; Yanev, Kliment; Schilbach, Leonhard; Vogeley, Kai; Bente, Gary

    2013-01-01

    We investigated the influence of control over a social stimulus on the ability to detect direct gaze in high-functioning autism (HFA). In a pilot study, 19 participants with and 19 without HFA were compared on a gaze detection and a gaze setting task. Participants with HFA were less accurate in detecting direct gaze in the detection task, but did…

  4. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  5. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  6. Eye-head coordination during large gaze shifts.

    PubMed

    Tweed, D; Glenn, B; Vilis, T

    1995-02-01

    1. Three-dimensional (3D) eye and head rotations were measured with the use of the magnetic search coil technique in six healthy human subjects as they made large gaze shifts. The aims of this study were 1) to see whether the kinematic rules that constrain eye and head orientations to two degrees of freedom between saccades also hold during movements; 2) to chart the curvature and looping in eye and head trajectories; and 3) to assess whether the timing and paths of eye and head movements are more compatible with a single gaze error command driving both movements, or with two different feedback loops. 2. Static orientations of the eye and head relative to space are known to resemble the distribution that would be generated by a Fick gimbal (a horizontal axis moving on a fixed vertical axis). We show that gaze point trajectories during eye-head gaze shifts fit the Fick gimbal pattern, with horizontal movements following straight "line of latitude" paths and vertical movements curving like lines of longitude. However, horizontal (and to a lesser extent vertical) movements showed direction-dependent looping, with rightward and leftward (and up and down) saccades tracing slightly different paths. Plots of facing direction (the analogue of gaze direction for the head) also showed the latitude/longitude pattern, without looping. In radial saccades, the gaze point initially moved more vertically than the target direction and then curved; head trajectories were straight. 3. The eye and head components of randomly sequenced gaze shifts were not time locked to one another. The head could start moving at any time from slightly before the eye until 200 ms after, and the standard deviation of this interval could be as large as 80 ms. The head continued moving for a long (up to 400 ms) and highly variable time after the gaze error had fallen to zero. For repeated saccades between the same targets, peak eye and head velocities were directly, but very weakly, correlated; fast eye
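
    For reference, the Fick-gimbal convention invoked above (a horizontal-axis rotation nested inside a rotation about a fixed vertical axis, plus torsion about the line of sight) can be written as a product of elementary rotations; the axis labels (x along the line of sight, y interaural, z vertical) are an assumed convention of this note, not taken from the paper.

```latex
% Fick convention (assumed axes: x = line of sight, y = interaural, z = vertical):
% horizontal rotation about the fixed vertical axis, then vertical rotation about
% the rotated horizontal axis, then torsion about the line of sight.
\[
  R_{\mathrm{Fick}}(\theta_H,\theta_V,\theta_T) = R_z(\theta_H)\,R_y(\theta_V)\,R_x(\theta_T),
\]
\[
  R_z(\theta_H)=\begin{pmatrix}\cos\theta_H & -\sin\theta_H & 0\\ \sin\theta_H & \cos\theta_H & 0\\ 0 & 0 & 1\end{pmatrix},\qquad
  R_y(\theta_V)=\begin{pmatrix}\cos\theta_V & 0 & \sin\theta_V\\ 0 & 1 & 0\\ -\sin\theta_V & 0 & \cos\theta_V\end{pmatrix}.
\]
```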

  7. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  8. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
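
    A minimal sketch of the two analyses mentioned above: singular-value decomposition of a small random stand-in for the 15-element imaging operator, and reconstruction of a sparse object by minimum-norm least squares versus a simple l1-regularized (ISTA) scheme; the operator, regularization weight and iteration count are assumptions.

```python
# Minimal sketch: (i) SVD of a small imaging operator, (ii) least-squares versus
# l1-regularised (ISTA) reconstruction of a sparse object.  The random 15 x 400
# operator is a placeholder for the 15-element detection geometry.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_vox = 15, 400
A = rng.normal(size=(n_meas, n_vox)) / np.sqrt(n_meas)    # placeholder imaging operator

# (i) how many usable singular vectors does the operator have?
sv = np.linalg.svd(A, compute_uv=False)
print("singular values above 1% of max:", int(np.sum(sv > 0.01 * sv[0])))

# (ii) sparse object: three point targets
x_true = np.zeros(n_vox)
x_true[rng.choice(n_vox, 3, replace=False)] = 1.0
y = A @ x_true

x_ls = np.linalg.pinv(A) @ y                               # minimum-norm least squares

lam, step = 0.02, 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA parameters (assumed)
x_l1 = np.zeros(n_vox)
for _ in range(500):
    z = x_l1 - step * (A.T @ (A @ x_l1 - y))               # gradient step
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("LS error:", np.linalg.norm(x_ls - x_true),
      " l1 error:", np.linalg.norm(x_l1 - x_true))
```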

  9. Estimation of three-dimensional knee joint movement using bi-plane x-ray fluoroscopy and 3D-CT

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Fujita, Satoshi; Kohno, Takahiro; Suzuki, Masahiko; Miyagi, Jin; Moriya, Hideshige

    2005-04-01

    Acquisition of exact information on three-dimensional knee joint movement is desired in plastic surgery. Conventional X-ray fluoroscopy provides a dynamic but only two-dimensional projected image. On the other hand, three-dimensional CT provides a three-dimensional but only static image. In this paper, a method for acquiring three-dimensional knee joint movement using both bi-plane dynamic X-ray fluoroscopy and static three-dimensional CT is proposed. The basic idea is the use of 2D/3D registration based on digitally reconstructed radiographs (DRR), i.e., virtual projections of the CT data. The idea itself is not new, but the application of bi-plane fluoroscopy to the natural bones of the knee is reported here for the first time. The technique was applied to two volunteers and successful results were obtained. Accuracy evaluations through computer simulation and a phantom experiment with a pig knee joint were also conducted.

  10. Integrating eye tracking and motion sensor on mobile phone for interactive 3D display

    NASA Astrophysics Data System (ADS)

    Sun, Yu-Wei; Chiang, Chen-Kuo; Lai, Shang-Hong

    2013-09-01

    In this paper, we propose an eye tracking and gaze estimation system for mobile phones. We integrate an eye detector with eye-corner, eye-center, and iso-center cues to improve pupil detection. Optical flow information is used for eye tracking. We develop a robust eye tracking system that integrates eye detection and optical-flow based image tracking. In addition, we incorporate the orientation sensor information from the mobile phone to improve the eye tracking for accurate gaze estimation. We demonstrate the accuracy of the proposed eye tracking and gaze estimation system through experiments on public video sequences as well as videos acquired directly from a mobile phone.
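
    A minimal sketch of the optical-flow component of such a pipeline, tracking points inside a previously detected eye region with pyramidal Lucas-Kanade in OpenCV; eye detection, pupil localisation and the fusion with the phone's orientation sensor are outside this fragment, and the tracker parameters are assumptions.

```python
# Minimal sketch: track feature points inside a detected eye region across
# frames with pyramidal Lucas-Kanade optical flow.  Parameters are assumptions.
import cv2
import numpy as np

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def track_eye_points(prev_gray, gray, prev_pts):
    """prev_pts: Nx1x2 float32 array of points inside the eye region."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk_params)
    if next_pts is None or status.sum() == 0:
        return prev_pts, np.zeros(2)                 # tracking lost; keep old points
    good_new = next_pts[status.ravel() == 1]
    good_old = prev_pts[status.ravel() == 1]
    shift = (good_new - good_old).reshape(-1, 2).mean(axis=0)   # mean eye-region motion
    return good_new.reshape(-1, 1, 2), shift

# points are typically re-seeded with cv2.goodFeaturesToTrack when too few survive
```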

  11. Geodesy-based estimates of loading rates on faults beneath the Los Angeles basin with a new, computationally efficient method to model dislocations in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Rollins, C.; Argus, D. F.; Avouac, J. P.; Landry, W.; Barbot, S.

    2015-12-01

    North-south compression across the Los Angeles basin is accommodated by slip on thrust faults beneath the basin that may present significant seismic hazard to Los Angeles. Previous geodesy-based efforts to constrain the distributions and rates of elastic strain accumulation on these faults [Argus et al 2005, 2012] have found that the elastic model used has a first-order impact on the inferred distribution of locking and creep, underlining the need to accurately incorporate the laterally heterogeneous elastic structure and complex fault geometries of the Los Angeles basin into this analysis. We are using Gamra [Landry and Barbot, in prep.], a newly developed adaptive-meshing finite-difference solver, to compute elastostatic Green's functions that incorporate the full 3D regional elastic structure provided by the SCEC Community Velocity Model. Among preliminary results from benchmarks, forward models and inversions, we find that: 1) for a modeled creep source on the edge dislocation geometry from Argus et al [2005], the use of the SCEC CVM material model produces surface velocities in the hanging wall that are up to ~50% faster than those predicted in an elastic halfspace model; 2) in sensitivity-modulated inversions of the Argus et al [2005] GPS velocity field for slip on the same dislocation source, the use of the CVM deepens the inferred locking depth by ≥3 km compared to an elastic halfspace model; 3) when using finite-difference or finite-element models with Dirichlet boundary conditions (except for the free surface) for problems of this scale, it is necessary to set the boundaries at least ~100 km away from any slip source or data point to guarantee convergence within 5% of analytical solutions (a result which may be applicable to other static dislocation modeling problems and which may scale with the size of the area of interest). Here we will present finalized results from inversions of an updated GPS velocity field [Argus et al, AGU 2015] for the inferred

  12. Twin Peaks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The two hills in the distance, approximately one to two kilometers away, have been dubbed the 'Twin Peaks' and are of great interest to Pathfinder scientists as objects of future study. 3D glasses are necessary to identify surface detail. The white areas on the left hill, called the 'Ski Run' by scientists, may have been formed by hydrologic processes.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye'.


  13. 3D and beyond

    NASA Astrophysics Data System (ADS)

    Fung, Y. C.

    1995-05-01

    This conference on physiology and function covers a wide range of subjects, including the vasculature and blood flow, the flow of gas, water, and blood in the lung, the neurological structure and function, the modeling, and the motion and mechanics of organs. Many technologies are discussed. I believe that the list would include a robotic photographer, to hold the optical equipment in a precisely controlled way to obtain the images for the user. Why are 3D images needed? They are to achieve certain objectives through measurements of some objects. For example, in order to improve performance in sports or beauty of a person, we measure the form, dimensions, appearance, and movements.

  14. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  15. Stochastic estimation of biogeochemical parameters from Globcolour ocean colour satellite data in a North Atlantic 3D ocean coupled physical-biogeochemical model

    NASA Astrophysics Data System (ADS)

    Doron, Maéva; Brasseur, Pierre; Brankart, Jean-Michel; Losa, Svetlana N.; Melet, Angélique

    2013-05-01

    Biogeochemical parameters remain a major source of uncertainty in coupled physical-biogeochemical models of the ocean. In a previous study (Doron et al., 2011), a stochastic estimation method was developed to estimate a subset of biogeochemical model parameters from surface phytoplankton observations. The concept was tested in the context of idealised twin experiments performed with a 1/4° resolution model of the North Atlantic ocean. The method was based on ensemble simulations describing the model response to parameter uncertainty. The statistical estimation process relies on nonlinear transformations of the estimated space to cope with the non-Gaussian behaviour of the resulting joint probability distribution of the model state variables and parameters. In the present study, the same method is applied to real ocean colour observations, as delivered by the sensors SeaWiFS, MERIS and MODIS carried aboard the satellites OrbView-2, Envisat and Aqua, respectively. The main outcome of the present experiments is a set of regionalised biogeochemical parameters. The benefit is quantitatively assessed with an objective norm of the misfits, which automatically adapts to the different ecological regions. The chlorophyll concentration simulated by the model with this set of optimally derived parameters is closer to the observations than the reference simulation using uniform values of the parameters. In addition, the interannual and seasonal robustness of the estimated parameters is tested by repeating the same analysis using ocean colour observations from several months and several years. The results show the overall consistency of the ensemble of estimated parameters, which are also compared to the results of an independent study.

  16. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  17. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    An area of rocky terrain near the landing site of the Sagan Memorial Station can be seen in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  18. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for 99mTc-hynic-Tyr3-octreotide Imaging

    PubMed Central

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of 99mTc-hydrazinonicotinamide (hynic)-Tyr3-octreotide as a SPECT radiotracer. 99mTc patient-specific S values and the absorbed doses were calculated with GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of 99mhynic-Tyr3-octreotide. The patient-specific S values calculated by GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation, and by 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results. PMID:27134562
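
    For context, the MIRD schema that these S values feed into relates organ absorbed dose to the time-integrated (cumulated) activity in each source organ; the notation below is the standard one and is not specific to this study.

```latex
% MIRD schema: absorbed dose to target organ r_T from all source organs r_S,
% with \tilde{A} the time-integrated (cumulated) activity in the source.
\[
  D(r_T) = \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S),
  \qquad
  \tilde{A}(r_S) = \int_0^{\infty} A(r_S, t)\,\mathrm{d}t .
\]
```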

  19. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    PubMed

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)Tc-hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained with the MIRDOSE program differed on average by 4.3% for self-irradiation and by 69.6% for cross-irradiation. However, the agreement between the total organ doses calculated by the GATE code and the MIRDOSE program was reasonably good for all patients (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used since it provides more reliable dosimetric results.

  20. In vitro quantification of the performance of model-based mono-planar and bi-planar fluoroscopy for 3D joint kinematics estimation.

    PubMed

    Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita

    2013-03-01

    Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performance of mono- and bi-planar setups to a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle with both mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, owing to bone longitudinal symmetries. The results highlighted that the accuracy of mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational cost, halved segmentation time, and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty, which propagates to relative kinematics differently depending on the setup. To take full advantage of bi-planar analysis, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints relative to mono-planar analysis.

  1. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models; the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the
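
    The mass-property step described above (segment mass, CM, and moments of inertia for a solid of assumed density) can be illustrated with a much cruder stand-in for the fitted B-spline solids: a voxelized occupancy grid of uniform density. The sketch below covers only that downstream calculation, not the authors' interactive fitting or warping.

```python
# Illustrative computation of segment mass, center of mass and inertia tensor
# from a voxelized solid of uniform density (a stand-in for a B-spline solid).
import numpy as np

def mass_properties(occupancy, voxel_size=0.01, density=1000.0):
    """occupancy: 3-D boolean array marking voxels inside the segment.
    voxel_size in metres, density in kg/m^3 (both assumed values)."""
    dv = voxel_size ** 3
    idx = np.argwhere(occupancy)              # voxel indices inside the solid
    pts = (idx + 0.5) * voxel_size            # voxel centers (m)
    mass = density * dv * len(idx)
    cm = pts.mean(axis=0)
    r = pts - cm                              # positions relative to the CM
    dm = density * dv
    I = np.zeros((3, 3))                      # inertia tensor about the CM
    for x, y, z in r:
        I += dm * np.array([[y*y + z*z, -x*y,       -x*z],
                            [-x*y,       x*x + z*z, -y*z],
                            [-x*z,       -y*z,       x*x + y*y]])
    return mass, cm, I

# Toy example: a 0.4 m x 0.1 m x 0.1 m block discretized at 1 cm.
occ = np.ones((40, 10, 10), dtype=bool)
m, cm, I = mass_properties(occ)
print(m, cm.round(3), np.diag(I).round(4))
```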

  2. A Lower Bound on Blowup Rates for the 3D Incompressible Euler Equation and a Single Exponential Beale-Kato-Majda Type Estimate

    NASA Astrophysics Data System (ADS)

    Chen, Thomas; Pavlović, Nataša

    2012-08-01

    We prove a Beale-Kato-Majda type criterion for the loss of regularity for solutions of the incompressible Euler equations in $H^s(\mathbb{R}^3)$, for $s > 5/2$. Instead of double exponential estimates of Beale-Kato-Majda type, we obtain a single exponential bound on $\|u(t)\|_{H^s}$ involving the length parameter introduced by Constantin in (SIAM Rev. 36(1):73-98, 1994). In particular, we derive lower bounds on the blowup rate of such solutions.
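
    For orientation only, the classical Beale-Kato-Majda criterion that this result refines states that a smooth solution loses regularity at a time $T^{*}$ only if the time integral of the sup-norm of the vorticity diverges; the paper's single-exponential bound involving Constantin's length parameter is not reproduced here.

```latex
\int_0^{T^{*}} \|\omega(\cdot,t)\|_{L^\infty(\mathbb{R}^3)} \,\mathrm{d}t = \infty,
\qquad \omega = \nabla \times u .
```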

  3. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r, θ, z) inside the magnet bore. The same conductor geometry that is used to simulate line currents is also used in CAD, making modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  4. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and to estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies using spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. PET images of a rat brain are used to compare, by volume rendering, the segmented shape and area of the cerebral cortex obtained with the K-Means method and with the proposed method. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method does.
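
    A rough sketch of this kind of hybrid intensity segmentation is given below, using scikit-learn's GaussianMixture and KernelDensity; the exact hybridization scheme of the paper is simplified (here the KDE only seeds the mixture means), and the Tanimoto similarity used for evaluation is included.

```python
# Hedged sketch of GMM-based intensity segmentation seeded by a kernel density
# estimate of the intensity histogram, plus Tanimoto similarity for evaluation.
# This simplifies the paper's hybrid method; it is not the authors' code.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

def segment(volume, n_classes=3):
    x = volume.reshape(-1, 1).astype(float)
    # KDE-smoothed intensity density; its local maxima seed the mixture means.
    grid = np.linspace(x.min(), x.max(), 256).reshape(-1, 1)
    kde = KernelDensity(bandwidth=float(x.std()) / 5.0 + 1e-6).fit(x)
    dens = np.exp(kde.score_samples(grid))
    peaks = [i for i in range(1, 255) if dens[i - 1] < dens[i] > dens[i + 1]]
    peaks = sorted(peaks, key=lambda i: dens[i])[-n_classes:]
    init = grid[peaks] if len(peaks) == n_classes else None
    gmm = GaussianMixture(n_components=n_classes, means_init=init)
    return gmm.fit(x).predict(x).reshape(volume.shape)

def tanimoto(a, b):
    """Tanimoto similarity between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy usage: a noisy two-intensity volume with a bright cubic 'target'.
rng = np.random.default_rng(0)
vol = np.zeros((16, 16, 16)); vol[4:12, 4:12, 4:12] = 100.0
labels = segment(vol + rng.normal(0.0, 5.0, vol.shape), n_classes=2)
print(tanimoto(labels == labels[8, 8, 8], vol > 50))
```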

  5. A Computational Framework for Age-at-Death Estimation from the Skeleton: Surface and Outline Analysis of 3D Laser Scans of the Adult Pubic Symphysis.

    PubMed

    Stoyanova, Detelina K; Algee-Hewitt, Bridget F B; Kim, Jieun; Slice, Dennis E

    2017-02-28

    In forensic anthropology, age-at-death estimation typically requires the macroscopic assessment of the skeletal indicator and its association with a phase or score. High subjectivity and error are the recognized disadvantages of this approach, creating a need for alternative tools that enable the objective and mathematically robust assessment of true chronological age. We describe, here, three fully computational, quantitative shape analysis methods and a combinatory approach that make use of three-dimensional laser scans of the pubic symphysis. We report a novel age-related shape measure, focusing on the changes observed in the ventral margin curvature, and refine two former methods, whose measures capture the flatness of the symphyseal surface. We show how we can decrease age-estimation error and improve prior results by combining these outline and surface measures in two multivariate regression models. The presented models produce objective age-estimates that are comparable to current practices with root-mean-square-errors between 13.7 and 16.5 years.
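
    The combinatory step amounts to a multivariate linear regression of known age on the outline and surface shape measures, evaluated by root-mean-square error; a toy sketch with placeholder variable names and synthetic data (not the authors' measures or coefficients) follows.

```python
# Toy multivariate regression of age on shape measures, reporting RMSE.
# The predictor names and all numbers are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 50
ventral_curvature = rng.normal(size=n)      # outline-based measure (placeholder)
surface_flatness_a = rng.normal(size=n)     # surface-based measures (placeholders)
surface_flatness_b = rng.normal(size=n)
age = 40 + 8 * ventral_curvature + 5 * surface_flatness_a + rng.normal(scale=10, size=n)

X = np.column_stack([np.ones(n), ventral_curvature, surface_flatness_a, surface_flatness_b])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
rmse = np.sqrt(np.mean((X @ beta - age) ** 2))
print("coefficients:", beta.round(2), " RMSE (years):", rmse.round(1))
```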

  6. A Comparison of Facial Color Pattern and Gazing Behavior in Canid Species Suggests Gaze Communication in Gray Wolves (Canis lupus)

    PubMed Central

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  7. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  8. An automated image-based method of 3D subject-specific body segment parameter estimation for kinetic analyses of rapid movements.

    PubMed

    Sheets, Alison L; Corazza, Stefano; Andriacchi, Thomas P

    2010-01-01

    Accurate subject-specific body segment parameters (BSPs) are necessary to perform kinetic analyses of human movements with large accelerations, or no external contact forces or moments. A new automated topographical image-based method of estimating segment mass, center of mass (CM) position, and moments of inertia is presented. Body geometry and volume were measured using a laser scanner, then an automated pose and shape registration algorithm segmented the scanned body surface, and identified joint center (JC) positions. Assuming the constant segment densities of Dempster, thigh and shank masses, CM locations, and moments of inertia were estimated for four male subjects with body mass indexes (BMIs) of 19.7-38.2. The subject-specific BSP were compared with those determined using Dempster and Clauser regression equations. The influence of BSP and BMI differences on knee and hip net forces and moments during a running swing phase were quantified for the subjects with the smallest and largest BMIs. Subject-specific BSP for 15 body segments were quickly calculated using the image-based method, and total subject masses were overestimated by 1.7-2.9%. When compared with the Dempster and Clauser methods, image-based and regression estimated thigh BSP varied more than the shank parameters. Thigh masses and hip JC to thigh CM distances were consistently larger, and each transverse moment of inertia was smaller using the image-based method. Because the shank had larger linear and angular accelerations than the thigh during the running swing phase, shank BSP differences had a larger effect on calculated intersegmental forces and moments at the knee joint than thigh BSP differences did at the hip. It was the net knee kinetic differences caused by the shank BSP differences that were the largest contributors to the hip variations. Finally, BSP differences produced larger kinetic differences for the subject with larger segment masses, suggesting that parameter accuracy is more

  9. An initial study on the estimation of time-varying volumetric treatment images and 3D tumor localization from single MV cine EPID images

    SciTech Connect

    Mishra, Pankaj Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.; Li, Ruijiang

    2014-08-15

    Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal imaging device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model
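
    The core of the motion model, PCA of the DVFs followed by reconstruction of a new DVF from updated eigen-coefficients, can be sketched as follows with synthetic data; the deformable registration, image warping, and EPID-driven cost-function optimization are omitted.

```python
# Schematic PCA motion model: decompose 4DCT-derived DVFs, then rebuild a DVF
# from new eigen-coefficients. Data here are random stand-ins for real DVFs.
import numpy as np

def build_motion_model(dvfs, n_modes=3):
    """dvfs: array of shape (n_phases, n_voxels * 3) from deformable registration."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    eigvecs = vt[:n_modes]                  # spatial modes (eigenvectors)
    coeffs = centered @ eigvecs.T           # per-phase eigen-coefficients
    return mean, eigvecs, coeffs

def dvf_from_coefficients(mean, eigvecs, w):
    """Reconstruct a DVF for a new eigen-coefficient vector w."""
    return mean + w @ eigvecs

# Example: 10 breathing phases, tiny 'volume' of 100 voxels (300 DVF components).
dvfs = np.random.default_rng(1).normal(size=(10, 300))
mean, modes, coeffs = build_motion_model(dvfs)
new_dvf = dvf_from_coefficients(mean, modes, coeffs[0] * 1.1)  # tuned coefficients
```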

  10. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 13.1

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  17. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  18. Estimation of the maximum allowable loading amount of COD in Luoyuan Bay by a 3-D COD transport and transformation model

    NASA Astrophysics Data System (ADS)

    Wu, Jialin; Li, Keqiang; Shi, Xiaoyong; Liang, Shengkang; Han, Xiurong; Ma, Qimin; Wang, Xiulin

    2014-08-01

    The rapid economic and social developments in the Luoyuan and Lianjiang counties of Fujian Province, China, raise environmental and ecosystem issues. The unusual phytoplankton blooms and eutrophication, for example, have increased in severity in Luoyuan Bay (LB). The constant increase of nutrient loads has largely caused the environmental degradation in LB. Several countermeasures have been implemented to solve these environmental problems. The most effective of these strategies is the reduction of pollutant loadings into the sea in accordance with total pollutant load control (TPLC) plans. A combined three-dimensional hydrodynamic transport-transformation model was constructed to estimate the marine environmental capacity of chemical oxygen demand (COD). The maximum allowable loadings for each discharge unit in LB were then calculated from the simulation results. The simulation results indicated that the environmental capacity of COD is approximately 11×10^4 t year^-1 when the water quality complies with the marine functional zoning standards for LB. A pollutant reduction scheme to diminish the present levels of mariculture- and domestic-based COD loadings is based on the estimated marine COD environmental capacity. The obtained values imply that the LB waters could comply with the targeted water quality criteria. To meet the revised marine functional zoning standards, discharge loadings from discharge units 1 and 11 should be reduced to 996 and 3236 t year^-1, respectively.

  19. To Gaze or Not to Gaze: Visual Communication in Eastern Zaire. Sociolinguistic Working Paper Number 87.

    ERIC Educational Resources Information Center

    Blakely, Thomas D.

    The nature of gazing at someone or something, as a form of communication among the Bahemba people in eastern Zaire, is analyzed across a range of situations. Variations of steady gazing, a common eye contact routine, are outlined, including: (1) negative non-gazing or glance routines, especially in situations in which gazing would ordinarily…

  20. Smoothing 3-D Data for Torpedo Paths

    DTIC Science & Technology

    1978-05-01

    parametric estimation, consider data collected at 200 sequential observation times (e.g., 800 to 1,000 for the 3-D data used in this section). Samples...sample when the fitted curve is a straight line (refer to Appendix B). (e) Parametric estimation could also be modified to delete some samples (e.g

  1. Intraoral 3D scanner

    NASA Astrophysics Data System (ADS)

    Kühmstedt, Peter; Bräuer-Burchardt, Christian; Munkelt, Christoph; Heinze, Matthias; Palme, Martin; Schmidt, Ingo; Hintersehr, Josef; Notni, Gunther

    2007-09-01

    Here a new set-up of a 3D scanning system for CAD/CAM in the dental industry is proposed. The system is designed for direct scanning of the dental preparations within the mouth. The measuring process is based on a phase correlation technique in combination with fast fringe projection in a stereo arrangement. The novelty in the approach is characterized by the following features: A phase correlation between the phase values of the images of two cameras is used for the co-ordinate calculation. This is in contrast to using only phase values (phasogrammetry) or classical triangulation (phase values and camera image co-ordinate values) to determine the co-ordinates. The main advantage of the method is that the absolute value of the phase at each point does not directly determine the co-ordinate. Thus errors in the determination of the co-ordinates are prevented. Furthermore, using the epipolar geometry of the stereo-like arrangement, the phase unwrapping problem of fringe analysis can be solved. The endoscope-like measurement system contains one projection and two camera channels for illumination and observation of the object, respectively. The new system has a measurement field of nearly 25 mm × 15 mm. The user can measure two or three teeth at a time, so the system can be used for scanning anything from a single tooth up to bridge preparations. In the paper the first realization of the intraoral scanner is described.

  2. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  3. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  4. Piecewise-rigid 2D-3D registration for pose estimation of snake-like manipulator using an intraoperative x-ray projection

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Kutzer, M. D.; Taylor, R. H.; Armand, M.

    2014-03-01

    Background: Snake-like dexterous manipulators may offer significant advantages in minimally-invasive surgery in areas not reachable with conventional tools. Precise control of a wire-driven manipulator is challenging due to factors such as cable deformation and unknown internal (cable friction) and external forces, thus requiring intraoperative correction of the calibration by determining the actual pose of the manipulator. Method: A method for simultaneously estimating the pose and kinematic configuration of a piecewise-rigid object such as a snake-like manipulator from a single x-ray projection is presented. The method parameterizes the kinematics using a small number of variables (e.g., 5) and optimizes them simultaneously with the 6 degree-of-freedom pose parameters of the base link, using an image similarity between digitally reconstructed radiographs (DRRs) of the manipulator's attenuation model and the real x-ray projection. Result: Simulation studies assumed various geometric magnifications (1.2-2.6) and out-of-plane angulations (0°-90°) in a hip osteolysis treatment scenario and demonstrated that the median joint angle error was 0.04° (for 2.0 magnification, +/-10° out-of-plane rotation). Average computation time was 57.6 sec with 82,953 function evaluations on a mid-range GPU. The joint angle error remained lower than 0.07° while the out-of-plane rotation was 0°-60°. An experiment using video images of a real manipulator demonstrated a similar trend to the simulation study, except for a slightly larger error around the tip, attributed to the accumulation of errors induced by deformation around each joint that is not modeled by a simple pin joint. Conclusions: The proposed approach enables high precision tracking of a piecewise-rigid object (i.e., a series of connected rigid structures) using a single projection image by incorporating prior knowledge about the shape and kinematic behavior of the object (e.g., each rigid structure connected by a pin joint parameterized by a
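
    A schematic of this simultaneous pose-and-configuration search is sketched below: a small parameter vector (6-DOF base pose plus joint variables) is optimized against an image-similarity cost between a simulated projection and the measured x-ray. The DRR renderer is a placeholder supplied by the caller, and a generic derivative-free optimizer stands in for the GPU-accelerated pipeline described above.

```python
# Hedged sketch of 2D-3D registration by optimizing pose + joint variables
# against an image similarity between a rendered projection and a measurement.
# render_drr is a hypothetical callable; the real system's renderer and
# similarity metric are not reproduced here.
import numpy as np
from scipy.optimize import minimize

def normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def cost(params, measured_image, render_drr):
    base_pose, joint_vars = params[:6], params[6:]
    drr = render_drr(base_pose, joint_vars)          # placeholder renderer
    return -normalized_cross_correlation(drr, measured_image)

def register(measured_image, render_drr, n_joint_vars=5):
    x0 = np.zeros(6 + n_joint_vars)                  # initial pose + joint variables
    res = minimize(cost, x0, args=(measured_image, render_drr), method="Powell")
    return res.x[:6], res.x[6:]

# Trivial usage with a stand-in renderer that ignores the parameters.
target = np.random.default_rng(0).random((64, 64))
pose, joints = register(target, lambda p, q: target)
```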

  5. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display were a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.
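
    The per-phone pose estimate from known marker positions is essentially a perspective-n-point (PnP) problem; a hedged sketch using OpenCV is shown below. The marker detection step, the actual marker layout, and the camera intrinsics are all assumed here and would differ in the real prototype.

```python
# Sketch of camera (phone) pose from known 3-D marker corners and their
# detected 2-D image positions via a PnP solve. Marker detection itself and
# the intrinsic matrix K are assumed inputs, not the paper's actual values.
import numpy as np
import cv2

def phone_pose(marker_points_3d, marker_points_2d, camera_matrix):
    """Returns the rotation matrix and translation vector of the phone camera."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float32),
        np.asarray(marker_points_2d, dtype=np.float32),
        camera_matrix, distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP solve failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# Hypothetical phone-camera intrinsics (focal length and principal point in pixels).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
```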

  6. Estimates of mercury flux into the United States from non-local and global sources: results from a 3-D CTM simulation

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Kotamarthi, V. R.; Streets, D.; Kim, M.; Crist, K.

    2008-11-01

    The sensitivity of Hg concentration and deposition in the United States to emissions in China was investigated by using a global chemical transport model: Model for Ozone and Related Chemical Tracers (MOZART). Two forms of gaseous Hg were included in the model: elemental Hg (HG(0)) and oxidized or reactive Hg (HGO). We simulated three different emission scenarios to evaluate the model's sensitivity. One scenario included no emissions from China, while the others were based on different estimates of Hg emissions in China. The results indicated, in general, that when Hg emissions were included, HG(0) concentrations increased both locally and globally. Increases in Hg concentrations in the United States were greatest during spring and summer, by as much as 7%. Ratios of calculated concentrations of Hg and CO near the source region in eastern Asia agreed well with ratios based on measurements. Increases similar to those observed for HG(0) were also calculated for deposition of HGO. Calculated increases in wet and dry deposition in the United States were 5-7% and 5-9%, respectively. The results indicate that long-range transcontinental transport of Hg has a non-negligible impact on Hg deposition levels in the United States.

  7. Teachers' Responses to Children's Eye Gaze

    ERIC Educational Resources Information Center

    Doherty-Sneddon, Gwyneth; Phelps, Fiona G.

    2007-01-01

    When asked questions, children often avert their gaze. Furthermore, the frequency of such gaze aversion (GA) is related to the difficulty of cognitive processing, suggesting that GA is a good indicator of children's thinking and comprehension. However, little is known about how teachers detect and interpret such gaze signals. In Study 1 teaching…

  8. Eye Gaze in Creative Sign Language

    ERIC Educational Resources Information Center

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  9. Gaze Direction Detection in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Forgeot d'Arc, Baudouin; Delorme, Richard; Zalla, Tiziana; Lefebvre, Aline; Amsellem, Frédérique; Moukawane, Sanaa; Letellier, Laurence; Leboyer, Marion; Mouren, Marie-Christine; Ramus, Franck

    2017-01-01

    Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of…

  10. Orienting to Eye Gaze and Face Processing

    ERIC Educational Resources Information Center

    Tipples, Jason

    2005-01-01

    The author conducted 7 experiments to examine possible interactions between orienting to eye gaze and specific forms of face processing. Participants classified a letter following either an upright or inverted face with averted, uninformative eye gaze. Eye gaze orienting effects were recorded for upright and inverted faces, irrespective of whether…

  11. A joint data assimilation system (Tan-Tracker) to simultaneously estimate surface CO2 fluxes and 3-D atmospheric CO2 concentrations from observations

    NASA Astrophysics Data System (ADS)

    Tian, X.; Xie, Z.; Liu, Y.; Cai, Z.; Fu, Y.; Zhang, H.; Feng, L.

    2013-09-01

    To quantitatively estimate CO2 surface fluxes (CFs) from atmospheric observations, a joint data assimilation system ("Tan-Tracker") is developed by incorporating a joint data assimilation framework into the GEOS-Chem atmospheric transport model. In Tan-Tracker, we choose an identity operator as the CF dynamical model to describe the CFs' evolution, which constitutes an augmented dynamical model together with the GEOS-Chem atmospheric transport model. In this case, the large-scale vector made up of CFs and CO2 concentrations is taken as the prognostic variable for the augmented dynamical model. Both CO2 concentrations and CFs are thus jointly assimilated using atmospheric observations (e.g., in-situ observations or satellite measurements). In contrast, in traditional joint data assimilation frameworks, CFs are usually treated as model parameters and form a state-parameter augmented vector jointly with CO2 concentrations. The absence of a CF dynamical model wastes much of the observed information, since any improvement to the CFs achieved in the current assimilation cycle cannot be carried into the next one. Observing system simulation experiments (OSSEs) are carefully designed to evaluate the Tan-Tracker system in comparison to its simplified version (referred to as TT-S) with only CFs taken as the prognostic variables. It is found that our Tan-Tracker system outperforms TT-S, with higher assimilation precision for both CO2 concentrations and CO2 fluxes, mainly due to the simultaneous assimilation of CO2 concentrations and CFs in the Tan-Tracker data assimilation system.
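
    A toy illustration of the augmented-state idea follows, with a stand-in linear "transport" operator and a stochastic ensemble Kalman update in place of GEOS-Chem and the actual Tan-Tracker assimilation scheme; every operator and number here is synthetic.

```python
# Toy augmented-state assimilation: the state stacks surface fluxes (persisted
# with an identity model) and concentrations, and both are updated from
# concentration observations. Not the Tan-Tracker implementation.
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_conc, n_ens = 4, 6, 50

def transport(conc, flux):
    """Toy linear 'transport': concentrations decay and pick up a flux imprint."""
    A = np.linspace(0.5, 1.5, n_conc)[:, None] * np.ones((n_conc, n_flux)) / n_flux
    return 0.9 * conc + A @ flux

# Ensemble of augmented states x = [flux, conc].
flux = rng.normal(1.0, 0.3, size=(n_ens, n_flux))
conc = rng.normal(400.0, 2.0, size=(n_ens, n_conc))

# Forecast: identity model for the fluxes, transport model for the concentrations.
conc = np.array([transport(c, f) for c, f in zip(conc, flux)])
X = np.hstack([flux, conc])                         # (n_ens, n_flux + n_conc)

# Analysis (stochastic EnKF) using observations of all concentrations.
H = np.hstack([np.zeros((n_conc, n_flux)), np.eye(n_conc)])
y = rng.normal(362.0, 0.5, size=n_conc)             # synthetic observations
R = 0.25 * np.eye(n_conc)
anom = X - X.mean(axis=0)
P = anom.T @ anom / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
obs_noise = rng.multivariate_normal(np.zeros(n_conc), R, size=n_ens)
X = X + (y + obs_noise - X @ H.T) @ K.T
flux, conc = X[:, :n_flux], X[:, n_flux:]           # analysed fluxes and concentrations
print(flux.mean(axis=0).round(3))
```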

  12. Estimating the relationship between urban 3D morphology and land surface temperature using airborne LiDAR and Landsat-8 Thermal Infrared Sensor data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.

    2015-12-01

    Urban forests are known for mitigating the urban heat island effect and heat-related health issues by reducing air and surface temperature. Beyond the amount of canopy area, however, little is known about what kinds of spatial patterns and structures of urban forests best contribute to reducing temperatures and mitigating urban heat effects. Previous studies attempted to find the relationship between the land surface temperature and various indicators of vegetation abundance using remotely sensed data, but the majority of those studies relied on two-dimensional, area-based metrics such as tree canopy cover, impervious surface area, and the Normalized Difference Vegetation Index. This study investigates the relationship between the three-dimensional spatial structure of urban forests and urban surface temperature, focusing on vertical variance. We use a Landsat-8 Thermal Infrared Sensor image (acquired on July 24, 2014) to estimate the land surface temperature of the City of Sacramento, CA. We extract the height and volume of urban features (both vegetation and non-vegetation) using airborne LiDAR (Light Detection and Ranging) and high spatial resolution aerial imagery. Using regression analysis, we apply an empirical approach to relate the land surface temperature to different sets of variables that describe the spatial patterns and structures of various urban features, including trees. Our analysis demonstrates that incorporating vertical variance parameters improves the accuracy of the model. The results suggest that urban tree planting is an effective and viable solution for mitigating urban heat, by increasing the vertical variance of the urban surface as well as the evaporative cooling effect.

  13. Auto convergence for stereoscopic 3D cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer-generated content is typically viewed at a close distance, which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether or not the maximum and minimum disparity limits would be exceeded after auto convergence. If the limits would be exceeded, further adjustments are made to satisfy the safety limits. Finally, desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It is tested using an OMAP4 embedded prototype stereo 3-D camera. It significantly improves 3-D viewing comfort.
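
    A sketch of the disparity-from-vertical-projections step and the clamped convergence shift is given below; the per-frame convergence-point selection logic and the actual comfort thresholds are simplified placeholders.

```python
# Sketch of global disparity estimation from vertical projections (column sums)
# of the left and right frames, followed by a shift clamped to disparity limits.
# Thresholds and the convergence-point selection are simplified placeholders.
import numpy as np

def global_disparity(left, right, max_shift=64):
    pl = left.sum(axis=0).astype(float)     # vertical projection of the left frame
    pr = right.sum(axis=0).astype(float)
    pl -= pl.mean(); pr -= pr.mean()
    best, best_score = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        a = pl[max(0, d):len(pl) + min(0, d)]
        b = pr[max(0, -d):len(pr) + min(0, -d)]
        score = float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if score > best_score:
            best, best_score = d, score
    return best

def convergence_shift(current_disparity, target_disparity=0, limits=(-30, 30)):
    """Horizontal shift needed, clamped so the converged disparity stays in limits."""
    lo, hi = limits
    shift = target_disparity - current_disparity
    return int(np.clip(shift, lo - current_disparity, hi - current_disparity))

# Toy usage with a synthetic stereo pair offset by 7 pixels.
rng = np.random.default_rng(0)
left = rng.random((120, 200))
right = np.roll(left, -7, axis=1)
d = global_disparity(left, right)
print(d, convergence_shift(d))
```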

  14. Auto-focusing method for remote gaze tracking camera

    NASA Astrophysics Data System (ADS)

    Lee, Won Oh; Lee, Hyeon Chang; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2012-06-01

    Gaze tracking determines what a user is looking at; the key challenge is to obtain well-focused eye images. This is not easy because the human eye is very small, whereas the required image resolution must be high enough for accurate detection of the pupil center. In addition, capturing a user's eye image with a remote gaze tracking system within a large working volume at a long Z distance requires a panning/tilting mechanism with a zoom lens, which makes it more difficult to acquire focused eye images. To solve this problem, a new auto-focusing method for remote gaze tracking is proposed. The proposed approach is novel in the following four ways: First, it is the first research on an auto-focusing method for a remote gaze tracking system. Second, by using a user-dependent calibration at the initial stage, it overcomes the weakness of previous methods that estimate the Z distance between the user and the camera from the facial width in the captured image, which varies from person to person. Third, the parameters of the modeled formula for estimating the Z distance are adaptively updated using the least squares regression method; therefore, the focus becomes more accurate over time. Fourth, the relationship between the parameters and the face width is fitted locally according to the Z distance instead of by global fitting, which can enhance the accuracy of Z distance estimation. The results of an experiment with 10,000 images of 10 persons showed that the mean absolute error between the ground-truth Z distance measured by a Polhemus Patriot device and that estimated by the proposed method was 4.84 cm. A total of 95.61% of the images obtained by the proposed method were focused and could be used for gaze detection.
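
    A minimal sketch of the Z-distance model this method builds on is shown below, assuming (as a pinhole-camera simplification) that Z is roughly inversely proportional to the facial width in pixels and refining the fit by least squares as calibration samples accumulate; the local per-distance fitting from the abstract is omitted and all numbers are hypothetical.

```python
# Sketch of Z-distance estimation from facial width with a least-squares fit
# that is refined as user-dependent calibration samples accumulate.
# The inverse-width model and all sample values are assumptions for illustration.
import numpy as np

class ZDistanceEstimator:
    def __init__(self):
        self.samples = []             # (1 / face_width_px, measured_Z_cm) pairs
        self.a, self.b = None, None   # model: Z ~ a * (1 / width) + b

    def add_calibration_sample(self, face_width_px, z_cm):
        self.samples.append((1.0 / face_width_px, z_cm))
        if len(self.samples) >= 2:
            x = np.array([s[0] for s in self.samples])
            y = np.array([s[1] for s in self.samples])
            A = np.column_stack([x, np.ones_like(x)])
            (self.a, self.b), *_ = np.linalg.lstsq(A, y, rcond=None)

    def estimate(self, face_width_px):
        return self.a / face_width_px + self.b

est = ZDistanceEstimator()
est.add_calibration_sample(220, 60.0)    # hypothetical calibration pairs
est.add_calibration_sample(130, 100.0)
print(round(est.estimate(170), 1), "cm")
```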

  15. Design of monocular multiview stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2001-06-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have developed a 3D HMD system using a monocular stereoscopic display. This paper shows that the 3D vision system using the monocular stereoscopic display and a capturing camera builds a 3D virtual space for telemanipulation using a captured real 3D image. In this paper, we propose the monocular stereoscopic 3D display and capturing camera for a telemanipulation system. In addition, we describe the result of depth estimation using multi-focus retinal images.

  16. Attentional capture by change in direct gaze.

    PubMed

    Yokoyama, Takemasa; Ishibashi, Kazuya; Hongoh, Yuki; Kita, Shinichi

    2011-01-01

    In three experiments, we examined whether change in direct gaze was better at capturing visuospatial attention than non-direct gaze change. Also, change in direct gaze can be categorised into two types: 'look toward', which means gaze changing to look toward observers, and 'look away', which means gaze changing to look away from observers. Thus, we also investigated which type of change in direct gaze was more effective in capturing visuospatial attention. Each experiment employed a change-detection task, and we compared detection accuracy between 'look away', 'look toward', and non-direct gaze-change conditions. In experiment 1, we found detection advantage for change in direct gaze relative to non-direct gaze change, and for 'look toward' compared with 'look away'. In experiment 2, we conducted control experiments to exclude possibilities of simple motion detection and geometrical factors of eyes, and confirm detection advantage in experiment 1 only occurred when the stimuli were processed as faces and gazes. In experiment 3, we manipulated the head deviation, but the results in experiment 1 persisted despite changes in head orientation. The findings establish that individuals are sensitive to change in direct relative to non-direct gaze change, and 'look toward' compared with 'look away'.

  17. SB3D User Manual, Santa Barbara 3D Radiative Transfer Model

    SciTech Connect

    O'Hirok, William

    1999-01-01

    SB3D is a three-dimensional atmospheric and oceanic radiative transfer model for the Solar spectrum. The microphysics employed in the model are the same as those used in the model SBDART. It is assumed that the user of SB3D is familiar with SBDART and IDL. SB3D differs from SBDART in that computations are conducted on media in three dimensions rather than a single column (i.e. plane-parallel), and a stochastic method (Monte Carlo) is employed instead of a numerical approach (Discrete Ordinates) for estimating a solution to the radiative transfer equation. Because of these two differences between SB3D and SBDART, the input and running of SB3D are more unwieldy and require compromises between model performance and computational expense. Hence, there is no one correct method for running the model, and the user must develop a sense of the proper input and configuration of the model.

  18. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  19. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  20. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
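
    For orientation, the spherical Fourier-Bessel decomposition underlying the wavelet expands a 3D field on spherical harmonics and spherical Bessel functions; in one common normalization (conventions vary between papers, so this is indicative rather than the paper's exact definition):

```latex
f(r,\theta,\varphi)=\sqrt{\tfrac{2}{\pi}}\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}
\int_0^\infty \mathrm{d}k\,k\,f_{\ell m}(k)\,j_\ell(kr)\,Y_{\ell m}(\theta,\varphi),
\qquad
f_{\ell m}(k)=\sqrt{\tfrac{2}{\pi}}\int \mathrm{d}^3r\, f(\mathbf{r})\,k\,j_\ell(kr)\,Y_{\ell m}^{*}(\theta,\varphi).
```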

  1. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  2. Infants understand the referential nature of human gaze but not robot gaze.

    PubMed

    Okumura, Yuko; Kanakogi, Yasuhiro; Kanda, Takayuki; Ishiguro, Hiroshi; Itakura, Shoji

    2013-09-01

    Infants can acquire much information by following the gaze direction of others. This type of social learning is underpinned by the ability to understand the relationship between gaze direction and a referent object (i.e., the referential nature of gaze). However, it is unknown whether human gaze is a privileged cue for information that infants use. Comparing human gaze with nonhuman (robot) gaze, we investigated whether infants' understanding of the referential nature of looking is restricted to human gaze. In the current study, we developed a novel task that measured by eye-tracking infants' anticipation of an object from observing an agent's gaze shift. Results revealed that although 10- and 12-month-olds followed the gaze direction of both a human and a robot, only 12-month-olds predicted the appearance of objects from referential gaze information when the agent was the human. Such a prediction for objects reflects an understanding of referential gaze. Our study demonstrates that by 12 months of age, infants hold referential expectations specifically from the gaze shift of humans. These specific expectations from human gaze may enable infants to acquire various information that others convey in social learning and social interaction.

  3. SACR ADVance 3-D Cartesian Cloud Cover (SACR-ADV-3D3C) product

    DOE Data Explorer

    Meng Wang, Tami Toto, Eugene Clothiaux, Katia Lamer, Mariko Oue

    2017-03-08

    SACR-ADV-3D3C remaps the outputs of SACRCORR for cross-wind range-height indicator (CW-RHI) scans to a Cartesian grid and reports a reflectivity CFAD and a best-estimate domain-averaged cloud fraction. The final output is a single NetCDF file containing all of the aforementioned corrected radar moments remapped onto a 3-D Cartesian grid, the SACR reflectivity CFAD, a profile of best-estimate cloud fraction, a profile of maximum observable x-domain size (xmax), a profile of time-to-horizontal-distance estimates, and a profile of minimum observable reflectivity (dBZmin).

  4. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  5. 3D Buckligami: Digital Matter

    NASA Astrophysics Data System (ADS)

    van Hecke, Martin; de Reus, Koen; Florijn, Bastiaan; Coulais, Corentin

    2014-03-01

    We present a class of elastic structures which exhibit collective buckling in 3D, and create these by a 3D printing/moulding technique. Our structures consist of cubic lattice of anisotropic unit cells, and we show that their mechanical properties are programmable via the orientation of these unit cells.

  6. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  7. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  8. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  9. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  10. Crystal ball gazing

    NASA Technical Reports Server (NTRS)

    Gettys, Jim

    1992-01-01

    Over the last seven years, the CPU on my desk has increased speed by two orders of magnitude, from around 1 MIP to more than 100 MIPS; more important is that it is about as fast as any uniprocessor of any type available for any price, for compute-bound problems. Memory on the system is also about 100 times as big, while disk is only about 10 times as big. Local network and I/O performance have increased greatly, though not quite at the same rate as processor speed. More important, I will argue, is that the CPU's address space is 64 bits, rather than 32 bits, allowing us to rethink some time-honored presumptions. The Internet has gone from a few hundred machines to a million, has grown to span the entire globe, and wide-area networks are now becoming commercial services. 'PCs' are now real computers, bringing what was top-of-the-line computing capability to the masses only a few years behind the leading edge. So even a year or two from now, we can anticipate commonplace desktop machines running at speeds of hundreds of MIPS, with main memories in the hundreds of megabytes to a gigabyte, able to draw millions of vectors/second, and all capable of some reasonable 3D graphics. And only a few years later, this will be the $1500 PC. So the 1990s certainly bring: 64-bit processors becoming standard; BIP/BFLOP-class uniprocessors; large-scale multiprocessors for special-purpose applications; I/O as the most significant computer engineering problem; hierarchical data servers in everyday use; routine access to archived data around the world; and what else? What do systems such as those we will have this decade imply for those building data analysis systems today? Many of the presumptions of the 1970s and 1980s need to be reexamined in the light of 1990s technology.

  11. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  12. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  13. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  14. Multi-view and 3D deformable part models.

    PubMed

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations have been neglected and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time being robustly matched to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than the 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  15. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.
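
    The claim above is that a single 3D shape feature vector supports several demographic predictions at once. The hedged sketch below only illustrates that pattern: the same per-face feature matrix feeds independent predictors for different attributes. The random features, labels and scikit-learn models are stand-ins, not the paper's actual descriptors or classifiers.

        # Hedged sketch of the "one shape feature vector, several attributes"
        # pattern: the same per-face feature matrix feeds independent models.
        # Features and labels are random stand-ins for the paper's 3D shape
        # features and annotated attributes.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 64))          # per-face shape feature vectors (placeholder)
        age = rng.uniform(18, 80, size=200)     # placeholder age labels
        gender = rng.integers(0, 2, size=200)   # placeholder binary labels

        age_model = Ridge().fit(X, age)         # regression for age estimation
        gender_model = SVC().fit(X, gender)     # classification for gender
        print(age_model.predict(X[:3]), gender_model.predict(X[:3]))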

  18. Tracking earthquake source evolution in 3-D

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Gorbatov, A.; Spiliopoulos, S.

    2014-08-01

    Starting from the hypocentre, the point of initiation of seismic energy, we seek to estimate the subsequent trajectory of the points of emission of high-frequency energy in 3-D, which we term the `evocentres'. We track these evocentres as a function of time by energy stacking for putative points on a 3-D grid around the hypocentre that is expanded as time progresses, selecting the location of maximum energy release as a function of time. The spatial resolution in the neighbourhood of a target point can be simply estimated by spatial mapping using the properties of isochrons from the stations. The mapping of a seismogram segment to space is by inverse slowness, and thus more distant stations have a broader spatial contribution. As in hypocentral estimation, the inclusion of a wide azimuthal distribution of stations significantly enhances 3-D capability. We illustrate this approach to tracking source evolution in 3-D by considering two major earthquakes, the 2007 Mw 8.1 Solomon Islands event that ruptured across a plate boundary and the 2013 Mw 8.3 event 610 km beneath the Sea of Okhotsk. In each case we are able to provide estimates of the evolution of high-frequency energy that tally well with alternative schemes, but also to provide information on the 3-D characteristics that is not available from backprojection from distant networks. We are able to demonstrate that the major characteristics of event rupture can be captured using just a few azimuthally distributed stations, which opens the opportunity for the approach to be used in a rapid mode immediately after a major event to provide guidance for, for example, tsunami warning for megathrust events.
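
    The stacking step described above lends itself to a compact illustration. The hedged sketch below sums time-shifted station energy envelopes over candidate 3-D grid nodes and picks the node with the maximum stacked energy in a time window; the envelopes, travel times and window handling are synthetic stand-ins rather than the authors' processing chain.

        # Hedged sketch of evocentre tracking by energy stacking: for each
        # candidate 3-D grid node, sum station energy envelopes shifted by the
        # predicted travel time, then pick the node with maximum stacked energy
        # in each time window. Inputs are assumed to be prepared elsewhere.
        import numpy as np

        def stack_energy(envelopes, travel_times, dt, t_window):
            """envelopes: (n_sta, n_samp); travel_times: (n_nodes, n_sta) in seconds."""
            n_sta, n_samp = envelopes.shape
            n_nodes = travel_times.shape[0]
            i0 = int(t_window[0] / dt)
            i1 = int(t_window[1] / dt)
            stacked = np.zeros(n_nodes)
            for n in range(n_nodes):
                for s in range(n_sta):
                    shift = int(round(travel_times[n, s] / dt))
                    lo, hi = i0 + shift, min(i1 + shift, n_samp)
                    if lo < n_samp:
                        stacked[n] += envelopes[s, lo:hi].sum()
            return stacked

        # The evocentre for this window is the grid node maximising the stack:
        # best_node = np.argmax(stack_energy(envelopes, travel_times, dt, (t, t + 5.0)))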

  19. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

    Patient specific pretreatment measurement for IMRT and VMAT QA should preferably give information with a high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans with a difference between measured and calculated dose distributions that exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements, and the results of the measurement evaluation need a clinical interpretation. There are a number of commercial dosimetry systems designed for pretreatment IMRT QA measurements. 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIUS® 1500 (PTW), 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos) and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDoseTM (Sun Nuclear) and Dosimetry CheckTM (Math Resolutions) are available. None of those dosimetry systems can measure the 3D dose distribution with a high resolution (full 3D dose distribution). Those systems can be called quasi 3D dosimetry systems. To be able to estimate the delivered dose in full 3D the user is dependent on a calculation algorithm in the software of the dosimetry system. All the vendors of the dosimetry systems mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analyses of the difference between measured and calculated dose distributions in DVHs of the structures of clinical interest, which facilitates the clinical interpretation and is a promising tool to be used for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of those algorithms are scarce. Pretreatment IMRT QA using the quasi 3D dosimetry systems mentioned above relies on both measurement uncertainty and the accuracy of calculation algorithms. In this article, these quasi 3D dosimetry systems and their use in patient specific pretreatment IMRT

  20. Profile of Gaze Dysfunction following Cerebrovascular Accident.

    PubMed

    Rowe, Fiona J; Wright, David; Brand, Darren; Jackson, Carole; Harrison, Shirley; Maan, Tallat; Scott, Claire; Vogwell, Linda; Peel, Sarah; Akerman, Nicola; Dodridge, Caroline; Howard, Claire; Shipman, Tracey; Sperring, Una; Macdiarmid, Sonia; Freeman, Cicely

    2013-01-01

    Aim. To evaluate the profile of ocular gaze abnormalities occurring following stroke. Methods. Prospective multicentre cohort trial. Standardised referral and investigation protocol including assessment of visual acuity, ocular alignment and motility, visual field, and visual perception. Results. 915 patients recruited: mean age 69.18 years (SD 14.19). 498 patients (54%) were diagnosed with ocular motility abnormalities. 207 patients had gaze abnormalities including impaired gaze holding (46), complete gaze palsy (23), horizontal gaze palsy (16), vertical gaze palsy (17), Parinaud's syndrome (8), INO (20), one-and-a-half syndrome (3), saccadic palsy (28), and smooth pursuit palsy (46). These were isolated impairments in 50% of cases and in association with other ocular abnormalities in 50%, including impaired convergence, nystagmus, and lid or pupil abnormalities. Areas of brain stroke were frequently the cerebellum, brainstem, and diencephalic areas. Strokes causing gaze dysfunction also involved cortical areas including occipital, parietal, and temporal lobes. Symptoms of diplopia and blurred vision were present in 35%. 37 patients were discharged, 29 referred, and 141 offered review appointments. 107 reviewed patients showed full recovery (4%), partial improvement (66%), and static gaze dysfunction (30%). Conclusions. Gaze dysfunction is common following stroke. Approximately one-third of patients complain of visual symptoms, and two-thirds show some improvement in ocular motility.

  1. How active gaze informs the hand in sequential pointing movements.

    PubMed

    Wilmut, Kate; Wann, John P; Brown, Janice H

    2006-11-01

    Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixed. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop), shifting gaze generates ocular-proprioception which can be used to update a movement (feedback-feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback from eye to head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial movements and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.

  2. 3D Scan Systems Integration

    DTIC Science & Technology

    2007-11-02

    Final report (5 Feb 98) for the US Defense Logistics Agency on DDFG-T2/P3: 3D Scan Systems Integration. Contract number SPO100-95-D-1014; contractor: Ohio University; Delivery Order #0001.

  3. Gaze contingent hologram synthesis for holographic head-mounted display

    NASA Astrophysics Data System (ADS)

    Hong, Jisoo; Kim, Youngmin; Hong, Sunghee; Shin, Choonsung; Kang, Hoonjong

    2016-03-01

    Development of display and its related technologies provides an immersive visual experience with a head-mounted display (HMD). However, most available HMDs provide 3D perception only by stereopsis and lack accommodation depth cues. Recently, the holographic HMD (HHMD) has arisen as one viable option to resolve this problem because a hologram is known to provide the full set of depth cues, including accommodation. Moreover, by virtue of increasing computational power, hologram synthesis from a 3D object represented by a point cloud can be calculated in real time even with the rigorous Rayleigh-Sommerfeld diffraction formula. However, in an HMD, rapid gaze changes of the user require a much faster refresh rate, which means that much faster hologram synthesis is indispensable in an HHMD. Because visual acuity falls off in the visual periphery, we propose here to accelerate hologram synthesis by differentiating the density of the point cloud projected on the screen. We classify the screen into multiple layers which are concentric circles with different radii, where the center is aligned with the gaze of the user. A layer with a smaller radius is closer to the region of interest and is hence assigned a higher point-cloud density. Because the computation time is directly related to the number of points in the point cloud, we can accelerate hologram synthesis by lowering the point-cloud density in the visual periphery. A cognitive study reveals that users cannot discriminate this degradation in the visual periphery if the parameters are properly designed. A prototype HHMD system is provided to verify the feasibility of our method, and a detailed design scheme is discussed.
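
    The acceleration idea above (thin the point cloud with eccentricity from the gaze point, since hologram cost scales with point count) can be sketched compactly. The layer radii and keep ratios below are arbitrary assumptions, not parameters derived from the paper's cognitive study.

        # Hedged sketch of gaze-contingent point-cloud thinning: points whose
        # screen projection is far from the gaze point are kept with lower
        # probability, reducing the number of points the hologram synthesis
        # has to sum over. Layer radii and keep ratios are arbitrary choices.
        import numpy as np

        def thin_by_eccentricity(points_xy, gaze_xy, radii=(0.1, 0.25, 0.5),
                                 keep=(1.0, 0.5, 0.2, 0.05), rng=None):
            rng = rng or np.random.default_rng()
            ecc = np.linalg.norm(points_xy - gaze_xy, axis=1)
            layer = np.searchsorted(radii, ecc)          # 0 = fovea, 3 = far periphery
            mask = rng.random(len(points_xy)) < np.asarray(keep)[layer]
            return mask

        pts = np.random.rand(100_000, 2)                 # projected point cloud (screen coords)
        mask = thin_by_eccentricity(pts, gaze_xy=np.array([0.5, 0.5]))
        print(mask.sum(), "of", len(pts), "points kept for hologram synthesis")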

  4. 3D polymer scaffold arrays.

    PubMed

    Simon, Carl G; Yang, Yanyin; Dorsey, Shauna M; Ramalingam, Murugan; Chatterjee, Kaushik

    2011-01-01

    We have developed a combinatorial platform for fabricating tissue scaffold arrays that can be used for screening cell-material interactions. Traditional research involves preparing samples one at a time for characterization and testing. Combinatorial and high-throughput (CHT) methods lower the cost of research by reducing the amount of time and material required for experiments by combining many samples into miniaturized specimens. In order to help accelerate biomaterials research, many new CHT methods have been developed for screening cell-material interactions where materials are presented to cells as a 2D film or surface. However, biomaterials are frequently used to fabricate 3D scaffolds, cells exist in vivo in a 3D environment and cells cultured in a 3D environment in vitro typically behave more physiologically than those cultured on a 2D surface. Thus, we have developed a platform for fabricating tissue scaffold libraries where biomaterials can be presented to cells in a 3D format.

  5. Purposeful gazing in active vision through phase-based disparity and dynamic vergence control

    NASA Astrophysics Data System (ADS)

    Wu, Liwei; Marefat, Michael M.

    1994-10-01

    In this research we propose solutions to the problems involved in gaze stabilization of a binocular active vision system, i.e., vergence error extraction and vergence servo control. Gazing is realized by decreasing the disparity which represents the vergence error. A Fourier-transform-based approach that robustly and efficiently estimates vergence disparity is developed for holding gaze on a selected visual target. It is shown that this method has certain advantages over existing approaches. Our work also points out that a vision-sensor-based vergence control system is a dual-sampling-rate system. Feedback information prediction and a dynamic vision-based self-tuning control strategy are investigated to implement vergence control. Experiments on gaze stabilization using the techniques developed in this paper are performed.
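
    The vergence error extraction described above is Fourier-transform based. A standard way to realise such an estimator is phase correlation between the left and right foveal patches; the sketch below shows that generic technique and is not necessarily the authors' exact formulation.

        # Hedged sketch of a Fourier/phase-based disparity estimate between the
        # left and right foveal patches: the peak of the phase-correlation
        # surface gives the horizontal shift, i.e. the vergence error signal.
        # This is the generic phase-correlation method, not necessarily the
        # exact estimator used in the paper.
        import numpy as np

        def phase_correlation_shift(left, right):
            F1 = np.fft.fft2(left)
            F2 = np.fft.fft2(right)
            cross = F2 * np.conj(F1)
            cross /= np.abs(cross) + 1e-12          # keep phase only
            corr = np.real(np.fft.ifft2(cross))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # map wrapped indices to signed shifts
            if dx > left.shape[1] // 2: dx -= left.shape[1]
            if dy > left.shape[0] // 2: dy -= left.shape[0]
            return dx, dy                            # dx drives the vergence controller

        left = np.random.rand(64, 64)
        right = np.roll(left, 5, axis=1)             # synthetic 5-pixel disparity
        print(phase_correlation_shift(left, right))  # ~ (5, 0)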

  6. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  7. Reduced gaze aftereffects are related to difficulties categorising gaze direction in children with autism

    PubMed Central

    Pellicano, Elizabeth; Rhodes, Gillian; Calder, Andrew J.

    2013-01-01

    Perceptual mechanisms are generally flexible or “adaptive”, as evidenced by perceptual aftereffects: distortions that arise following exposure to a stimulus. We examined whether adaptive mechanisms for coding gaze direction are atypical in children diagnosed with an autism spectrum condition. Twenty-four typical children and 24 children with autism, of similar age and ability, were administered a developmentally sensitive eye-gaze adaptation task. In the pre-adaptation phase, children judged whether target faces showing subtle deviations in eye-gaze direction were looking leftwards, rightwards or straight-ahead. Next, children were adapted to faces gazing in one consistent direction (25° leftwards/rightwards) before categorising the direction of the target faces again. Children with autism showed difficulties in judging whether subtle deviations in gaze were directed to the left, right or straight-ahead relative to typical children. Although adaptation to leftward or rightward gaze resulted in reduced sensitivity to gaze on the adapted side for both groups, the aftereffect was significantly reduced in children with autism. Furthermore, the magnitude of children's gaze aftereffects was positively related to their ability to categorise gaze direction. These results show that the mechanisms coding gaze are less flexible in autism and offer a potential new explanation for these children's difficulties discriminating subtle deviations in gaze direction. PMID:23583965

  8. Eye gaze as relational evaluation: averted eye gaze leads to feelings of ostracism and relational devaluation.

    PubMed

    Wirth, James H; Sacco, Donald F; Hugenberg, Kurt; Williams, Kipling D

    2010-07-01

    Eye gaze is often a signal of interest and, when noticed by others, leads to mutual and directional gaze. However, averting one's eye gaze toward an individual has the potential to convey a strong interpersonal evaluation. The averting of eye gaze is the most frequently used nonverbal cue to indicate the silent treatment, a form of ostracism. The authors argue that eye gaze can signal the relational value felt toward another person. In three studies, participants visualized interacting with an individual displaying averted or direct eye gaze. Compared to receiving direct eye contact, participants receiving averted eye gaze felt ostracized, signaled by thwarted basic need satisfaction, reduced explicit and implicit self-esteem, lowered relational value, and increased temptations to act aggressively toward the interaction partner.

  9. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    PubMed Central

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  10. Follow my eyes: the gaze of politicians reflexively captures the gaze of ingroup voters.

    PubMed

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention.

  11. Development of Gaze Aversion in Children.

    ERIC Educational Resources Information Center

    Scheman, Judith D.; Lockard, Joan S.

    1979-01-01

    An observer stared continually at each of 573 children who passed along a definable pathway in a large shopping center. Most infants did not make eye contact with the observer, the majority of toddlers established eye contact but did not gaze avert, and the preponderance of school-age children gaze averted. (Author/JMB)

  12. Culture and Listeners' Gaze Responses to Stuttering

    ERIC Educational Resources Information Center

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  13. Assessing Self-Awareness through Gaze Agency

    PubMed Central

    Crespi, Sofia Allegra; de’Sperati, Claudio

    2016-01-01

    We define gaze agency as the awareness of the causal effect of one’s own eye movements in gaze-contingent environments, which might soon become a widespread reality with the diffusion of gaze-operated devices. Here we propose a method for measuring gaze agency based on self-monitoring propensity and sensitivity. In one task, naïf observers watched bouncing balls on a computer monitor with the goal of discovering the cause of concurrently presented beeps, which were generated in real-time by their saccades or by other events (Discovery Task). We manipulated observers’ self-awareness by pre-exposing them to a condition in which beeps depended on gaze direction or by focusing their attention to their own eyes. These manipulations increased propensity to agency discovery. In a second task, which served to monitor agency sensitivity at the sensori-motor level, observers were explicitly asked to detect gaze agency (Detection Task). Both tasks turned out to be well suited to measure both increases and decreases of gaze agency. We did not find evident oculomotor correlates of agency discovery or detection. A strength of our approach is that it probes self-monitoring propensity–difficult to evaluate with traditional tasks based on bodily agency. In addition to putting a lens on this novel cognitive function, measuring gaze agency could reveal subtle self-awareness deficits in pathological conditions and during development. PMID:27812138

  14. Gaze leading is associated with liking.

    PubMed

    Grynszpan, Ouriel; Martin, Jean-Claude; Fossati, Philippe

    2017-02-01

    Gaze plays a pivotal role in human communication, especially for coordinating attention. The ability to guide the gaze orientation of others forms the backbone of joint attention. Recent research has raised the possibility that gaze following behaviors could induce liking. The present study seeks to investigate this hypothesis. We designed two physically different human avatars that could follow the gaze of users via eye-tracking technology. In a preliminary experiment, 20 participants assessed the baseline appeal of the two avatars and confirmed that the avatars differed in this respect. In the main experiment, we compared how 19 participants rated the two avatars in terms of pleasantness, trustworthiness and closeness when the avatars were following their gaze versus when the avatar generated gaze movements autonomously. Although the same avatar as in the preliminary experiment was rated more favorably, the pleasantness attributed to the two avatars increased when they followed the gaze of the participants. This outcome provides evidence that gaze following fosters liking independently of the baseline appeal of the individual.

  15. The Development of Mentalistic Gaze Understanding

    ERIC Educational Resources Information Center

    Doherty, Martin J.

    2006-01-01

    Very young infants are sensitive to and follow other people's gaze. By 18 months children, like chimpanzees, apparently represent the spatial relationship between viewer and object viewed: they can follow eye-direction alone, and react appropriately if the other's gaze is blocked by occluding barriers. This paper assesses when children represent…

  16. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields going from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing, which allows a solid object to be obtained from a 3D model, realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it simple to realize very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because 3D printing builds the object by superposing one layer on another, it does not need any particular workflow and it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases for the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  17. A monolithic 3D-0D coupled closed-loop model of the heart and the vascular system: Experiment-based parameter estimation for patient-specific cardiac mechanics.

    PubMed

    Hirschvogel, Marc; Bassilious, Marina; Jagschies, Lasse; Wildhirt, Stephen M; Gee, Michael W

    2016-10-15

    A model for patient-specific cardiac mechanics simulation is introduced, incorporating a 3-dimensional finite element model of the ventricular part of the heart, which is coupled to a reduced-order 0-dimensional closed-loop vascular system, heart valve, and atrial chamber model. The ventricles are modeled by a nonlinear orthotropic passive material law. The electrical activation is mimicked by a prescribed parameterized active stress acting along a generic muscle fiber orientation. Our activation function is constructed such that the start of ventricular contraction and relaxation as well as the active stress curve's slope are parameterized. The imaging-based patient-specific ventricular model is prestressed to low end-diastolic pressure to account for the imaged, stressed configuration. Visco-elastic Robin boundary conditions are applied to the heart base and the epicardium to account for the embedding surrounding. We treat the 3D solid-0D fluid interaction as a strongly coupled monolithic problem, which is consistently linearized with respect to 3D solid and 0D fluid model variables to allow for a Newton-type solution procedure. The resulting coupled linear system of equations is solved iteratively in every Newton step using 2  ×  2 physics-based block preconditioning. Furthermore, we present novel efficient strategies for calibrating active contractile and vascular resistance parameters to experimental left ventricular pressure and stroke volume data gained in porcine experiments. Two exemplary states of cardiovascular condition are considered, namely, after application of vasodilatory beta blockers (BETA) and after injection of vasoconstrictive phenylephrine (PHEN). The parameter calibration to the specific individual and cardiovascular state at hand is performed using a 2-stage nonlinear multilevel method that uses a low-fidelity heart model to compute a parameter correction for the high-fidelity model optimization problem. We discuss 2 different low
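
    The solver strategy described above (consistent linearisation of the coupled 3D-0D system, Newton iterations, 2 x 2 physics-based block preconditioning) can be illustrated with a toy dense analogue. The sketch below performs one Newton update by Schur-complement block elimination on a generic two-field Jacobian; it is a stand-in for the idea, not the paper's finite element/0D solver.

        # Toy sketch of one Newton step for a monolithically coupled two-field
        # system, solved by block (Schur-complement) elimination -- a dense
        # stand-in for the 2x2 physics-based block treatment described above,
        # not the actual FE/0D implementation.
        import numpy as np

        def coupled_newton_step(x_s, x_f, residual, jacobian_blocks):
            """x_s: 3D solid dofs, x_f: 0D fluid dofs (toy dense version)."""
            r_s, r_f = residual(x_s, x_f)
            A, B, C, D = jacobian_blocks(x_s, x_f)   # full Jacobian is [[A, B], [C, D]]
            # Schur complement on the (small) 0D block: S = D - C A^{-1} B
            Ainv_B = np.linalg.solve(A, B)
            Ainv_rs = np.linalg.solve(A, r_s)
            S = D - C @ Ainv_B
            dx_f = np.linalg.solve(S, r_f - C @ Ainv_rs)
            dx_s = Ainv_rs - Ainv_B @ dx_f
            return x_s - dx_s, x_f - dx_f            # Newton update for both fields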

  18. GazeAppraise v. 0.1

    SciTech Connect

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel; Matzen, Laura; McNamara, Laura

    2016-08-01

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
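
    GazeAppraise, as described above, builds a multidimensional feature vector per scanpath and then clusters those vectors. The hedged sketch below follows that pipeline with three generic shape features (total path length, mean saccade amplitude, bounding-box area) and k-means; the real GazeAppraise descriptors and clustering choices may differ.

        # Hedged sketch of the "features per scanpath, then cluster" pipeline.
        # The three features below are generic illustrations, not the actual
        # GazeAppraise descriptors.
        import numpy as np
        from scipy.cluster.vq import kmeans2

        def scanpath_features(path):
            """path: (n_fixations, 2) array of gaze points."""
            steps = np.diff(path, axis=0)
            amps = np.linalg.norm(steps, axis=1)
            bbox = path.max(axis=0) - path.min(axis=0)
            return np.array([amps.sum(), amps.mean(), bbox[0] * bbox[1]])

        rng = np.random.default_rng(1)
        scanpaths = [rng.random((rng.integers(5, 30), 2)) for _ in range(50)]
        X = np.array([scanpath_features(p) for p in scanpaths])
        centroids, labels = kmeans2(X, 3, minit='points')   # cluster the scanpath vectors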

  19. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  20. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The other product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  1. Macrophage podosomes go 3D.

    PubMed

    Van Goethem, Emeline; Guiet, Romain; Balor, Stéphanie; Charrière, Guillaume M; Poincloux, Renaud; Labrousse, Arnaud; Maridonneau-Parini, Isabelle; Le Cabec, Véronique

    2011-01-01

    Macrophage tissue infiltration is a critical step in the immune response against microorganisms and is also associated with disease progression in chronic inflammation and cancer. Macrophages are constitutively equipped with specialized structures called podosomes dedicated to extracellular matrix (ECM) degradation. We recently reported that these structures play a critical role in the trans-matrix mesenchymal migration mode, a protease-dependent mechanism. Podosome molecular components and their ECM-degrading activity have been extensively studied in two dimensions (2D), yet very little is known about their fate in three-dimensional (3D) environments. Therefore, localization of podosome markers and proteolytic activity were carefully examined in human macrophages performing mesenchymal migration. Using our gelled collagen I 3D matrix model to obligate human macrophages to perform mesenchymal migration, classical podosome markers including talin, paxillin, vinculin, gelsolin, cortactin were found to accumulate at the tip of F-actin-rich cell protrusions together with β1 integrin and CD44 but not β2 integrin. Macrophage proteolytic activity was observed at podosome-like protrusion sites using confocal fluorescence microscopy and electron microscopy. The formation of migration tunnels by macrophages inside the matrix was accomplished by degradation, engulfment and mechanical compaction of the matrix. In addition, videomicroscopy revealed that 3D F-actin-rich protrusions of migrating macrophages were as dynamic as their 2D counterparts. Overall, the specifications of 3D podosomes resembled those of 2D podosome rosettes rather than those of individual podosomes. This observation was further supported by the aspect of 3D podosomes in fibroblasts expressing Hck, a master regulator of podosome rosettes in macrophages. In conclusion, human macrophage podosomes go 3D and take the shape of spherical podosome rosettes when the cells perform mesenchymal migration. This work

  2. 3D Printed Bionic Nanodevices.

    PubMed

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  3. Petal, terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The metallic object at lower right is part of the lander's low-gain antenna. This image is part of a 3D 'monster

  4. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  5. Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection

    PubMed Central

    Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole

    2016-01-01

    Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input, in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
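
    The fusion principle above (combine probabilistic models of each input to estimate user intent) can be sketched as a normalised product of per-target likelihoods. The Gaussian gaze model, its width, and the equal weighting of the two inputs below are my assumptions, not the paper's calibrated models.

        # Hedged sketch of probabilistic input fusion for target selection:
        # a gaze likelihood (Gaussian in distance from the gaze point) and a
        # BCI likelihood (classifier scores) are combined by a normalised
        # product. The Gaussian width and equal weighting are assumptions.
        import numpy as np

        def fuse_gaze_bci(target_xy, gaze_xy, bci_scores, sigma=0.05):
            d = np.linalg.norm(target_xy - gaze_xy, axis=1)
            p_gaze = np.exp(-0.5 * (d / sigma) ** 2)
            p_gaze /= p_gaze.sum()
            p_bci = np.asarray(bci_scores) / np.sum(bci_scores)
            fused = p_gaze * p_bci
            return fused / fused.sum()

        targets = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.3]])
        p = fuse_gaze_bci(targets, gaze_xy=np.array([0.48, 0.52]),
                          bci_scores=[0.2, 0.5, 0.3])
        selected = int(np.argmax(p))     # index of the most probable target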

  6. The World of 3-D.

    ERIC Educational Resources Information Center

    Mayshark, Robin K.

    1991-01-01

    Students explore three-dimensional properties by creating red and green wall decorations related to Christmas. Students examine why images seem to vibrate when red and green pieces are small and close together. Instructions to conduct the activity and construct 3-D glasses are given. (MDH)

  7. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  8. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
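
    Of the pipeline described above, the PCA feature-projection and similarity-matrix stages are easy to sketch; the hedged example below does only those two steps with random stand-in data (ICP alignment, deformation and FLDA are omitted) and is not the distributed MATLAB/C++ code.

        # Hedged sketch of the PCA feature-projection and similarity-matrix
        # stages only. Inputs are random stand-ins for flattened, normalised
        # 3D face coordinates.
        import numpy as np

        def pca_fit(X, n_components=20):
            mean = X.mean(axis=0)
            U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, Vt[:n_components]

        def pca_project(X, mean, components):
            return (X - mean) @ components.T

        gallery = np.random.rand(100, 3000)      # 100 enrolled faces (placeholder)
        probes = np.random.rand(10, 3000)        # 10 probe faces (placeholder)
        mean, comps = pca_fit(gallery)
        G = pca_project(gallery, mean, comps)
        P = pca_project(probes, mean, comps)
        # Cosine similarity matrix of the kind used for verification analysis.
        Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
        Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
        similarity = Pn @ Gn.T                   # shape (n_probes, n_gallery)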

  9. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  10. 3D fascicle orientations in triceps surae.

    PubMed

    Rana, Manku; Hamarneh, Ghassan; Wakeling, James M

    2013-07-01

    The aim of this study was to determine the three-dimensional (3D) muscle fascicle architecture in human triceps surae muscles at different contraction levels and muscle lengths. Six male subjects were tested for three contraction levels (0, 30, and 60% of maximal voluntary contraction) and four ankle angles (-15, 0, 15, and 30° of plantar flexion), and the muscles were imaged with B-mode ultrasound coupled to 3D position sensors. 3D fascicle orientations were represented in terms of pennation angle relative to the major axis of the muscle and azimuthal angle (a new architectural parameter introduced in this study representing the radial angle around the major axis). 3D orientations of the fascicles, and the sheets along which they lie, were regionalized in all the three muscles (medial and lateral gastrocnemius and the soleus) and changed significantly with contraction level and ankle angle. Changes in the azimuthal angle were of similar magnitude to the changes in pennation angle. The 3D information was used for an error analysis to determine the errors in predictions of pennation that would occur in purely two-dimensional studies. A comparison was made for assessing pennation in the same plane for different contraction levels, or for adjusting the scanning plane orientation for different contractions: there was no significant difference between the two simulated scanning conditions for the gastrocnemii; however, a significant difference of 4.5° was obtained for the soleus. Correct probe orientation is thus more critical during estimations of pennation for the soleus than the gastrocnemii due to its more complex fascicle arrangement.
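
    The two architectural parameters above (pennation relative to the muscle's major axis, and the new azimuthal angle around that axis) reduce to simple vector geometry. The sketch below computes both for a single fascicle direction; the choice of reference vector defining azimuth = 0 is an assumption, not necessarily the convention used in the study.

        # Hedged sketch: pennation angle of a fascicle relative to the muscle's
        # major axis, and azimuthal angle of its projection around that axis.
        # The reference vector defining azimuth = 0 is an assumption.
        import numpy as np

        def fascicle_angles(fascicle_vec, major_axis, azimuth_ref):
            f = fascicle_vec / np.linalg.norm(fascicle_vec)
            a = major_axis / np.linalg.norm(major_axis)
            pennation = np.degrees(np.arccos(np.clip(abs(f @ a), -1.0, 1.0)))
            # project the fascicle into the plane perpendicular to the major axis
            f_perp = f - (f @ a) * a
            r = azimuth_ref - (azimuth_ref @ a) * a       # reference in the same plane
            r /= np.linalg.norm(r)
            f_perp_n = f_perp / np.linalg.norm(f_perp)
            azimuth = np.degrees(np.arctan2(np.cross(r, f_perp_n) @ a, r @ f_perp_n))
            return pennation, azimuth

        print(fascicle_angles(np.array([0.2, 0.1, 1.0]),   # fascicle direction
                              np.array([0.0, 0.0, 1.0]),   # muscle major axis
                              np.array([1.0, 0.0, 0.0])))  # azimuth reference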

  11. A new neural net approach to robot 3D perception and visuo-motor coordination

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  12. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  13. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth directional navigation. To solve this problem, we propose a new brain-computer interface (BCI) method in which the BCI and eye tracking are combined: the BCI handles depth navigation and selection, while eye tracking provides the two-dimensional (2D) gaze direction. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between the real and the imaginary movements for the same work in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm reaching movement. Fourth, a selection method is implemented by an imaginary hand grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. According to experimental results, we confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI.

  14. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  15. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
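
    For readers unfamiliar with rigid cloud-to-cloud registration, a minimal point-to-point ICP loop of the kind this alignment builds on might look like the sketch below (a simplification; the paper's full pipeline also involves motion stereo and camera pose estimation):

      # Hedged sketch of point-to-point ICP: repeatedly match each source point
      # to its nearest target point and solve for the best rigid transform (SVD).
      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, iters=30):
          """source: (N, 3) cloud to move; target: (M, 3) fixed cloud. Returns R, t."""
          R, t = np.eye(3), np.zeros(3)
          tree = cKDTree(target)
          src = source.copy()
          for _ in range(iters):
              _, idx = tree.query(src)            # nearest target point per source point
              matched = target[idx]
              cs, cm = src.mean(0), matched.mean(0)
              H = (src - cs).T @ (matched - cm)   # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              d = np.sign(np.linalg.det(Vt.T @ U.T))
              Ri = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
              ti = cm - Ri @ cs
              src = src @ Ri.T + ti               # apply the incremental transform
              R, t = Ri @ R, Ri @ t + ti          # accumulate the total transform
          return R, t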

  16. Constructing the Medical Humanities gaze.

    PubMed

    Annoni, Marco; Schiavone, Giuseppe; Chiapperino, Luca; Boniolo, Giovanni

    2012-12-31

    In the last few decades genomics has completely reshaped the way in which patients and physicians experience and make sense of illness. In this paper we build upon a real case - namely that of breast cancer genetic testing - in order to point to the shortcomings of the paradigm currently driving healthcare delivery. In particular, we put forward a viable analytical model for the construction of a proper decisional process broadening the scope of the medical gaze onto the human experience of illness. This model revolves around four main conceptual axes: (i) communicating information; (ii) informing decisions; (iii) respecting narratives; (iv) empowering decision-making. These four kernels, we argue, map precisely onto the main pitfalls of the model presently dealing with genetic testing provision. Medical Humanities, we conclude, ought to play a pivotal role in constructing the environment for competent decision-making, autonomous self-determination and respectful narrativization of one's own life.

  17. Gaze cueing by pareidolia faces

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process. PMID:25165505

  18. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  19. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  20. Comparing swimsuits in 3D.

    PubMed

    van Geer, Erik; Molenbroek, Johan; Schreven, Sander; deVoogd-Claessen, Lenneke; Toussaint, Huib

    2012-01-01

    In competitive swimming, suits have become more important. These suits influence friction, pressure and wave drag. Friction drag is related to the surface properties whereas both pressure and wave drag are greatly influenced by body shape. To find a relationship between the body shape and the drag, the anthropometry of several world class female swimmers wearing different suits was accurately defined using a 3D scanner and traditional measuring methods. The 3D scans delivered more detailed information about the body shape. On the same day the swimmers did performance tests in the water with the tested suits. Afterwards, the results of the performance tests and the differences found in body shape were analyzed to determine the deformation caused by a swimsuit and its effect on the swimming performance. Although the amount of data is limited because of the few test subjects, there is an indication that the deformation of the body influences the swimming performance.

  1. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  2. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of their layers and the order of their alternation. A possible way to experimentally synthesize new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  3. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to make it more user-friendly. Real time 3D echocardiography could then become the essential tool for understanding, diagnosis and management of patients.

  4. GPU-Accelerated Denoising in 3D (GD3D)

    SciTech Connect

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
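
    The second facet, the parameter sweep against a noiseless reference, can be sketched as below; the `denoise` callable and the grid layout are placeholders, not the actual GD3D interface:

      # Hedged sketch: exhaustively sweep denoising parameters and keep the
      # combination with the lowest mean squared error against a clean reference.
      import itertools
      import numpy as np

      def sweep(noisy, reference, denoise, param_grid):
          """param_grid: dict mapping parameter name -> list of candidate values."""
          best_mse, best_params = np.inf, None
          names = sorted(param_grid)
          for values in itertools.product(*(param_grid[n] for n in names)):
              params = dict(zip(names, values))
              mse = float(np.mean((denoise(noisy, **params) - reference) ** 2))
              if mse < best_mse:
                  best_mse, best_params = mse, params
          return best_mse, best_params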

  5. Social gaze cueing to auditory locations.

    PubMed

    Newport, R; Howarth, S

    2009-04-01

    Spatial attention is oriented by social visual cues: targets appearing at cued (gazed-at) locations are detected more rapidly than those appearing at uncued locations. The current studies provide evidence that social gaze directs attention to auditory as well as visual targets at cued locations. For auditory target detection the effect lasted from 300 to 1,005 ms while for discrimination the effect was restricted to 600 ms. Improved performance at 600-ms stimulus onset asynchrony was observed across all experiments and may reflect an optimal processing window for social stimuli. In addition, the orienting of attention by gaze was impaired by the presentation of negative faces. These experiments further demonstrate the unique and cross-modal nature of social gaze cueing.

  6. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  7. Coordinating spatial referencing using shared gaze.

    PubMed

    Neider, Mark B; Chen, Xin; Dickinson, Christopher A; Brennan, Susan E; Zelinsky, Gregory J

    2010-10-01

    To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A's eye position was superimposed over Partner B's search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. Supplemental materials for this article may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.

  8. Inferential modeling of 3D chromatin structure.

    PubMed

    Wang, Siyu; Xu, Jinbo; Zeng, Jianyang

    2015-04-30

    For eukaryotic cells, the biological processes involving regulatory DNA elements play an important role in cell cycle. Understanding 3D spatial arrangements of chromosomes and revealing long-range chromatin interactions are critical to decipher these biological processes. In recent years, chromosome conformation capture (3C) related techniques have been developed to measure the interaction frequencies between long-range genome loci, which have provided a great opportunity to decode the 3D organization of the genome. In this paper, we develop a new Bayesian framework to derive the 3D architecture of a chromosome from 3C-based data. By modeling each chromosome as a polymer chain, we define the conformational energy based on our current knowledge on polymer physics and use it as prior information in the Bayesian framework. We also propose an expectation-maximization (EM) based algorithm to estimate the unknown parameters of the Bayesian model and infer an ensemble of chromatin structures based on interaction frequency data. We have validated our Bayesian inference approach through cross-validation and verified the computed chromatin conformations using the geometric constraints derived from fluorescence in situ hybridization (FISH) experiments. We have further confirmed the inferred chromatin structures using the known genetic interactions derived from other studies in the literature. Our test results have indicated that our Bayesian framework can compute an accurate ensemble of 3D chromatin conformations that best interpret the distance constraints derived from 3C-based data and also agree with other sources of geometric constraints derived from experimental evidence in the previous studies. The source code of our approach can be found in https://github.com/wangsy11/InfMod3DGen.
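
    The Bayesian/EM machinery is too involved for a short excerpt, but the underlying idea of fitting 3D coordinates to frequency-derived distances can be illustrated with the simplified stand-in below (an assumed power-law conversion and plain gradient descent, not the paper's model):

      # Hedged sketch: convert interaction frequencies to target distances via
      # d_ij ~ F_ij^(-1/alpha), then fit bead coordinates by gradient descent on
      # the squared distance error. This is an MDS-style toy, not the Bayesian model.
      import numpy as np

      def infer_structure(freq, alpha=1.0, lr=0.01, iters=2000, seed=0):
          """freq: (n, n) symmetric matrix of interaction frequencies (>0 where observed)."""
          n = freq.shape[0]
          mask = (freq > 0) & ~np.eye(n, dtype=bool)
          target = np.where(mask, freq.clip(min=1e-9) ** (-1.0 / alpha), 0.0)
          x = np.random.default_rng(seed).normal(size=(n, 3))
          for _ in range(iters):
              diff = x[:, None, :] - x[None, :, :]
              dist = np.linalg.norm(diff, axis=-1) + np.eye(n)   # avoid divide-by-zero
              err = np.where(mask, dist - target, 0.0)
              grad = (err / dist)[:, :, None] * diff             # gradient of 0.5 * err**2
              x -= lr * grad.sum(axis=1)
          return x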

  9. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  10. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    PubMed

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
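
    As a hedged, linear stand-in for the sparsity-regularized network described above (not the iShadow implementation; names and the regressor choice are mine), the pixel-selection idea can be sketched as follows:

      # Minimal sketch: an L21-penalized linear regressor drives most per-pixel
      # weights to zero, so gaze can later be predicted from only the surviving
      # subset of pixels read out by the imager.
      import numpy as np
      from sklearn.linear_model import MultiTaskLasso

      def train_sparse_gaze(eye_images, gaze_xy, alpha=0.05):
          """eye_images: (n_frames, n_pixels) flattened eye images;
          gaze_xy: (n_frames, 2) calibrated gaze angles."""
          model = MultiTaskLasso(alpha=alpha, max_iter=5000).fit(eye_images, gaze_xy)
          active_pixels = np.flatnonzero(np.any(model.coef_ != 0.0, axis=0))
          return model, active_pixels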

  11. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    PubMed Central

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565

  12. 3D Nanostructuring of Semiconductors

    NASA Astrophysics Data System (ADS)

    Blick, Robert

    2000-03-01

    Modern semiconductor technology allows to machine devices on the nanometer scale. I will discuss the current limits of the fabrication processes, which enable the definition of single electron transistors with dimensions down to 8 nm. In addition to the conventional 2D patterning and structuring of semiconductors, I will demonstrate how to apply 3D nanostructuring techniques to build freely suspended single-crystal beams with lateral dimension down to 20 nm. In transport measurements in the temperature range from 30 mK up to 100 K these nano-crystals are characterized regarding their electronic as well as their mechanical properties. Moreover, I will present possible applications of these devices.

  13. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  14. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  15. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  16. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  17. Gaze-Contingent Motor Channelling, haptic constraints and associated cognitive demand for robotic MIS.

    PubMed

    Mylonas, George P; Kwok, Ka-Wai; James, David R C; Leff, Daniel; Orihuela-Espina, Felipe; Darzi, Ara; Yang, Guang-Zhong

    2012-04-01

    The success of MIS is coupled with an increasing demand on surgeons' manual dexterity and visuomotor coordination due to the complexity of instrument manipulations. The use of master-slave surgical robots has avoided many of the drawbacks of MIS, but at the same time, has increased the physical separation between the surgeon and the patient. Tissue deformation combined with restricted workspace and visibility of an already cluttered environment can raise critical issues related to surgical precision and safety. Reconnecting the essential visuomotor sensory feedback is important for the safe practice of robot-assisted MIS procedures. This paper introduces a novel gaze-contingent framework for real-time haptic feedback and virtual fixtures by transforming visual sensory information into physical constraints that can interact with the motor sensory channel. We demonstrate how motor tracking of deforming tissue can be made more effective and accurate through the concept of Gaze-Contingent Motor Channelling. The method is also extended to 3D by introducing the concept of Gaze-Contingent Haptic Constraints where eye gaze is used to dynamically prescribe and update safety boundaries during robot-assisted MIS without prior knowledge of the soft-tissue morphology. Initial validation results on both simulated and robot assisted phantom procedures demonstrate the potential clinical value of the technique. In order to assess the associated cognitive demand of the proposed concepts, functional Near-Infrared Spectroscopy is used and preliminary results are discussed.
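
    As a toy illustration only (the gains, spherical boundary and names below are invented, not the authors' controller), a gaze-contingent constraint can be thought of as a force law of roughly this form:

      # Hedged sketch: the instantaneous 3D fixation point attracts the instrument
      # tip (motor channelling), and a gaze-centred safety radius resists motion
      # beyond it (haptic constraint). Units and gains are illustrative.
      import numpy as np

      def constraint_force(tip, gaze_point, k_attract=20.0, k_wall=200.0, radius=0.01):
          """tip, gaze_point: 3-vectors in metres; returns a force vector in newtons."""
          error = gaze_point - tip
          force = k_attract * error                 # pull the tip toward the gaze point
          dist = np.linalg.norm(error)
          if dist > radius:                         # extra restoring force outside the
              force += k_wall * (dist - radius) * (error / dist)  # gaze-defined region
          return force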

  18. Re-Encountering Individuals Who Previously Engaged in Joint Gaze Modulates Subsequent Gaze Cueing

    ERIC Educational Resources Information Center

    Dalmaso, Mario; Edwards, S. Gareth; Bayliss, Andrew P.

    2016-01-01

    We assessed the extent to which previous experience of joint gaze with people (i.e., looking toward the same object) modulates later gaze cueing of attention elicited by those individuals. Participants in Experiments 1 and 2a/b first completed a saccade/antisaccade task while a to-be-ignored face either looked at, or away from, the participants'…

  19. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  20. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  1. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after mature mass-processing technologies were developed. Graphene is the most recent superior material and could potentially initiate another new material age. However, even when exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite’s linear thermal expansion coefficient is below 75 ppm·°C⁻¹ from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process. PMID:26153673

  2. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  3. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at lower left in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  4. Martian terrain & airbags - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Portions of the lander's deflated airbags and a petal are at the lower area of this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This image is part of a 3D 'monster' panorama of the area surrounding the landing site.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  5. 3D structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Dougherty, William M.; Goodwin, Paul C.

    2011-03-01

    Three-dimensional structured illumination microscopy achieves double the lateral and axial resolution of wide-field microscopy, using conventional fluorescent dyes, proteins and sample preparation techniques. A three-dimensional interference-fringe pattern excites the fluorescence, filling in the "missing cone" of the wide field optical transfer function, thereby enabling axial (z) discrimination. The pattern acts as a spatial carrier frequency that mixes with the higher spatial frequency components of the image, which usually succumb to the diffraction limit. The fluorescence image encodes the high frequency content as a down-mixed, moiré-like pattern. A series of images is required, wherein the 3D pattern is shifted and rotated, providing down-mixed data for a system of linear equations. Super-resolution is obtained by solving these equations. The speed with which the image series can be obtained can be a problem for the microscopy of living cells. Challenges include pattern-switching speeds, optical efficiency, wavefront quality and fringe contrast, fringe pitch optimization, and polarization issues. We will review some recent developments in 3D-SIM hardware with the goal of super-resolved z-stacks of motile cells.
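
    In standard SIM notation (a textbook summary, not taken from this abstract), the down-mixing described above can be written compactly: a sinusoidal excitation pattern

      I(\mathbf{r}) = I_0 \left[ 1 + m \cos\!\left( 2\pi\, \mathbf{p}\cdot\mathbf{r} + \phi \right) \right]

    multiplies the sample structure S(r), so each recorded image spectrum contains shifted copies of the sample spectrum within the detection passband,

      \tilde{D}(\mathbf{k}) \;\propto\; \mathrm{OTF}(\mathbf{k}) \left[ \tilde{S}(\mathbf{k}) + \tfrac{m}{2} e^{i\phi}\, \tilde{S}(\mathbf{k}-\mathbf{p}) + \tfrac{m}{2} e^{-i\phi}\, \tilde{S}(\mathbf{k}+\mathbf{p}) \right].

    Acquiring the series at several pattern phases \phi and orientations yields the linear system mentioned above, whose solution separates and restores the shifted components.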

  6. Interactions between gaze-evoked blinks and gaze shifts in monkeys.

    PubMed

    Gandhi, Neeraj J

    2012-02-01

    Rapid eyelid closure, or a blink, often accompanies head-restrained and head-unrestrained gaze shifts. This study examines the interactions between such gaze-evoked blinks and gaze shifts in monkeys. Blink probability increases with gaze amplitude and at a faster rate for head-unrestrained movements. Across animals, blink likelihood is inversely correlated with the average gaze velocity of large-amplitude control movements. Gaze-evoked blinks induce robust perturbations in eye velocity. Peak and average velocities are reduced, duration is increased, but accuracy is preserved. The temporal features of the perturbation depend on factors such as the time of blink relative to gaze onset, inherent velocity kinematics of control movements, and perhaps initial eye-in-head position. Although variable across animals, the initial effect is a reduction in eye velocity, followed by a reacceleration that yields two or more peaks in its waveform. Interestingly, head velocity is not attenuated; instead, it peaks slightly later and with a larger magnitude. Gaze latency is slightly reduced on trials with gaze-evoked blinks, although the effect was more variable during head-unrestrained movements; no reduction in head latency is observed. Preliminary data also demonstrate a similar perturbation of gaze-evoked blinks during vertical saccades. The results are compared with previously reported effects of reflexive blinks (evoked by air-puff delivered to one eye or supraorbital nerve stimulation) and discussed in terms of effects of blinks on saccadic suppression, neural correlates of the altered eye velocity signals, and implications on the hypothesis that the attenuation in eye velocity is produced by a head movement command.

  7. Gaze orienting in dynamic visual double steps.

    PubMed

    Vliegen, Joyce; Van Grootel, Tom J; Van Opstal, A John

    2005-12-01

    Visual stimuli are initially represented in a retinotopic reference frame. To maintain spatial accuracy of gaze (i.e., eye in space) despite intervening eye and head movements, the visual input could be combined with dynamic feedback about ongoing gaze shifts. Alternatively, target coordinates could be updated in advance by using the preprogrammed gaze-motor command ("predictive remapping"). So far, previous experiments have not dissociated these possibilities. Here we study whether the visuomotor system accounts for saccadic eye-head movements that occur during target presentation. In this case, the system has to deal with fast dynamic changes of the retinal input and with highly variable changes in relative eye and head movements that cannot be preprogrammed by the gaze control system. We performed visual-visual double-step experiments in which a brief (50-ms) stimulus was presented during a saccadic eye-head gaze shift toward a previously flashed visual target. Our results show that gaze shifts remain accurate under these dynamic conditions, even for stimuli presented near saccade onset, and that eyes and head are driven in oculocentric and craniocentric coordinates, respectively. These results cannot be explained by a predictive remapping scheme. We propose that the visuomotor system adequately processes dynamic changes in visual input that result from self-initiated gaze shifts, to construct a stable representation of visual targets in an absolute, supraretinal (e.g., world) reference frame. Predictive remapping may subserve transsaccadic integration, thus enabling perception of a stable visual scene despite eye movements, whereas dynamic feedback ensures accurate actions (e.g., eye-head orienting) to a selected goal.

  8. The PRISM3D paleoenvironmental reconstruction

    USGS Publications Warehouse

    Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.

    2010-01-01

    The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we describe the components of PRISM3D and describe new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include removal of Hudson Bay and the Great Lakes and creation of open waterways in locations where the current bedrock elevation is less than 25m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system that may

  9. A 3-D SAR approach to IFSAR processing

    SciTech Connect

    DOERRY,ARMIN W.; BICKEL,DOUGLAS L.

    2000-03-01

    Interferometric SAR (IFSAR) can be shown to be a special case of 3-D SAR image formation. In fact, traditional IFSAR processing results in the equivalent of merely a super-resolved, under-sampled, 3-D SAR image. However, when approached as a 3-D SAR problem, a number of IFSAR properties and anomalies are easily explained. For example, IFSAR decorrelation with height is merely ordinary migration in 3-D SAR. Consequently, treating IFSAR as a 3-D SAR problem allows insight and development of proper motion compensation techniques and image formation operations to facilitate optimal height estimation. Furthermore, multiple antenna phase centers and baselines are easily incorporated into this formulation, providing essentially a sparse array in the elevation dimension. This paper shows the Polar Format image formation algorithm extended to 3 dimensions, and then proceeds to apply it to the IFSAR collection geometry. This suggests a more optimal reordering of the traditional IFSAR processing steps.

  10. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices, and can thereafter directly be calibrated using standard calibration algorithms of photogrammetry and computer vision, on that device. Due to still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
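
    One core building block of such a pipeline, linear triangulation of a 3D point from a correspondence in two posed images, can be sketched as follows (a generic DLT sketch, not the app's actual code):

      # Hedged sketch: DLT triangulation. P1 and P2 are 3x4 projection matrices
      # (intrinsics times pose), p1 and p2 the matched pixel coordinates.
      import numpy as np

      def triangulate(p1, p2, P1, P2):
          """Returns the 3D point that best reprojects to p1 in view 1 and p2 in view 2."""
          A = np.stack([
              p1[0] * P1[2] - P1[0],
              p1[1] * P1[2] - P1[1],
              p2[0] * P2[2] - P2[0],
              p2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]  # homogeneous -> Euclidean coordinates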

  11. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2017-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in fovea, irrespective of head orientation, however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.

  12. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-06

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D printed graphene aerogel presents superelasticity and high electrical conduction.

  13. Evidence for on-line visual guidance during saccadic gaze shifts.

    PubMed Central

    Grealy, M A; Craig, C M; Lee, D N

    1999-01-01

    Rapid orientating movements of the eyes are believed to be controlled ballistically. The mechanism underlying this control is thought to involve a comparison between the desired displacement of the eye and an estimate of its actual position (obtained from the integration of the eye velocity signal). This study shows, however, that under certain circumstances fast gaze movements may be controlled quite differently and may involve mechanisms which use visual information to guide movements prospectively. Subjects were required to make large gaze shifts in yaw towards a target whose location and motion were unknown prior to movement onset. Six of those tested demonstrated remarkable accuracy when making gaze shifts towards a target that appeared during their ongoing movement. In fact their level of accuracy was not significantly different from that shown when they performed a 'remembered' gaze shift to a known stationary target (F(3,15) = 0.15, p > 0.05). The lack of a stereotypical relationship between the skew of the gaze velocity profile and movement duration indicates that on-line modifications were being made. It is suggested that a fast route from the retina to the superior colliculus could account for this behaviour and that models of oculomotor control need to be updated. PMID:10518326

  14. Quasi 3D dispersion experiment

    NASA Astrophysics Data System (ADS)

    Bakucz, P.

    2003-04-01

    This paper studies the problem of tracer dispersion in a coloured fluid flowing through a two-phase 3D rough channel-system in a 40 cm × 40 cm plexi-container filled with homogeneous glass fractions and a colourless fluid. The unstable interface between the driving coloured fluid and the colourless fluid develops viscous fingers with a fractal structure at high capillary number. Five two-dimensional fractal fronts have been observed at the same time using four cameras along the vertical side-walls and one camera located above the plexi-container. With these five fronts, the spatial concentration contours are determined using statistical models. The concentration contours are self-affine fractal curves with a fractal dimension D=2.19. This result is valid for dispersion at high Péclet numbers.

  15. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
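
    The spectral-filter view described above amounts to weighting each pixel's spectrum with a transmission curve; a hedged sketch (array shapes and names are assumptions, not the ShowMe3D code) is:

      # Minimal sketch: collapse a hyperspectral stack into per-filter intensity
      # images, and compute simple intensity statistics over a selected region.
      import numpy as np

      def apply_filters(cube, filters):
          """cube: (rows, cols, n_wavelengths); filters: (n_filters, n_wavelengths)."""
          return np.tensordot(cube, filters, axes=([2], [1]))  # -> (rows, cols, n_filters)

      def region_stats(cube, mask):
          """Mean and variance of total intensity inside a boolean (rows, cols) mask."""
          intensity = cube.sum(axis=2)[mask]
          return intensity.mean(), intensity.var()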

  16. 3D Printed Shelby Cobra

    ScienceCinema

    Love, Lonnie

    2016-11-02

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  17. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  18. Visual Foraging With Fingers and Eye Gaze

    PubMed Central

    Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni

    2016-01-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  19. Visual Foraging With Fingers and Eye Gaze.

    PubMed

    Jóhannesson, Ómar I; Thornton, Ian M; Smith, Irene J; Chetverikov, Andrey; Kristjánsson, Árni

    2016-03-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  20. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  1. Social orienting in gaze leading: a mechanism for shared attention

    PubMed Central

    Edwards, S. Gareth; Stephenson, Lisa J.; Dalmaso, Mario; Bayliss, Andrew P.

    2015-01-01

    Here, we report a novel social orienting response that occurs after viewing averted gaze. We show, in three experiments, that when a person looks from one location to an object, attention then shifts towards the face of an individual who has subsequently followed the person's gaze to that same object. That is, contrary to ‘gaze following’, attention instead orients in the opposite direction to observed gaze and towards the gazing face. The magnitude of attentional orienting towards a face that ‘follows’ the participant's gaze is also associated with self-reported autism-like traits. We propose that this gaze leading phenomenon implies the existence of a mechanism in the human social cognitive system for detecting when one's gaze has been followed, in order to establish ‘shared attention’ and maintain the ongoing interaction. PMID:26180071

  2. Intermediate view synthesis for eye-gazing

    NASA Astrophysics Data System (ADS)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. In regard to nonverbal communication, eye contact is one of the most important forms that an individual can use. However, lack of eye contact occurs when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way of eye contact, and the lack of eye gaze can give an unapproachable and unpleasant feeling. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face and the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  3. Children with ASD Can Use Gaze to Map New Words

    ERIC Educational Resources Information Center

    Bean Ellawadi, Allison; McGregor, Karla K.

    2016-01-01

    Background: The conclusion that children with autism spectrum disorders (ASD) do not use eye gaze in the service of word learning is based on one-trial studies. Aims: To determine whether children with ASD come to use gaze in the service of word learning when given multiple trials with highly reliable eye-gaze cues. Methods & Procedures:…

  4. 3D Frame Buffers For Interactive Analysis Of 3D Data

    NASA Astrophysics Data System (ADS)

    Hunter, Gregory M.

    1984-10-01

Two-dimensional data such as photos, X-rays, various types of satellite images, sonar, radar, and seismic plots in many cases must be analyzed using frame buffers for purposes of medical diagnosis, crop estimates, mineral exploration, and so forth. In many cases the same types of sensors used to gather such samples in two dimensions can gather 3D data for even more effective analysis. Just as 2D arrays of data can be analyzed using frame buffers, three-dimensional data can be analyzed using SOLIDS-BUFFER memories. Image processors deal with samples from two-dimensional arrays and are based on frame buffers. The SOLIDS-PROCESSOR system deals with samples from a three-dimensional volume, or solid, and is based on a 3D frame buffer. This paper focuses upon the SOLIDS-BUFFER system, as used in the INSIGHT SOLIDS-PROCESSOR system from Phoenix Data Systems.

  5. 3D Tracking via Shoe Sensing

    PubMed Central

    Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin

    2016-01-01

Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years as people spend most of their time indoors, working at offices and shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classification is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy. PMID:27801839
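    The short-time energy step used for gait extraction can be sketched as below; the window length, threshold handling, and sampling assumptions are illustrative choices rather than the parameters used by the authors.

```python
import numpy as np

def short_time_energy(accel_magnitude, window=50):
    """Short-time energy of the acceleration magnitude signal.

    accel_magnitude: 1-D array |a(t)| sampled from the shoe-mounted sensor.
    window: number of samples per analysis frame (assumed value).
    """
    squared = accel_magnitude ** 2
    kernel = np.ones(window) / window
    return np.convolve(squared, kernel, mode="same")

def detect_steps(energy, threshold):
    """Mark frames where the energy crosses a threshold upward as candidate
    step onsets; consecutive frames above threshold count as one step."""
    above = energy > threshold
    starts = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return starts
```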

  6. 3D geometry-based quantification of colocalizations in multichannel 3D microscopy images of human soft tissue tumors.

    PubMed

    Wörz, Stefan; Sander, Petra; Pfannmöller, Martin; Rieker, Ralf J; Joos, Stefan; Mechtersheimer, Gunhild; Boukamp, Petra; Lichter, Peter; Rohr, Karl

    2010-08-01

    We introduce a new model-based approach for automatic quantification of colocalizations in multichannel 3D microscopy images. The approach uses different 3D parametric intensity models in conjunction with a model fitting scheme to localize and quantify subcellular structures with high accuracy. The central idea is to determine colocalizations between different channels based on the estimated geometry of the subcellular structures as well as to differentiate between different types of colocalizations. A statistical analysis was performed to assess the significance of the determined colocalizations. This approach was used to successfully analyze about 500 three-channel 3D microscopy images of human soft tissue tumors and controls.

  7. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  8. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  9. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  10. Quasi-3D Algorithm in Multi-scale Modeling Framework

    NASA Astrophysics Data System (ADS)

    Jung, J.; Arakawa, A.

    2008-12-01

As discussed in the companion paper by Arakawa and Jung, the Quasi-3D (Q3D) Multi-scale Modeling Framework (MMF) is a 4D estimation/prediction framework that combines a GCM with a 3D anelastic vector vorticity equation model (VVM) applied to a Q3D network of horizontal grid points. This paper presents an outline of the recently revised Q3D algorithm and a highlight of the results obtained by application of the algorithm to an idealized model setting. The Q3D network of grid points consists of two sets of grid-point arrays perpendicular to each other. For a scalar variable, for example, each set consists of three parallel rows of grid points. Principal and supplementary predictions are made on the central and the two adjacent rows, respectively. The supplementary prediction is to allow the principal prediction to be three-dimensional at least to second-order accuracy. To accommodate higher-order accuracy and to make the supplementary predictions formally three-dimensional, a few rows of ghost points are added at each side of the array. Values at these ghost points are diagnostically determined by a combination of statistical estimation and extrapolation. The basic structure of the estimation algorithm is determined in view of the global stability of Q3D advection. The algorithm is calibrated using the statistics of past data at and near the intersections of the two sets of grid-point arrays. Since the CRM in the Q3D MMF extends beyond individual GCM boxes, the CRM can be a GCM by itself. However, it is better to couple the CRM with the GCM because (1) the CRM is a Q3D CRM based on a highly anisotropic network of grid points and (2) coupling with a GCM makes it more straightforward to inherit our experience with the conventional GCMs. In the coupled system we have selected, prediction of thermodynamic variables is almost entirely done by the Q3D CRM with no direct forcing by the GCM. The coupling of the dynamics between the two components is through mutual

  11. 3-D radial gravity gradient inversion

    NASA Astrophysics Data System (ADS)

    Oliveira, Vanderlei C.; Barbosa, Valéria C. F.

    2013-11-01

    We have presented a joint inversion of all gravity-gradient tensor components to estimate the shape of an isolated 3-D geological body located in subsurface. The method assumes the knowledge about the depth to the top and density contrast of the source. The geological body is approximated by an interpretation model formed by an ensemble of vertically juxtaposed 3-D right prisms, each one with known thickness and density contrast. All prisms forming the interpretation model have a polygonal horizontal cross-section that approximates a depth slice of the body. Each polygon defining a horizontal cross-section has the same fixed number of vertices, which are equally spaced from 0° to 360° and have their horizontal locations described in polar coordinates referred to an arbitrary origin inside the polygon. Although the number of vertices forming each polygon is known, the horizontal coordinates of these vertices are unknown. To retrieve a set of juxtaposed depth slices of the body, and consequently, its shape, our method estimates the radii of all vertices and the horizontal Cartesian coordinates of all arbitrary origins defining the geometry of all polygons describing the horizontal cross-sections of the prisms forming the interpretation model. To obtain a stable estimate that fits the observed data, we impose constraints on the shape of the estimated body. These constraints are imposed through the well-known zeroth- and first-order Tikhonov regularizations allowing, for example, the estimate of vertical or dipping bodies. If the data do not have enough in-depth resolution, the proposed inverse method can obtain a set of stable estimates fitting the observed data with different maximum depths. To analyse the data resolution and deal with this possible ambiguity, we plot the ℓ2-norm of the residuals (s) against the estimated volume (vp) produced by a set of estimated sources having different maximum depths. If this s × vp curve (s as a function of vp) shows a well
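    The purely geometric part of this parametrization (turning the estimated vertex radii and origin coordinates of one prism into the Cartesian vertices of its polygonal cross-section) can be sketched as below; the vertex count and numbers are illustrative assumptions, and the Tikhonov-regularized inversion itself is not shown.

```python
import numpy as np

def polygon_vertices(radii, x0, y0):
    """Convert the parameters of one prism (vertex radii plus the horizontal
    coordinates of an arbitrary origin) into Cartesian polygon vertices.

    The angles are fixed and equally spaced over 0-360 degrees, as in the
    interpretation model; only the radii and the origin are estimated."""
    n = len(radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = x0 + radii * np.cos(angles)
    y = y0 + radii * np.sin(angles)
    return np.column_stack((x, y))

# Example: a 16-vertex, roughly circular cross-section of radius 250 m.
vertices = polygon_vertices(np.full(16, 250.0), x0=0.0, y0=0.0)
```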

  12. Lattice radial quantization: 3D Ising

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Fleming, G. T.; Neuberger, H.

    2013-04-01

    Lattice radial quantization is introduced as a nonperturbative method intended to numerically solve Euclidean conformal field theories that can be realized as fixed points of known Lagrangians. As an example, we employ a lattice shaped as a cylinder with a 2D Icosahedral cross-section to discretize dilatations in the 3D Ising model. Using the integer spacing of the anomalous dimensions of the first two descendants (l = 1, 2), we obtain an estimate for η = 0.034 (10). We also observed small deviations from integer spacing for the 3rd descendant, which suggests that a further improvement of our radial lattice action will be required to guarantee conformal symmetry at the Wilson-Fisher fixed point in the continuum limit.

  13. Orienting in Response to Gaze and the Social Use of Gaze among Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rombough, Adrienne; Iarocci, Grace

    2013-01-01

    Potential relations between gaze cueing, social use of gaze, and ability to follow line of sight were examined in children with autism and typically developing peers. Children with autism (mean age = 10 years) demonstrated intact gaze cueing. However, they preferred to follow arrows instead of eyes to infer mental state, and showed decreased…

  14. [3D emulation of epicardium dynamic mapping].

    PubMed

    Lu, Jun; Yang, Cui-Wei; Fang, Zu-Xiang

    2005-03-01

In order to realize epicardium dynamic mapping of the whole atria, 3-D graphics are drawn with OpenGL. Source code excerpts are presented in the paper to explain how to produce, read, and manipulate 3-D model data.
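    The paper's own OpenGL sources are not reproduced here; the fragment below is only a hedged illustration, in Python with NumPy, of producing and manipulating simple 3-D model data (a ring of contour vertices and a rotation about the vertical axis) of the kind such mapping code would hand to OpenGL. The function names and dimensions are assumptions.

```python
import numpy as np

def make_contour_ring(n_points=64, radius=30.0, z=0.0):
    """Produce one ring of 3-D vertices approximating an atrial contour."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.column_stack((radius * np.cos(angles),
                            radius * np.sin(angles),
                            np.full(n_points, z)))

def rotate_about_z(vertices, degrees):
    """Rotate an (n, 3) vertex array about the z axis, as one might before
    uploading the data to an OpenGL vertex buffer."""
    t = np.radians(degrees)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    return vertices @ rot.T

ring = rotate_about_z(make_contour_ring(), degrees=15.0)
```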

  15. An interactive multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information, yielding a realistic representation of 3D objects and simplifying our understanding of the complexity of 3D objects and the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with the capability of real-time user interaction. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested based upon multiple projectors and a horizontal optical anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  16. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  17. Laser Based 3D Volumetric Display System

    DTIC Science & Technology

    1993-03-01

[Garbled report-documentation text; OCR residue removed.] Recoverable details: the system produces laser-generated 3D volumetric images on a rotating double helix, with computer-controlled 3D displays intended for group viewing with the naked eye; listed authors include P. Soltan, J. Trias, W. Robinson, and W. Dahlke. A cited reference is "A Real Time Autostereoscopic Multiplanar 3D Display System" by Rodney Don Williams and Felix Garcia, Jr.

  18. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

Wang, Zheng

    2012-07-01

A true 3D image is a geo-referenced image. Besides having its radiometric information, it also has true 3D ground coordinates (XYZ) for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and open ground, but also for all other visible objects, such as visible building walls and windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will not only be able to read a building's location (XY), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on the ways in which geospatial information is represented, true 3D ground modeling is performed, and real-world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make to geospatial information fields. At the end, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  19. 3D-DXA: Assessing the Femoral Shape, the Trabecular Macrostructure and the Cortex in 3D from DXA images.

    PubMed

    Humbert, Ludovic; Martelli, Yves; Fonolla, Roger; Steghofer, Martin; Di Gregorio, Silvana; Malouf, Jorge; Romera, Jordi; Barquero, Luis Miguel Del Rio

    2017-01-01

The 3D distribution of the cortical and trabecular bone mass in the proximal femur is a critical component in determining fracture resistance that is not taken into account in routine clinical Dual-energy X-ray Absorptiometry (DXA) examinations. In this paper, a statistical shape and appearance model together with a 3D-2D registration approach are used to model the femoral shape and bone density distribution in 3D from an anteroposterior DXA projection. A model-based algorithm is subsequently used to segment the cortex and build a 3D map of the cortical thickness and density. Measurements characterising the geometry and density distribution were computed for various regions of interest in both cortical and trabecular compartments. Models and measurements provided by the "3D-DXA" software algorithm were evaluated using a database of 157 study subjects, by comparing 3D-DXA analyses (using DXA scanners from three manufacturers) with measurements performed by Quantitative Computed Tomography (QCT). The mean point-to-surface distance between 3D-DXA and QCT femoral shapes was 0.93 mm. The mean absolute errors between cortical thickness and density estimates measured by 3D-DXA and QCT were 0.33 mm and 72 mg/cm³. Correlation coefficients (R) between the 3D-DXA and QCT measurements were 0.86, 0.93, and 0.95 for the volumetric bone mineral density at the trabecular, cortical, and integral compartments respectively, and 0.91 for the mean cortical thickness. 3D-DXA provides a detailed analysis of the proximal femur, including a separate assessment of the cortical layer and trabecular macrostructure, which could potentially improve osteoporosis management while maintaining DXA as the standard routine modality.

  20. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  1. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  2. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  3. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  4. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.

  5. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  6. Stabilization of gaze during circular locomotion in light. I. Compensatory head and eye nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. A rhesus and cynomolgus monkey were trained to run around the perimeter of a circular platform in light. We call this "circular locomotion" because forward motion had an angular component. Head and body velocity in space were recorded with angular rate sensors and eye movements with electrooculography (EOG). From these measurements we derived signals related to the angular velocity of the eyes in the head (Eh), of the head on the body (Hb), of gaze on the body (Gb), of the body in space (Bs), of gaze in space (Gs), and of the gain of gaze (Gb/Bs). 2. The monkeys had continuous compensatory nystagmus of the head and eyes while running, which stabilized Gs during the slow phases. The eyes established and maintained compensatory gaze velocities at the beginning and end of the slow phases. The head contributed to gaze velocity during the middle of the slow phases. Slow phase Gb was as high as 250 degrees/s, and targets were fixed for gaze angles as large as 90-140 degrees. 3. Properties of the visual surround affected both the gain and strategy of gaze compensation in the one monkey tested. Gains of Eh ranged from 0.3 to 1.1 during compensatory gaze nystagmus. Gains of Hb varied around 0.3 (0.2-0.7), building to a maximum as Eh dropped while running past sectors of interest. Consistent with predictions, gaze gains varied from below to above unity, when translational and angular body movements with regard to the target were in opposite or the same directions, respectively. 4. Gaze moved in saccadic shifts in the direction of running during quick phases. Most head quick phases were small, and at times the head only paused during an eye quick phase. Eye quick phases were larger, ranging up to 60 degrees. This is larger than quick phases during passive rotation or saccades made with the head fixed. 5. These data indicate that head and eye nystagmus are natural phenomena that support gaze compensation during locomotion. Despite differential utilization of the head and

  7. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy including morphometric parameter estimation is then possible via computer-vision 3D rendering, interaction and navigation capabilities.
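    As a rough sketch (not the authors' mono- and multi-resolution filtering scheme), the basic morphological operators named above can be chained with scikit-image; the structuring-element radii and the way the marker is derived are illustrative assumptions.

```python
from skimage.morphology import ball, closing, erosion, reconstruction

def enhance_vessels(volume):
    """Basic morphological stage for opacified vessels in a CT venography
    volume: a grey-level closing to bridge small gaps, then a grey-level
    reconstruction by dilation (geodesic dilation iterated to stability)
    from an eroded marker, which suppresses small bright structures while
    preserving the connected vascular tree."""
    closed = closing(volume, ball(2))     # assumed structuring-element radius
    marker = erosion(closed, ball(3))     # marker must lie below the mask image
    return reconstruction(marker, closed, method="dilation")
```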

  8. Polarimetric 3D integral imaging in photon-starved conditions.

    PubMed

    Carnicer, Artur; Javidi, Bahram

    2015-03-09

    We develop a method for obtaining 3D polarimetric integral images from elemental images recorded in low light illumination conditions. Since photon-counting images are very sparse, calculation of the Stokes parameters and the degree of polarization should be handled carefully. In our approach, polarimetric 3D integral images are generated using the Maximum Likelihood Estimation and subsequently reconstructed by means of a Total Variation Denoising filter. In this way, polarimetric results are comparable to those obtained in conventional illumination conditions. We also show that polarimetric information retrieved from photon starved images can be used in 3D object recognition problems. To the best of our knowledge, this is the first report on 3D polarimetric photon counting integral imaging.
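    A hedged sketch of the polarimetric quantities involved is given below: linear Stokes parameters and the degree of linear polarization computed from four polarizer-angle intensity images, followed by a total-variation denoising step (scikit-image's Chambolle filter is used here as a stand-in; the four-angle scheme and the weight are assumptions of this illustration, not the paper's Maximum Likelihood pipeline).

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensity images."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2, eps=1e-9):
    """Degree of linear polarization; eps guards photon-starved pixels where
    the total-intensity estimate is close to zero."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)

def denoise_polarimetric_map(dolp, weight=0.1):
    """Total-variation denoising of the polarimetric map (assumed weight)."""
    return denoise_tv_chambolle(dolp, weight=weight)
```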

  9. Preliminary investigations on 3D PIC simulation of DPHC structure using NEPTUNE3D code

    NASA Astrophysics Data System (ADS)

    Zhao, Hailong; Dong, Ye; Zhou, Haijing; Zou, Wenkang; Wang, Qiang

    2016-10-01

A cubic region (34 cm × 34 cm × 18 cm) including the double post-hole convolute (DPHC) structure was chosen to perform a series of fully 3D PIC simulations using the NEPTUNE3D code; massive data (~200 GB) could be acquired and processed in less than 5 hours. Cold-chamber tests were performed in which only cathode electron emission was considered, without temperature rise or ion emission; current loss efficiency was estimated by comparing output magnetic field profiles with and without electron emission. PIC simulation results showed three stages of the current transformation process with electron emission in the DPHC structure; the maximum (~20%) current loss was 437 kA at 15 ns, while only 0.46%–0.48% was lost when the driving current reached its peak. The DPHC structure proved to perform valuable functions during the energy transformation process in the PTS facility, and NEPTUNE3D provides tools to explore this sophisticated physics. Project supported by the National Natural Science Foundation of China, Grant Nos. 11571293 and 11505172.

  10. Gaze Patterns and Audiovisual Speech Enhancement

    ERIC Educational Resources Information Center

    Yi, Astrid; Wong, Willy; Eizenman, Moshe

    2013-01-01

    Purpose: In this study, the authors sought to quantify the relationships between speech intelligibility (perception) and gaze patterns under different auditory-visual conditions. Method: Eleven subjects listened to low-context sentences spoken by a single talker while viewing the face of one or more talkers on a computer display. Subjects either…

  11. Focusing the Gaze: Teacher Interrogation of Practice

    ERIC Educational Resources Information Center

    Nayler, Jennifer M.; Keddie, Amanda

    2007-01-01

    Within an Australian context of diminishing opportunities for equitable educational outcomes, this paper calls for teacher engagement in a "politics of resistance" through their focused gaze in relation to the ways in which they are positioned in their everyday practice. Our belief is that the resultant knowledge might equip teachers to…

  12. The Spectator's Dancing Gaze in Moulin Rouge!

    ERIC Educational Resources Information Center

    Parfitt, Clare

    2005-01-01

    This paper examines the ways in which the choreography of "Moulin Rouge!" offers a range of gazes to the audience through the perspectives of both the characters and the camera itself. Various gendered and imperial/colonial power relationships that occur within the narrative are heightened in the choreography by referring to discourses inscribed…

  13. Helping Children Think: Gaze Aversion and Teaching

    ERIC Educational Resources Information Center

    Phelps, Fiona G.; Doherty-Sneddon, Gwyneth; Warnock, Hannah

    2006-01-01

    Looking away from an interlocutor's face during demanding cognitive activity can help adults answer challenging arithmetic and verbal-reasoning questions (Glenberg, Schroeder, & Robertson, 1998). However, such "gaze aversion" (GA) is poorly applied by 5-year-old school children (Doherty-Sneddon, Bruce, Bonner, Longbotham, & Doyle, 2002). In…

  14. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load

    PubMed Central

    Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925

  15. Affective Priming by Eye Gaze Stimuli: Behavioral and Electrophysiological Evidence

    PubMed Central

    Chen, Tingji; Peltola, Mikko J.; Ranta, Lotta J.; Hietanen, Jari K.

    2016-01-01

    The present study employed the affective priming paradigm and measurements of event-related potentials (ERPs) to investigate implicit affective reactions elicited by gaze stimuli. Participants categorized positive and negative words primed by direct gaze, averted gaze and closed eyes. The behavioral response time (RT) results indicated that direct gaze implicitly elicited more positive affective reactions than did closed eyes. Analyses of the ERP responses to the target words revealed a priming effect on the N170 and an interaction on late positive potential (LPP) responses, and congruently with the behavioral results, suggested that, compared to closed eyes, direct gaze was affectively more congruent with positive words and more incongruent with negative words. The priming effect on the N170 response indicated that gaze stimuli influenced the subsequent affective word processing at an early stage of information processing. In conclusion, the present behavioral and electrophysiological evidence suggests that direct gaze automatically activates more positive affective reactions than closed eyes. PMID:28003803

  16. Owners' direct gazes increase dogs' attention-getting behaviors.

    PubMed

Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi

    2016-04-01

This study examined whether dogs gain information about humans' attention via their gazes and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and of looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in the duration of pawing or the likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that act as distant signals and do not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, and this sensitivity may act as an attachment signal to humans and may contribute to close relationships between humans and dogs.

  17. Audience gaze while appreciating a multipart musical performance.

    PubMed

    Kawase, Satoshi; Obata, Satoshi

    2016-11-01

Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence of the dominance of the musical part in auditory attention when listening to multipart music that contains different melody lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions that play a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of gazes among audiences, while the spotlight effect of gaze was limited to just after the singers' gaze shifts.

  18. Multigrid calculations of 3-D turbulent viscous flows

    NASA Technical Reports Server (NTRS)

    Yokota, Jeffrey W.

    1989-01-01

    Convergence properties of a multigrid algorithm, developed to calculate compressible viscous flows, are analyzed by a vector sequence eigenvalue estimate. The full 3-D Reynolds-averaged Navier-Stokes equations are integrated by an implicit multigrid scheme while a k-epsilon turbulence model is solved, uncoupled from the flow equations. Estimates of the eigenvalue structure for both single and multigrid calculations are compared in an attempt to analyze the process as well as the results of the multigrid technique. The flow through an annular turbine is used to illustrate the scheme's ability to calculate complex 3-D flows.
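    A vector-sequence eigenvalue estimate of the kind referred to above can be sketched as follows, under the assumption that the asymptotic convergence of the iteration is governed by a single dominant eigenvalue; this is an illustrative stand-in, not the author's estimator.

```python
import numpy as np

def dominant_eigenvalue_estimate(residuals):
    """Estimate the dominant eigenvalue (asymptotic convergence factor) of an
    iterative scheme from a sequence of residual vectors r_k.

    For a linear iteration, r_{k+1} - r_k becomes aligned with the dominant
    eigenvector, so the norm ratio of successive differences tends to
    |lambda_max|; at least three residual vectors are required."""
    diffs = [residuals[k + 1] - residuals[k] for k in range(len(residuals) - 1)]
    ratios = [np.linalg.norm(diffs[k + 1]) / np.linalg.norm(diffs[k])
              for k in range(len(diffs) - 1)]
    return ratios[-1]
```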

  19. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing was evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  20. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  1. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii controller and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-values of a 3D volume, or depth information computed from a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.
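    A minimal sketch of the standard pose-estimation step described above, using OpenCV's solvePnP on four non-coplanar point correspondences, is given below; the LED coordinates, blob centroids, and camera intrinsics are placeholder assumptions, and the controller's IR-blob interface is not shown.

```python
import numpy as np
import cv2

# Known 3-D positions (mm) of the four non-coplanar infrared LEDs in the
# device frame; these numbers are illustrative, not the actual rig geometry.
led_model = np.array([[0.0,   0.0,  0.0],
                      [80.0,  0.0,  0.0],
                      [0.0,  60.0,  0.0],
                      [40.0, 30.0, 25.0]], dtype=np.float64)

# 2-D blob centroids (pixels) reported by the IR camera for the same LEDs.
led_image = np.array([[512.0, 384.0],
                      [620.0, 380.0],
                      [515.0, 300.0],
                      [565.0, 345.0]], dtype=np.float64)

# Assumed pinhole intrinsics of the tracking camera.
camera_matrix = np.array([[1280.0,    0.0, 512.0],
                          [   0.0, 1280.0, 384.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)

# P3P-family solver: uses three correspondences and the fourth to select the
# physically consistent solution, suiting exactly four non-coplanar LEDs.
ok, rvec, tvec = cv2.solvePnP(led_model, led_image, camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_AP3P)
rotation, _ = cv2.Rodrigues(rvec)   # device orientation as a 3x3 matrix
```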

  2. Mini 3D for shallow gas reconnaissance

    SciTech Connect

    Vallieres, T. des; Enns, D.; Kuehn, H.; Parron, D.; Lafet, Y.; Van Hulle, D.

    1996-12-31

The Mini 3D project was undertaken by TOTAL and ELF with the support of CEPM (Comité d'Études Pétrolières et Marines) to define an economical method of obtaining 3D seismic HR data for shallow gas assessment. An experimental 3D survey was carried out with classical site survey techniques in the North Sea. From these data, 19 simulations were produced to compare different acquisition geometries ranging from dual, 600 m long cables to a single receiver. Results show that short offset, low fold and very simple streamer positioning are sufficient to give a reliable 3D image of gas-charged bodies. The 3D data allow a much more accurate risk delineation than 2D HR data. Moreover, on financial grounds, Mini-3D is comparable in cost to a classical HR 2D survey. In view of these results, such HR 3D should now be the standard for shallow gas surveying.

  3. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
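    The algebraic reconstruction step itself (independent of the refraction correction) is a Kaczmarz-type row-action update; a minimal dense-matrix sketch, assuming the ray weights have already been traced along the (possibly bent) raylines, is:

```python
import numpy as np

def art_sweep(A, b, x, relaxation=0.25):
    """One sweep of the algebraic reconstruction technique (Kaczmarz).

    A: (n_rays, n_voxels) ray-weight matrix built from the raylines;
    b: measured projections; x: current dose estimate.
    The relaxation factor is an assumed value."""
    for i in range(A.shape[0]):
        row = A[i]
        denom = row @ row
        if denom == 0.0:
            continue  # ray misses the reconstruction volume
        x = x + relaxation * (b[i] - row @ x) / denom * row
    return x
```

    In ART-rc, the rows of such a system matrix would be built from the refraction-corrected raylines, so rays bent out of their nominal plane still contribute to the 3D volume.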

  4. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  5. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space
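
    One way to picture the linked fault data structure described above is sketched below; the field names and neighbor links are an illustrative interpretation, not the author's code. Each fault sample stores the indices of exactly one image sample plus links to its neighbors on the surface.

      # Illustrative sketch of a linked fault-sample structure: one sample per image
      # sample, with explicit neighbor links instead of a triangle/quad mesh.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class FaultSample:
          i1: int                                   # indices of the corresponding image sample
          i2: int
          i3: int
          likelihood: float = 0.0                   # fault likelihood at this sample
          strike: float = 0.0                       # local fault orientation (degrees)
          dip: float = 0.0
          slip: tuple = (0.0, 0.0, 0.0)             # fault slip vector, estimated later
          above: Optional["FaultSample"] = None     # neighbor links along the surface
          below: Optional["FaultSample"] = None
          left: Optional["FaultSample"] = None
          right: Optional["FaultSample"] = None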

  6. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies, because at every scale in nature, ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could allow complex ecosystems to be built easily, mimicking the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro-landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technologies are currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through this talk the audience will be able to sense their significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  7. 3D change detection - Approaches and applications

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter

    2016-12-01

    Due to the unprecedented technology development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based and Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, have become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics to facilitate more applications and provide more accurate results. The state-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of the traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academia and industry researchers who seek solutions for detecting and analyzing 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, among others. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on algorithmic aspects of 3D CD.
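
    As a concrete illustration of the geometric-comparison type of 3D CD, the sketch below differences two co-registered DSMs and reports a change mask and the signed volumetric change; the array names, threshold and cell size are assumptions chosen for illustration only.

      # Minimal geometric-comparison sketch: difference two co-registered DSM epochs
      # and flag cells whose height change exceeds a noise threshold.
      import numpy as np

      def dsm_change(dsm_t1, dsm_t2, threshold=2.0, cell_area=1.0):
          dh = dsm_t2 - dsm_t1                               # per-cell height difference (m)
          changed = np.abs(dh) > threshold                   # ignore differences below the noise level
          volume = float(np.sum(dh[changed]) * cell_area)    # signed volumetric change (m^3)
          return changed, volume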

  8. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations describing reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in the groundwater head distribution. This report presents a set of tutorial problems designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported into GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  9. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest in 3D shown by the movie industry has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that employs simultaneously the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
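
    The sketch below illustrates the multiple-footprint idea on rectified grayscale images: each pixel is matched with several window sizes, one disparity candidate is kept per footprint, and the candidates are fused robustly (here with a median). It is a simplified illustration under those assumptions, not the paper's algorithm.

      # Multiple-footprint matching sketch: one SAD-based disparity candidate per
      # window size, fused by a robust statistic.
      import numpy as np

      def disparity_candidates(left, right, x, y, max_disp, windows=(3, 7, 15)):
          h, w = left.shape
          candidates = []
          for win in windows:
              r = win // 2
              if y - r < 0 or y + r >= h or x - r < 0 or x + r >= w:
                  continue                                   # footprint falls outside the image
              patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
              best_d, best_cost = 0, np.inf
              for d in range(min(max_disp, x - r) + 1):
                  cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
                  cost = np.abs(patch - cand).sum()          # SAD cost for this footprint
                  if cost < best_cost:
                      best_cost, best_d = cost, d
              candidates.append(best_d)                      # one candidate per footprint
          return candidates

      def fuse(candidates):
          return float(np.median(candidates)) if candidates else np.nan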

  10. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding- (HEVC-) based 3D video coding (3D-HEVC), developed by the joint collaborative team on 3D video coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to utilize the correlation between the depth map and motion activity to identify prediction regions where variable size CU and DE are needed; variable size CU and DE are enabled only in these regions. Experimental results show that the proposed algorithm saves about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
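
    A hedged sketch of the kind of decision rule implied above is given below: the co-located depth block and local motion activity determine whether CU splitting and DE are enabled for a region. The thresholds, names and exact criteria are illustrative assumptions, not the paper's test-model code.

      # Fast-mode-decision sketch: skip small-CU evaluation and disparity estimation
      # where the co-located depth is flat and motion activity is low.
      import numpy as np

      def enable_split_and_de(depth_block, motion_vectors, depth_var_thr=25.0, mv_thr=4.0):
          """Return (allow_cu_split, allow_de) for one coding unit."""
          depth_var = float(np.var(depth_block))                            # flat depth -> homogeneous region
          mv_mag = float(np.mean(np.linalg.norm(motion_vectors, axis=-1)))  # low motion -> stable prediction
          allow_cu_split = depth_var > depth_var_thr or mv_mag > mv_thr
          allow_de = depth_var > depth_var_thr                              # DE pays off mainly where depth varies
          return allow_cu_split, allow_de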

  11. Patient specific 3D printed phantom for IMRT quality assurance

    NASA Astrophysics Data System (ADS)

    Ehler, Eric D.; Barney, Brett M.; Higgins, Patrick D.; Dusenbery, Kathryn E.

    2014-10-01

    The purpose of this study was to test the feasibility of a patient specific phantom for patient specific dosimetric verification. Using the head and neck region of an anthropomorphic phantom as a substitute for an actual patient, a soft-tissue equivalent model was constructed with the use of a 3D printer. Calculated and measured dose in the anthropomorphic phantom and the 3D printed phantom was compared for a parallel-opposed head and neck field geometry to establish tissue equivalence. A nine-field IMRT plan was constructed and dose verification measurements were performed for the 3D printed phantom as well as traditional standard phantoms. The maximum difference in calculated dose was 1.8% for the parallel-opposed configuration. Passing rates of various dosimetric parameters were compared for the IMRT plan measurements; the 3D printed phantom results showed greater disagreement at superficial depths than other methods. A custom phantom was created using a 3D printer. It was determined that the use of patient specific phantoms to perform dosimetric verification and estimate the dose in the patient is feasible. In addition, end-to-end testing on a per-patient basis was possible with the 3D printed phantom. Further refinement of the phantom construction process is needed for routine use.

  12. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
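
    The sketch below shows one way an initial EOP could be assembled from smartphone sensor readings: a rotation matrix built from (roll, pitch, yaw) and a translation taken from the GNSS fix. The axis convention and names are assumptions for illustration; a real pipeline must match the phone's sensor frame to the mapping frame before the constrained bundle adjustment refines these values.

      # Assembling an approximate exterior orientation from attitude and position.
      import numpy as np

      def rotation_from_rpy(roll, pitch, yaw):
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          return Rz @ Ry @ Rx                              # Z-Y-X composition (assumed convention)

      def initial_eop(roll, pitch, yaw, easting, northing, height):
          R = rotation_from_rpy(roll, pitch, yaw)
          t = np.array([easting, northing, height])
          return R, t                                       # refined later by the constrained bundle adjustment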

  13. Patient specific 3D printed phantom for IMRT quality assurance.

    PubMed

    Ehler, Eric D; Barney, Brett M; Higgins, Patrick D; Dusenbery, Kathryn E

    2014-10-07

    The purpose of this study was to test the feasibility of a patient specific phantom for patient specific dosimetric verification. Using the head and neck region of an anthropomorphic phantom as a substitute for an actual patient, a soft-tissue equivalent model was constructed with the use of a 3D printer. Calculated and measured dose in the anthropomorphic phantom and the 3D printed phantom was compared for a parallel-opposed head and neck field geometry to establish tissue equivalence. A nine-field IMRT plan was constructed and dose verification measurements were performed for the 3D printed phantom as well as traditional standard phantoms. The maximum difference in calculated dose was 1.8% for the parallel-opposed configuration. Passing rates of various dosimetric parameters were compared for the IMRT plan measurements; the 3D printed phantom results showed greater disagreement at superficial depths than other methods. A custom phantom was created using a 3D printer. It was determined that the use of patient specific phantoms to perform dosimetric verification and estimate the dose in the patient is feasible. In addition, end-to-end testing on a per-patient basis was possible with the 3D printed phantom. Further refinement of the phantom construction process is needed for routine use.

  14. 3D kinematics using dual quaternions: theory and applications in neuroscience

    PubMed Central

    Leclercq, Guillaume; Lefèvre, Philippe; Blohm, Gunnar

    2013-01-01

    In behavioral neuroscience, many experiments are developed in one or two spatial dimensions, but when scientists tackle problems in three dimensions (3D), they often face problems or new challenges. Results obtained for lower dimensions are not always extendable to 3D. In motor planning of eye, gaze or arm movements, or in sensorimotor transformation problems, the 3D kinematics of external objects (stimuli) or internal ones (body parts) must often be considered: how can the 3D position and orientation of these objects be described and linked together? We describe how dual quaternions provide a convenient way to describe 3D kinematics for position only (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions or combinations of these. We also derive expressions for the velocities of points and lines as well as the transformation velocities. Then, we apply these tools to a motor planning task for manual tracking and to the modeling of forward and inverse kinematics of a seven-dof three-link arm to show the value of dual quaternions as a tool for building models for these kinds of applications. PMID:23443667
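
    A didactic sketch of the dual-quaternion point transformation described above is given below, carrying rotation and translation in a single object; it is an illustration of the algebra, not the authors' toolbox, and the example values are illustrative.

      # Dual-quaternion rigid transform of a 3D point: q_real encodes the rotation,
      # q_dual = 0.5 * t * q_real encodes the translation.
      import numpy as np

      def qmul(a, b):
          """Hamilton product of quaternions (w, x, y, z)."""
          w1, x1, y1, z1 = a
          w2, x2, y2, z2 = b
          return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                           w1*x2 + x1*w2 + y1*z2 - z1*y2,
                           w1*y2 - x1*z2 + y1*w2 + z1*x2,
                           w1*z2 + x1*y2 - y1*x2 + z1*w2])

      def qconj(q):
          return np.array([q[0], -q[1], -q[2], -q[3]])

      def dq_from_rt(q_rot, t):
          """Dual quaternion (real, dual) for rotation q_rot followed by translation t."""
          return q_rot, 0.5 * qmul(np.array([0.0, *t]), q_rot)

      def dq_transform_point(dq, p):
          """Apply the rigid transform encoded by dq to a 3D point p."""
          qr, qd = dq
          pr, pd = np.array([1.0, 0, 0, 0]), np.array([0.0, *p])   # point as 1 + eps*p
          ar, ad = qmul(qr, pr), qmul(qr, pd) + qmul(qd, pr)       # dq * point
          cr, cd = qconj(qr), -qconj(qd)                           # combined conjugate
          out_dual = qmul(ar, cd) + qmul(ad, cr)                   # dual part of the sandwich product
          return out_dual[1:]                                      # vector part holds R p + t

      # Example: rotate (1, 0, 0) by 90 degrees about z, then translate by (1, 0, 0) -> ~(1, 1, 0)
      q90z = np.array([np.cos(np.pi/4), 0, 0, np.sin(np.pi/4)])
      print(dq_transform_point(dq_from_rt(q90z, [1, 0, 0]), [1, 0, 0]))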

  15. Working memory load disrupts gaze-cued orienting of attention

    PubMed Central

    Bobak, Anna K.; Langton, Stephen R. H.

    2015-01-01

    A large body of work has shown that a perceived gaze shift produces a shift in a viewer’s spatial attention in the direction of the seen gaze. A controversial issue surrounds the extent to which this gaze-cued orienting effect is stimulus-driven, or is under a degree of top-down control. In two experiments we show that the gaze-cued orienting effect is disrupted by a concurrent task that has been shown to place high demands on executive resources: random number generation (RNG). In Experiment 1 participants were faster to locate targets that appeared in gaze-cued locations relative to targets that appeared in locations opposite to those indicated by the gaze shifts, while simultaneously and continuously reciting aloud the digits 1–9 in order; however, this gaze-cueing effect was eliminated when participants continuously recited the same digits in a random order. RNG was also found to interfere with gaze-cued orienting in Experiment 2 where participants performed a speeded letter identification response. Together, these data suggest that gaze-cued orienting is actually under top-down control. We argue that top-down signals sustain a goal to shift attention in response to gazes, such that orienting ordinarily occurs when they are perceived; however, the goal cannot always be maintained when concurrent, multiple, competing goals are simultaneously active in working memory. PMID:26379587

  16. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    PubMed

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  17. 3D measurement for rapid prototyping

    NASA Astrophysics Data System (ADS)

    Albrecht, Peter; Lilienblum, Tilo; Sommerkorn, Gerd; Michaelis, Bernd

    1996-08-01

    Optical 3-D measurement is an interesting approach for rapid prototyping. On the one hand it is necessary to obtain the 3-D data of an object, and on the other hand it is necessary to check the manufactured object (quality checking). Optical 3-D measurement can realize both. Classical 3-D measurement procedures based on photogrammetry cause systematic errors at strongly curved surfaces or steps in surfaces. One possibility to reduce these errors is to calculate the 3-D coordinates from several successively taken images. Thus it is possible to obtain higher spatial resolution and to reduce the systematic errors at 'problem surfaces.' Another possibility is to process the measurement values with neural networks. A modified associative memory smoothes and corrects the calculated 3-D coordinates using a priori knowledge about the measurement object.

  18. 3D Stratigraphic Modeling of Central Aachen

    NASA Astrophysics Data System (ADS)

    Dong, M.; Neukum, C.; Azzam, R.; Hu, H.

    2010-05-01

    x-, y-, and z-coordinates, down-hole depth, and stratigraphic information are available. 4) We grouped stratigraphic units into four main layers based on an analysis of the geological setting of the modeling area. The stratigraphic units extend from the Quaternary, Cretaceous and Carboniferous to the Devonian. In order to facilitate the determination of each unit's boundaries, a series of standard codes was used to integrate data with different descriptive attributes. 5) The Quaternary and Cretaceous units are characterized by subhorizontal layers. Kriging interpolation was applied to the borehole data in order to estimate the data distribution and surface relief of these layers. 6) The Carboniferous and Devonian units are folded. The lack of software support for simulating folds, together with the shallow depth of the boreholes and cross sections, constrained the determination of geological boundaries. A strategy of digitizing the fold surfaces from cross sections and establishing them as inclined strata was followed. The modeling was subdivided into two steps. The first step consisted of importing data into the modeling software. The second step involved the construction of subhorizontal layers and folds, which were constrained by geological maps, cross sections and outcrops. The construction of the 3D stratigraphic model is of high relevance to further simulation and application, such as 1) lithological modeling; 2) answering simple questions such as "At which unit is the water table?" and calculating the volume of groundwater storage during assessment of aquifer vulnerability to contamination; and 3) assigning geotechnical properties to grid cells and providing them for user-required applications. Acknowledgements: Borehole data were kindly provided by the Municipality of Aachen. References: 1. Janet T. Watt, Jonathan M.G. Glen, David A. John and David A. Ponce (2007) Three-dimensional geologic model of the northern Nevada rift and the Beowawe geothermal system, north-central Nevada. Geosphere, v. 3
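
    A minimal ordinary-kriging sketch of the kind of interpolation used for the subhorizontal units is shown below; the exponential variogram and its parameters are assumptions chosen for illustration, not the study's settings.

      # Ordinary kriging of a layer surface from scattered borehole picks.
      import numpy as np

      def variogram(h, sill=1.0, rng=500.0, nugget=0.0):
          return nugget + sill * (1.0 - np.exp(-h / rng))

      def ordinary_kriging(xy, z, xy0, **vparams):
          """xy: (n, 2) borehole locations, z: (n,) picked layer elevations, xy0: (2,) target point."""
          xy, z, xy0 = np.asarray(xy, float), np.asarray(z, float), np.asarray(xy0, float)
          n = len(z)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = variogram(d, **vparams)                            # semivariogram matrix
          A[-1, -1] = 0.0                                                # Lagrange-multiplier row/column
          b = np.ones(n + 1)
          b[:n] = variogram(np.linalg.norm(xy - xy0, axis=-1), **vparams)
          w = np.linalg.solve(A, b)[:n]                                  # kriging weights (sum to 1)
          return float(w @ z)                                            # interpolated layer elevation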

  19. Semiautomatic approaches to account for 3-D distortion of the electric field from local, near-surface structures in 3-D resistivity inversions of 3-D regional magnetotelluric data

    USGS Publications Warehouse

    Rodriguez, Brian D.

    2017-03-31

    This report summarizes the results of three-dimensional (3-D) resistivity inversion simulations that were performed to account for local 3-D distortion of the electric field in the presence of 3-D regional structure, without any a priori information on the actual 3-D distribution of the known subsurface geology. The methodology used a 3-D geologic model to create a 3-D resistivity forward (“known”) model that depicted the subsurface resistivity structure expected for the input geologic configuration. The calculated magnetotelluric response of the modeled resistivity structure was assumed to represent observed magnetotelluric data and was subsequently used as input into a 3-D resistivity inverse model that used an iterative 3-D algorithm to estimate 3-D distortions without any a priori geologic information. A publicly available inversion code, WSINV3DMT, was used for all of the simulated inversions, initially using the default parameters, and subsequently using adjusted inversion parameters. A semiautomatic approach of accounting for the static shift using various selections of the highest frequencies and initial models was also tested. The resulting 3-D resistivity inversion simulation was compared to the “known” model and the results evaluated. The inversion approach that produced the lowest misfit to the various local 3-D distortions was an inversion that employed an initial model volume resistivity that was nearest to the maximum resistivities in the near-surface layer.

  20. Photorefractive Polymers for Updateable 3D Displays

    DTIC Science & Technology

    2010-02-24

    Final Performance Report, dates covered 01-01-2007 to 11-30-2009. During the tenure of this project a large area updateable 3D color display has been developed for the first time using a new co-polymer... photorefractive polymers have been demonstrated. Moreover, a 6 inch × 6 inch sample was fabricated, demonstrating the feasibility of making large area 3D

  1. 3D Microperfusion Model of ADPKD

    DTIC Science & Technology

    2015-10-01

    Stratasys 3D printer. PDMS was cast in the negative molds in order to create permanent biocompatible plastic masters (SmoothCast 310). All goals of task... Award number: W81XWH-14-1-0304; principal investigator: David L. Kaplan; annual report, dates covered 15 Sep 2014 - 14 Sep 2015.

  2. 3D carotid plaque MR Imaging

    PubMed Central

    Parker, Dennis L.

    2015-01-01

    SYNOPSIS There has been significant progress made in 3D carotid plaque magnetic resonance imaging techniques in recent years. 3D plaque imaging clearly represents the future in clinical use. With effective flow suppression techniques, choices of different contrast weighting acquisitions, and time-efficient imaging approaches, 3D plaque imaging offers flexible imaging plane and view angle analysis, large coverage, multi-vascular beds capability, and even can be used in fast screening. PMID:26610656

  3. 3-D Extensions for Trustworthy Systems

    DTIC Science & Technology

    2011-01-01

    3-D Extensions for Trustworthy Systems (Invited Paper). Ted Huffmire, Timothy Levin, Cynthia Irvine, Ryan Kastner and Timothy Sherwood... address these problems, we propose an approach to trustworthy system development based on 3-D integration, an emerging chip fabrication technique in... which two or more integrated circuit dies are fabricated individually and then combined into a single stack using vertical conductive posts. With 3-D

  4. Hardware Trust Implications of 3-D Integration

    DTIC Science & Technology

    2010-12-01

    enhancing a commodity processor with a variety of security functions. This paper examines the 3-D design approach and provides an analysis concluding... of key components. The question addressed by this paper is, "Can a 3-D control plane provide useful secure services when it is conjoined with an... untrustworthy computation plane?" Design-level investigation of this question yields a definite yes. This paper explores 3-D applications and their

  5. Digital holography and 3-D imaging.

    PubMed

    Banerjee, Partha; Barbastathis, George; Kim, Myung; Kukhtarev, Nickolai

    2011-03-01

    This feature issue on Digital Holography and 3-D Imaging comprises 15 papers on digital holographic techniques and applications, computer-generated holography and encryption techniques, and 3-D display. It is hoped that future work in the area leads to innovative applications of digital holography and 3-D imaging to biology and sensing, and to the development of novel nonlinear dynamic digital holographic techniques.

  6. Gis-Based Smart Cartography Using 3d Modeling

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Tassetti, A. N.

    2013-08-01

    3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography in current use is often not suitable for GIS because it is not geometrically and topologically structured correctly. The research aim is to structure and organize a numeric cartography in 3D for GIS and turn it into CityGML standardized features. The work is framed around a first phase of methodological analysis aimed at identifying which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifications, the translation into formal content was investigated, using proprietary interchange software (SketchUp), to support guideline implementations for generating a GIS3D structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality according to the aforementioned standards, and edited when and where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The GIS3D structure has been managed in a GIS environment to run further spatial analysis and energy performance estimates that are not achievable in a 2D environment. In particular, geometrical building parameters (footprint, volume, etc.) are computed and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out to address asbestos and home renovation charges and to show how the built 3D city model can support municipal managers in diagnosing risks in the present situation and developing strategies for sustainable redevelopment.
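
    As an illustration of the kind of geometric building parameters mentioned above, the sketch below derives footprint area (shoelace formula), volume and envelope area from a footprint polygon under a flat-roof LoD1 extrusion assumption; the inputs and names are illustrative, not part of the described workflow.

      # LoD1-style building parameters from a footprint polygon and an extrusion height.
      def footprint_area(xy):
          """Shoelace area of a closed 2D polygon given as [(x, y), ...]."""
          s = 0.0
          for (x1, y1), (x2, y2) in zip(xy, xy[1:] + xy[:1]):
              s += x1 * y2 - x2 * y1
          return abs(s) / 2.0

      def lod1_parameters(xy, height):
          area = footprint_area(xy)
          perimeter = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                          for (x1, y1), (x2, y2) in zip(xy, xy[1:] + xy[:1]))
          volume = area * height                       # flat-roof extrusion assumption
          envelope = perimeter * height + area         # walls + roof, e.g. for energy estimates
          return {"footprint_m2": area, "volume_m3": volume, "envelope_m2": envelope}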

  7. On the efficiency of the hybrid and the exact second-order sampling formulations of the EnKF: a reality-inspired 3-D test case for estimating biodegradation rates of chlorinated hydrocarbons at the port of Rotterdam

    NASA Astrophysics Data System (ADS)

    Gharamti, Mohamad E.; Valstar, Johan; Janssen, Gijs; Marsman, Annemieke; Hoteit, Ibrahim

    2016-11-01

    This study considers the assimilation problem of subsurface contaminants at the port of Rotterdam in the Netherlands. It involves the estimation of solute concentrations and biodegradation rates of four different chlorinated solvents. We focus on assessing the efficiency of an adaptive hybrid ensemble Kalman filter and optimal interpolation (EnKF-OI) scheme and the exact second-order sampling formulation (EnKFESOS) for mitigating the undersampling of the estimation and observation error covariances, respectively. A multi-dimensional and multi-species reactive transport model is coupled to simulate the migration of contaminants within a Pleistocene aquifer layer located around 25 m below mean sea level. The biodegradation chain of chlorinated hydrocarbons, starting from tetrachloroethene and ending with vinyl chloride, is modeled under anaerobic environmental conditions for 5 decades. Yearly pseudo-concentration data are used to condition the forecast concentration and degradation rates in the presence of model and observational errors. Assimilation results demonstrate the robustness of the hybrid EnKF-OI for accurately calibrating the uncertain biodegradation rates. When implemented serially, the adaptive hybrid EnKF-OI scheme efficiently adjusts the weights of the involved covariances for each individual measurement. The EnKFESOS is shown to maintain the parameter ensemble spread much better, leading to more robust estimates of the states and parameters. On average, a well-tuned hybrid EnKF-OI and the EnKFESOS suggest around 48 and 21% improved concentration estimates, respectively, as well as around 70 and 23% improved anaerobic degradation rates, over the standard EnKF. Incorporating large uncertainties in the flow model degrades the accuracy of the estimates of all schemes. Given that the performance of the hybrid EnKF-OI depends on the quality of the background statistics, satisfactory results were obtained only when the uncertainty imposed on the background
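
    A minimal sketch of the hybrid idea behind an EnKF-OI analysis step is given below: the flow-dependent ensemble covariance is blended with a static background covariance using a weight alpha before a standard perturbed-observation update. It illustrates the concept only; the names, the value of alpha, and the batch (rather than serial) update are assumptions, not the study's implementation.

      # Hybrid EnKF-OI-style analysis step with a perturbed-observation (stochastic) update.
      import numpy as np

      def hybrid_enkf_oi_update(E, B, H, y, R, alpha=0.5, rng=np.random.default_rng(0)):
          """E: (n_state, n_ens) forecast ensemble; B: static covariance; H: linear obs operator."""
          n, m = E.shape
          Pe = np.cov(E)                                   # flow-dependent sample covariance
          P = alpha * B + (1.0 - alpha) * Pe               # hybrid background covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
          for j in range(m):                               # update each member with perturbed observations
              yj = y + rng.multivariate_normal(np.zeros(len(y)), R)
              E[:, j] += K @ (yj - H @ E[:, j])
          return E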

  8. 3D tongue motion from tagged and cine MR images.

    PubMed

    Xing, Fangxu; Woo, Jonghye; Murano, Emi Z; Lee, Junghoon; Stone, Maureen; Prince, Jerry L

    2013-01-01

    Understanding the deformation of the tongue during human speech is important for head and neck surgeons and speech and language scientists. Tagged magnetic resonance (MR) imaging can be used to image 2D motion, and data from multiple image planes can be combined via post-processing to yield estimates of 3D motion. However, lacking boundary information, this approach suffers from inaccurate estimates near the tongue surface. This paper describes a method that combines two sources of information to yield improved estimation of 3D tongue motion. The method uses the harmonic phase (HARP) algorithm to extract motion from tags and diffeomorphic demons to provide surface deformation. It then uses an incompressible deformation estimation algorithm to incorporate both sources of displacement information to form an estimate of the 3D whole-tongue motion. Experimental results show that the use of combined information improves motion estimation near the tongue surface, a region previously reported as problematic in HARP analysis, while preserving accurate internal motion estimates. Results on both normal and abnormal tongue motions are shown.

  9. Computerized 3-D reconstruction of complicated anatomical structure

    NASA Astrophysics Data System (ADS)