Science.gov

Sample records for 3d point spread

  1. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.
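
    A minimal sketch of the two-stage recovery idea described above, assuming a toy symmetric Gaussian PSF in place of a rotating PSF; the matched-filter stand-in for the deconvolution step, the function names and the threshold handling are illustrative assumptions, not the authors' open source program.

```python
# Hypothetical two-stage localization sketch (not the authors' released code):
# a matched-filter correlation stands in for the fast deconvolution that seeds
# initial guesses, and SciPy least squares refines each candidate against a
# toy Gaussian PSF (a rotating-PSF model would be substituted in practice).
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import least_squares

def psf_model(params, yy, xx):
    x0, y0, amp, sigma, bg = params
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2)) + bg

def localize(image, template, threshold):
    # Stage 1: correlation with a PSF template gives candidate emitter pixels.
    score = fftconvolve(image, template[::-1, ::-1], mode="same")
    peaks = np.argwhere(score > threshold)
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    fits = []
    for py, px in peaks:
        # Stage 2: least-squares refinement of position, amplitude and width.
        p0 = np.array([px, py, image[py, px], 1.5, np.median(image)])
        res = least_squares(lambda p: (psf_model(p, yy, xx) - image).ravel(), p0)
        fits.append(res.x)
    return np.array(fits)
```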

  2. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  3. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  4. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  5. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    PubMed

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
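
    The PSF characterisation step, fitting an asymmetric two-dimensional Gaussian to a reconstructed point-source image, can be sketched as follows; the parametrisation and starting values are assumptions for illustration only.

```python
# Illustrative only: fitting an asymmetric (elliptical, rotated) 2D Gaussian to
# a reconstructed point-source image, as a stand-in for the PSF modelling step.
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss2d(coords, amp, x0, y0, sx, sy, theta, bg):
    x, y = coords
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-(xr ** 2 / (2 * sx ** 2) + yr ** 2 / (2 * sy ** 2))) + bg

def fit_psf(img):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    p0 = (img.max(), img.shape[1] / 2, img.shape[0] / 2, 2.0, 2.0, 0.0, float(img.min()))
    popt, _ = curve_fit(asym_gauss2d, (x.ravel(), y.ravel()), img.ravel(), p0=p0)
    return popt  # per-axis FWHMs follow as 2*sqrt(2*ln 2)*sigma
```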

  6. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET

    NASA Astrophysics Data System (ADS)

    Rapisarda, E.; Bettinardi, V.; Thielemans, K.; Gilardi, M. C.

    2010-07-01

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring 22Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the 22Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.

  7. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraum simulations that is optimized so that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.

  8. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry and remote sensing. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principle of 3D laser scanning, taking the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, with 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene is visually faithful and that its accuracy meets the needs of 3D scene construction.

  9. Alignment of continuous video onto 3D point clouds.

    PubMed

    Zhao, Wenyi; Nister, David; Hsu, Steve

    2005-08-01

    We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semiurban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach.
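
    For intuition, a bare-bones rigid point-cloud-to-point-cloud alignment (iterative closest point with a Kabsch update) is sketched below; the paper's registration pipeline, which also handles camera pose estimation and motion stereo, is considerably more involved, and all names here are illustrative.

```python
# Minimal rigid ICP between a video-derived cloud (src) and a sensor cloud (dst);
# the iteration count is arbitrary and no robustness tricks are included.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Returns rotation R and translation t aligning src (N,3) onto dst (M,3)."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # closest sensor point per video point
        corr = dst[idx]
        mu_s, mu_d = cur.mean(0), corr.mean(0)
        H = (cur - mu_s).T @ (corr - mu_d)       # cross-covariance (Kabsch)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t                      # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```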

  10. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users have become familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread and the amount of public information that can be used by GIS clients able to pull data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since such a 3D GIS can be very useful for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to the optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is the development of a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.

  11. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measures that can only capture global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
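
    A rough numerical reading of the DistMC/SimMC definitions above, with uniform weights as a placeholder for the paper's distance-weighted strategy and hypothetical function names:

```python
# Placeholder implementation of the quantities named in the abstract.
import numpy as np
from scipy.spatial import cKDTree

def dist_model_to_cloud(model_samples, cloud, weights=None):
    """DistMC: (weighted) mean distance from points sampled on the model to the cloud."""
    d, _ = cKDTree(cloud).query(model_samples)
    return np.average(d, weights=weights)

def sim_mc(model_surface_area, model_samples, cloud, weights=None):
    """SimMC as the ratio of (weighted) model surface area to DistMC."""
    return model_surface_area / dist_model_to_cloud(model_samples, cloud, weights)
```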

  12. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  13. 3D Building Reconstruction Using Dense Photogrammetric Point Cloud

    NASA Astrophysics Data System (ADS)

    Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.

    2016-06-01

    Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the roof and wall point clouds into planar groups. By generating the related surfaces and applying geometrical constraints together with symmetry considerations, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The reconstructed model reaches LoD3, with eaves, roof sections and dormers modelled.

  14. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096 element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0+/-3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
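
    The matching and rigid-fit stages can be illustrated as below; the ratio-test threshold and helper names are hypothetical and not taken from the paper.

```python
# Hypothetical matching stage: nearest-neighbour search over the 4096-element
# descriptors with a ratio test, then a least-squares rigid fit on the matches.
import numpy as np
from scipy.spatial import cKDTree

def match_features(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j) whose descriptors pass a Lowe-style ratio test."""
    d, idx = cKDTree(desc_b).query(desc_a, k=2)
    keep = d[:, 0] < ratio * d[:, 1]
    return np.column_stack([np.nonzero(keep)[0], idx[keep, 0]])

def rigid_fit(pts_a, pts_b):
    """Rotation/translation mapping matched keypoint coordinates a -> b."""
    mu_a, mu_b = pts_a.mean(0), pts_b.mean(0)
    U, _, Vt = np.linalg.svd((pts_a - mu_a).T @ (pts_b - mu_b))
    R = Vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, mu_b - R @ mu_a
```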

  15. The medial scaffold of 3D unorganized point clouds.

    PubMed

    Leymarie, Frederic F; Kimia, Benjamin B

    2007-02-01

    We introduce the notion of the medial scaffold, a hierarchical organization of the medial axis of a 3D shape in the form of a graph constructed from special medial curves connecting special medial points. A key advantage of the scaffold is that it captures the qualitative aspects of shape in a hierarchical and tightly condensed representation. We propose an efficient and exact method for computing the medial scaffold based on a notion of propagation along the scaffold itself, starting from initial sources of the flow and constructing the scaffold during the propagation. We examine this method specifically in the context of an unorganized cloud of points in 3D, e.g., as obtained from laser range finders, which typically involve hundreds of thousands of points, but the ideas are generalizable to data arising from geometrically described surface patches. The computational bottleneck in the propagation-based scheme is in finding the initial sources of the flow. We thus present several ideas to avoid the unnecessary consideration of pairs of points which cannot possibly form a medial point source, such as the "visibility" of a point from another given a third point and the interaction of clusters of points. An application of using the medial scaffold for the representation of point samplings of real-life objects is also illustrated.

  16. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
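
    One simplified reading of the DSM/DEM step is sketched below: a normalized DSM thresholded by height isolates above-ground cells, which are then grouped into regions; the paper's building/tree separation, boundary regularization and roof construction go well beyond this, and the threshold value is an assumption.

```python
# Toy nDSM thresholding: cells rising above the terrain by more than min_height
# become object candidates, then contiguous cells are grouped into regions.
import numpy as np
from scipy.ndimage import label

def object_regions(dsm, dem, min_height=2.5):
    """Label connected above-ground regions in a DSM given the bare-earth DEM."""
    ndsm = dsm - dem                      # normalized DSM (height above ground)
    mask = ndsm > min_height              # candidate building/tree cells
    regions, count = label(mask)          # connected-component grouping
    return regions, count
```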

  17. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models used to represent underwater features. Specifically, it evaluates the combined effects of manually editing the radiometry of images captured at shallow depths and of selecting parameters for point cloud definition and mesh building in 3D modeling software. Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its suitability for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  18. Laminar cortical dynamics of 3D surface perception: stratification, transparency, and neon color spreading.

    PubMed

    Grossberg, Stephen; Yazdanbakhsh, Arash

    2005-06-01

    The 3D LAMINART neural model is developed to explain how the visual cortex gives rise to 3D percepts of stratification, transparency, and neon color spreading in response to 2D pictures and 3D scenes. Such percepts are sensitive to whether contiguous image regions have the same contrast polarity and ocularity. The model predicts how like-polarity competition at V1 simple cells in layer 4 may cause these percepts when it interacts with other boundary and surface processes in V1, V2, and V4. The model also explains how: the Metelli Rules cause transparent percepts, bistable transparency percepts arise, and attention influences transparency reversal.

  19. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VRMesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.

  20. Comparison of 3D interest point detectors and descriptors for point cloud fusion

    NASA Astrophysics Data System (ADS)

    Hänsch, R.; Weber, T.; Hellwich, O.

    2014-08-01

    The extraction and description of keypoints as salient image parts has a long tradition within processing and analysis of 2D images. Nowadays, 3D data gains more and more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. For this goal, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the specific method to extract and describe keypoints in 3D data has to be carefully chosen. In many cases the accuracy suffers from a too strong reduction of the available points to keypoints.

  1. Particle Acceleration at Reconnecting 3D Null Points

    NASA Astrophysics Data System (ADS)

    Stanier, A.; Browning, P.; Gordovskyy, M.; Dalla, S.

    2012-12-01

    Hard X-ray observations from the RHESSI spacecraft indicate that a significant fraction of solar flare energy release is in non-thermal energetic particles. A plausible acceleration mechanism for these is the strong electric field associated with reconnection, a process that can be particularly efficient when particles become unmagnetised near null points. This mechanism has been well studied in 2D, at X-points within reconnecting current sheets; however, 3D reconnection models show significant qualitative differences and it is not known whether these new models are efficient for particle acceleration. We place test particles in analytic model fields (e.g. Craig and Fabling 1996) and in numerical solutions of the resistive magnetohydrodynamic (MHD) equations near reconnecting 3D nulls. We compare the behaviour of these test particles with previous results for test particle acceleration in ideal MHD models (Dalla and Browning 2005). We find that the fan model is very efficient due to an increasing "guide field" that stabilises particles against ejection from the current sheet. However, the spine model, which was the most promising in the ideal case, gives weak acceleration as the reconnection electric field is localised to a narrow cylinder about the spine axis.
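
    The test-particle approach can be sketched as below, integrating the non-relativistic Lorentz force in prescribed fields; the linear null field, the uniform electric field and all numerical values are toy stand-ins for the analytic and resistive-MHD fields used in the study.

```python
# Toy test-particle integration (proton, SI-like units); fields are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

Q_M = 9.58e7  # proton charge-to-mass ratio [C/kg]

def B_field(x):
    return np.array([x[0], x[1], -2.0 * x[2]])      # potential 3D null, B ~ (x, y, -2z)

def E_field(x):
    return np.array([0.0, 0.0, 1.0e-3])             # assumed weak field along the spine

def lorentz(t, state):
    x, v = state[:3], state[3:]
    a = Q_M * (E_field(x) + np.cross(v, B_field(x)))
    return np.concatenate([v, a])

y0 = np.array([0.1, 0.1, 0.1, 0.0, 1.0e4, 0.0])     # initial position [m] and velocity [m/s]
sol = solve_ivp(lorentz, (0.0, 1.0e-3), y0, max_step=1.0e-7)  # trajectory near the null
```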

  2. PEG-diacrylate/hyaluronic acid semi-interpenetrating network compositions for 3D cell spreading and migration

    PubMed Central

    Lee, Ho-Joon; Sen, Atanu; Bae, Sooneon; Lee, Jeoung Soo; Webb, Ken

    2015-01-01

    To serve as artificial matrices for therapeutic cell transplantation, synthetic hydrogels must incorporate mechanisms enabling localized, cell-mediated degradation that allows cell spreading and migration. Previously, we have shown that hybrid semi-interpenetrating polymer networks (semi-IPNs) composed of hydrolytically degradable PEG-diacrylates (PEGdA), acrylate-PEG-GRGDS, and native hyaluronic acid (HA) support increased cell spreading relative to fully synthetic networks that is dependent on cellular hyaluronidase activity. This study systematically investigated the effects of PEGdA/HA semi-IPN network composition on 3D spreading of encapsulated fibroblasts, the underlying changes in gel structure responsible for this activity, and the ability of optimized gel formulations to support long-term cell survival and migration. Fibroblast spreading exhibited a biphasic response to HA concentration, required a minimum HA molecular weight, decreased with increasing PEGdA concentration, and was independent of hydrolytic degradation at early time points. Increased gel turbidity was observed in semi-IPNs, but not in copolymerized hydrogels containing methacrylated HA that did not support cell spreading; suggesting an underlying mechanism of polymerization-induced phase separation resulting in HA-enriched defects within the network structure. PEGdA/HA semi-IPNs were also able to support cell spreading at relatively high levels of mechanical properties (~10 kPa elastic modulus) compared to alternative hybrid hydrogels. In order to support long-term cellular remodeling, the degradation rate of the PEGdA component was optimized by preparing blends of three different PEGdA macromers with varying susceptibility to hydrolytic degradation. Optimized semi-IPN formulations supported long-term survival of encapsulated fibroblasts and sustained migration in a gel-within-gel encapsulation model. These results demonstrate that PEGdA/HA semi-IPNs provide dynamic microenvironments that

  3. Continental rifting to seafloor spreading: 2D and 3D numerical modeling

    NASA Astrophysics Data System (ADS)

    Liao, Jie; Gerya, Taras

    2014-05-01

    Two topics related to continental extension are studied using numerical modeling methods: (1) lithospheric mantle stratification changes the dynamics of craton extension (2D modeling) and (2) the initial lithospheric rheological structure influences the incipient geometry of seafloor spreading (3D modeling). (Topic 1) Lithospheric mantle stratification is a common feature in cratonic areas, as demonstrated by geophysical and geochemical studies. The influence of lithospheric mantle stratification during craton evolution remains poorly understood. We use a 2D thermo-mechanically coupled numerical model to study the influence of a stratified lithospheric mantle on craton extension. A rheologically weak layer representing a hydrated and/or metasomatized composition is implemented in the lithospheric mantle. Our results show that the weak mantle layer changes the dynamics of lithospheric extension by enhancing the deformation of the overlying mantle and crust and inhibiting deformation of the underlying mantle. Modeling results are compared with the North China and North Atlantic cratons. Our work indicates that although the presence of a weak layer may not be sufficient to initiate craton deformation, it enhances deformation by lowering the required extensional plate boundary force. (Topic 2) The process from continental rifting to seafloor spreading is an important step in the Wilson Cycle. Since rifting to spreading is a continuous process, understanding what seafloor spreading inherits from continental rifting is crucial for studying the incipient geometry (in map view) of the oceanic ridge, and it remains a major challenge. Large extensional strain is required to simulate the rifting and spreading processes, and an oceanic ridge has a 3D geometry in map view in nature, which requires 3D studies. Therefore, we employ a three-dimensional numerical modeling method to study this problem. The initial lithospheric rheological structure and the perturbation geometry are two

  4. Individual versus Collective Fibroblast Spreading and Migration: Regulation by Matrix Composition in 3-D Culture

    PubMed Central

    Miron-Mendoza, Miguel; Lin, Xihui; Ma, Lisha; Ririe, Peter; Petroll, W. Matthew

    2012-01-01

    Extracellular matrix (ECM) supplies both physical and chemical signals to cells and provides a substrate through which fibroblasts migrate during wound repair. To directly assess how ECM composition regulates this process, we used a nested 3D matrix model in which cell-populated collagen buttons were embedded in cell-free collagen or fibrin matrices. Time-lapse microscopy was used to record the dynamic pattern of cell migration into the outer matrices, and 3-D confocal imaging was used to assess cell connectivity and cytoskeletal organization. Corneal fibroblasts stimulated with PDGF migrated more rapidly into collagen as compared to fibrin. In addition, the pattern of fibroblast migration into fibrin and collagen ECMs was strikingly different. Corneal fibroblasts migrating into collagen matrices developed dendritic processes and moved independently, whereas cells migrating into fibrin matrices had a more fusiform morphology and formed an interconnected meshwork. A similar pattern was observed when using dermal fibroblasts, suggesting that this response is not unique to corneal cells. We next cultured corneal fibroblasts within and on top of standard collagen and fibrin matrices to assess the impact of ECM composition on the cell spreading response. Similar differences in cell morphology and connectivity were observed: cells remained separated on collagen but coalesced into clusters on fibrin. Cadherin was localized to junctions between interconnected cells, whereas fibronectin was present both between cells and at the tips of extending cell processes. Cells on fibrin matrices also developed more prominent stress fibers than those on collagen matrices. Importantly, these spreading and migration patterns were consistently observed on both rigid and compliant substrates, thus differences in ECM mechanical stiffness were not the underlying cause. Overall, these results demonstrate for the first time that ECM protein composition alone (collagen vs. fibrin) can

  5. Dynamic Assessment of Fibroblast Mechanical Activity during Rac-induced Cell Spreading in 3-D Culture

    PubMed Central

    Petroll, W. Matthew; Ma, Lisha; Kim, Areum; Ly, Linda; Vishwanath, Mridula

    2009-01-01

    The goal of this study was to determine the morphological and sub-cellular mechanical effects of Rac activation on fibroblasts within 3-D collagen matrices. Corneal fibroblasts were plated at low density inside 100 μm thick fibrillar collagen matrices and cultured for 1 to 2 days in serum-free media. Time-lapse imaging was then performed using Nomarski DIC. After an acclimation period, perfusion was switched to media containing PDGF. In some experiments, Y-27632 or blebbistatin were used to inhibit Rho-kinase (ROCK) or myosin II, respectively. PDGF activated Rac and induced cell spreading, which resulted in an increase in cell length, cell area, and the number of pseudopodial processes. Tractional forces were generated by extending pseudopodia, as indicated by centripetal displacement and realignment of collagen fibrils. Interestingly, the pattern of pseudopodial extension and local collagen fibril realignment was highly dependent upon the initial orientation of fibrils at the leading edge. Following ROCK or myosin II inhibition, significant ECM relaxation was observed, but small displacements of collagen fibrils continued to be detected at the tips of pseudopodia. Taken together, the data suggests that during Rac-induced cell spreading within 3-D matrices, there is a shift in the distribution of forces from the center to the periphery of corneal fibroblasts. ROCK mediates the generation of large myosin II-based tractional forces during cell spreading within 3-D collagen matrices, however residual forces can be generated at the tips of extending pseudopodia that are both ROCK and myosin II-independent. PMID:18452153

  6. Sensitivity of power and RMS delay spread predictions of a 3D indoor ray tracing model.

    PubMed

    Liu, Zhong-Yu; Guo, Li-Xin; Li, Chang-Long; Wang, Qiang; Zhao, Zhen-Wei

    2016-06-13

    This study investigates the sensitivity of a three-dimensional (3D) indoor ray tracing (RT) model for the use of the uniform theory of diffraction and geometrical optics in radio channel characterizations of indoor environments. Under complex indoor environments, RT-based predictions require detailed and accurate databases of indoor object layouts and the electrical characteristics of such environments. The aim of this study is to assist in selecting the appropriate level of accuracy required in indoor databases to achieve good trade-offs between database costs and prediction accuracy. This study focuses on the effects of errors in indoor environments on prediction results. In studying the effects of inaccuracies in geometry information (indoor object layout) on power coverage prediction, two types of artificial erroneous indoor maps are used. Moreover, a systematic analysis is performed by comparing the predictions with erroneous indoor maps and those with the original indoor map. Subsequently, the influence of random errors on RMS delay spread results is investigated. Given the effect of electrical parameters on the accuracy of the predicted results of the 3D RT model, the relative permittivity and conductivity of different fractions of an indoor environment are set with different values. Five types of computer simulations are considered, and for each type, the received power and RMS delay spread under the same circumstances are simulated with the RT model.

  7. Sensitivity of power and RMS delay spread predictions of a 3D indoor ray tracing model.

    PubMed

    Liu, Zhong-Yu; Guo, Li-Xin; Li, Chang-Long; Wang, Qiang; Zhao, Zhen-Wei

    2016-06-13

    This study investigates the sensitivity of a three-dimensional (3D) indoor ray tracing (RT) model for the use of the uniform theory of diffraction and geometrical optics in radio channel characterizations of indoor environments. Under complex indoor environments, RT-based predictions require detailed and accurate databases of indoor object layouts and the electrical characteristics of such environments. The aim of this study is to assist in selecting the appropriate level of accuracy required in indoor databases to achieve good trade-offs between database costs and prediction accuracy. This study focuses on the effects of errors in indoor environments on prediction results. In studying the effects of inaccuracies in geometry information (indoor object layout) on power coverage prediction, two types of artificial erroneous indoor maps are used. Moreover, a systematic analysis is performed by comparing the predictions with erroneous indoor maps and those with the original indoor map. Subsequently, the influence of random errors on RMS delay spread results is investigated. Given the effect of electrical parameters on the accuracy of the predicted results of the 3D RT model, the relative permittivity and conductivity of different fractions of an indoor environment are set with different values. Five types of computer simulations are considered, and for each type, the received power and RMS delay spread under the same circumstances are simulated with the RT model. PMID:27410335

  8. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to the study of point symmetry. The use of 3D printing to…

  9. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated in the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its repeated destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique feature in the regional castle landscape. Visible from the valley, it was named "the Eye of the witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to valorise the remains. A key objective, among the numerous planned works, was to produce a 3D model of the site in its current state, in other words a virtual "as-surveyed" model, exploitable from a cultural and tourist point of view as well as by scientists in archaeological research. The ICube/INSA laboratory team was responsible for producing this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate 2D archive data, stemming from series of former excavations, into this 3D model. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration in the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site. The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  10. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element of the 3×3 matrix records the details of the connection through the common parts of two regions and the intersection line of two planes. Additionally, this model can deal with planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify the topological relations of planar segments of a point cloud automatically.
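
    For intuition only, the standard 2D DE-9IM relation between two planar regions can be computed with shapely's relate(); the paper's dimension-extended model for planar segments embedded in R3 builds on this kind of intersection matrix, which is not reproduced here.

```python
# Standard 2D DE-9IM for two overlapping squares; each of the nine cells is the
# dimension of an interior/boundary/exterior intersection.
from shapely.geometry import Polygon

a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])

matrix = a.relate(b)      # e.g. '212101212' for this overlap configuration
print(matrix, matrix[0])  # first cell: dimension of the interior-interior intersection
```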

  11. Unlocking the scientific potential of complex 3D point cloud dataset : new classification and 3D comparison methods

    NASA Astrophysics Data System (ADS)

    Lague, D.; Brodu, N.; Leroux, J.

    2012-12-01

    Ground based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at an unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, ...). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The most commonly used method in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is poorly suited to many 3D natural environments such as rivers (with horizontal beds and vertical banks), and gridding complex rough surfaces is itself a difficult task. On the other hand, tools for 3D comparison are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects and instrument-related position uncertainties. To unlock this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when millimetre-scale changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
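
    A simplified M3C2-style calculation at a single core point is sketched below, assuming fixed-radius neighbourhoods and a plain normal-distribution confidence interval that ignores registration error; scales and names are illustrative.

```python
# Simplified M3C2-style distance at one core point: (1) normal from local PCA of
# the first cloud, (2) mean change along that normal with a naive 95% interval.
import numpy as np
from scipy.spatial import cKDTree

def m3c2_at(core, cloud1, cloud2, normal_scale=1.0, proj_scale=1.0):
    tree1, tree2 = cKDTree(cloud1), cKDTree(cloud2)
    nb = cloud1[tree1.query_ball_point(core, normal_scale)]
    _, _, Vt = np.linalg.svd(nb - nb.mean(0))
    n = Vt[-1]                                   # smallest-variance direction = normal
    d1 = (cloud1[tree1.query_ball_point(core, proj_scale)] - core) @ n
    d2 = (cloud2[tree2.query_ball_point(core, proj_scale)] - core) @ n
    dist = d2.mean() - d1.mean()                 # signed change along the normal
    ci = 1.96 * np.sqrt(d1.var() / d1.size + d2.var() / d2.size)  # ignores registration error
    return dist, ci
```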

  12. Off- and Along-Axis Slow Spreading Ridge Segment Characters: Insights From 3d Thermal Modeling

    NASA Astrophysics Data System (ADS)

    Gac, S.; Tisseau, C.; Dyment, J.

    2001-12-01

    Many observations along Mid-Atlantic Ridge segments suggest a correlation between surface characters (length, axial morphology) and the thermal state of the segment. Thibaud et al. (1998) classify segments according to their thermal state: "colder" segments shorter than 30 km show weak magmatic activity, and "hotter" segments as long as 90 km show robust magmatic activity. The existence of such a correlation suggests that the thermal structure of a slow spreading ridge segment explains most of the surface observations. Here we test the physical coherence of such an integrated thermal model and evaluate it quantitatively. The different kinds of segment would constitute different phases in a segment's evolution, the segment evolving progressively from a "colder" to a "hotter" and back to a "colder" state. Here we test the consistency of such an evolution scheme. To test these hypotheses we have developed a 3D numerical model for the thermal structure and evolution of a slow spreading ridge segment. The thermal structure is controlled by the geometry and the dimensions of a permanently hot zone, imposed beneath the segment center, where the adiabatic ascent of magmatic material is simulated. To compare the model with the observations, several geophysical quantities that depend on the thermal state are simulated: crustal thickness variations along the axis, gravity anomalies (reflecting density variations) and maximum earthquake depth (corresponding to the depth of the 750 °C isotherm). The thermal structure of a particular segment is constrained by comparing the simulated quantities to the real ones. Considering realistic magnetization parameters, the magnetic anomalies generated from the same thermal structure and evolution reproduce the observed magnetic anomaly amplitude variations along the segment. The thermal structures accounting for observations are determined for each kind of segment (from "colder" to "hotter"). The evolution of the thermal structure from the "colder" to

  13. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    NASA Astrophysics Data System (ADS)

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data using the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision, while the generated intensity map contains texture data with considerable noise. We used the intensity maps for extracting tiepoints and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data at each rotation. In the second step, we estimated the 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps and converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and the depth value of each mosaic pixel was determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
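
    The second step, estimating a 3D similarity transformation from matched 3D tiepoints, can be sketched with a closed-form (Umeyama-style) solution; the tiepoint extraction and ray-tracing steps are not reproduced, and all names are illustrative.

```python
# Closed-form 3D similarity transform from matched tiepoints, so that
# dst ≈ s * R @ src + t for matched (N,3) arrays.
import numpy as np

def similarity_transform(src, dst):
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(xd.T @ xs / len(src))   # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S * [1.0, 1.0, d]).sum() / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```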

  14. Point spread function engineering with multiphoton SPIFI

    NASA Astrophysics Data System (ADS)

    Wernsing, Keith A.; Field, Jeffrey J.; Domingue, Scott R.; Allende-Motz, Alyssa M.; DeLuca, Keith F.; Levi, Dean H.; DeLuca, Jennifer G.; Young, Michael D.; Squier, Jeff A.; Bartels, Randy A.

    2016-03-01

    MultiPhoton SPatIal Frequency modulated Imaging (MP-SPIFI) has recently demonstrated the ability to simultaneously obtain super-resolved images in both coherent and incoherent scattering processes -- namely, second harmonic generation and two-photon fluorescence, respectively [1]. In our previous analysis, we considered image formation produced by the zero and first diffracted orders from the SPIFI modulator. However, the modulator is a binary amplitude mask, and therefore produces multiple diffracted orders. In this work, we extend our analysis to image formation in the presence of higher diffracted orders. We find that tuning the mask duty cycle offers a measure of control over the shape of super-resolved point spread functions in an MP-SPIFI microscope.

  15. A Multiscale Constraints Method Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244

  16. Fast Probabilistic Fusion of 3d Point Clouds via Occupancy Grids for Scene Classification

    NASA Astrophysics Data System (ADS)

    Kuhn, Andreas; Huang, Hai; Drauschke, Martin; Mayer, Helmut

    2016-06-01

    High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.
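
    A toy voxel-based stand-in for the probabilistic fusion idea is sketched below: points are quantised into voxels and a crude per-point belief is derived from the number of supporting observations; the paper's octree structure and outlier probability model are considerably more sophisticated, and the parameter values are assumptions.

```python
# Toy fusion: quantise points into voxels, keep a centroid per well-supported
# voxel, and attach an ad-hoc belief derived from the observation count.
import numpy as np

def fuse(points, voxel=0.05, min_support=3):
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    fused, belief = [], []
    for v in range(counts.size):
        if counts[v] >= min_support:                 # reject weakly supported voxels
            fused.append(points[inv == v].mean(0))   # voxel centroid
            belief.append(1.0 - 1.0 / counts[v])     # crude confidence in [0, 1)
    return np.array(fused), np.array(belief)
```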

  17. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detection system for a see-through 3D viewer. 3D display systems are a useful technology for virtual reality, mixed reality and augmented reality, and we have been researching spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images [1-4]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, when users see the real world through the mobile viewer, they are presented with virtual 3D images floating in the air, and can touch and interact with these floating images much as children play with clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer, which uses infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest method in that it uses a single camera rather than a stereo camera, and the results of our viewer system.

  18. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
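    The curvature step can be pictured with a small least-squares sketch: fit a quadric patch z = ax^2 + bxy + cy^2 + dx + ey + f to a point's neighbourhood expressed in a local tangent-plane frame, and read approximate principal curvatures off the Hessian. The moving least-squares weighting, the curvature tensors and the valley/ridge projection of the paper are not reproduced; the local frame is assumed to be given.

```python
# Fit a local quadric (paraboloid) patch and estimate principal curvatures.
# Assumes the neighbourhood is already expressed in a frame whose z-axis is
# roughly the surface normal, so the Hessian eigenvalues approximate the
# principal curvatures.
import numpy as np

def quadric_curvatures(local_xyz):
    """local_xyz: (N, 3) neighbours in a local tangent-plane frame."""
    x, y, z = local_xyz[:, 0], local_xyz[:, 1], local_xyz[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeff[:3]
    hessian = np.array([[2 * a, b], [b, 2 * c]])
    k1, k2 = np.sort(np.linalg.eigvalsh(hessian))   # approximate principal curvatures
    return k1, k2

if __name__ == "__main__":
    # Synthetic ridge-like patch, curved along x only: expect curvatures ~ (0, 1).
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.5 * xy[:, 0] ** 2
    print(quadric_curvatures(np.column_stack([xy, z])))
```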

  19. Deep Herschel PACS point spread functions

    NASA Astrophysics Data System (ADS)

    Bocchio, M.; Bianchi, S.; Abergel, A.

    2016-06-01

    The knowledge of the point spread function (PSF) of imaging instruments represents a fundamental requirement for astronomical observations. The Herschel PACS PSFs delivered by the instrument control centre are obtained from observations of the Vesta asteroid, which provides a characterisation of the central part and, therefore, excludes fainter features. In many cases, however, information on both the core and wings of the PSFs is needed. With this aim, we combine Vesta and Mars dedicated observations and obtain PACS PSFs with an unprecedented dynamic range (~10^6) at slow and fast scan speeds for the three photometric bands. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. FITS files of our PACS PSFs (Fig. 2) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A117

  20. 3D campus modeling using LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Yoshii, Satoshi; Funatsu, Yukihiro; Takemata, Kazuya

    2012-10-01

    The importance of having a 3D urban city model is recognized in many applications, such as risk and disaster management, city planning and development, and others. As an example of such an urban model, we manually reconstructed a 3D model of the KIT campus in this study by utilizing airborne LiDAR point cloud data. The automatic extraction of building shapes is left for future work.

  1. Human body 3D posture estimation using significant points and two cameras.

    PubMed

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.
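    The segmentation idea, a linear SVM over colour features computed against a background model, can be sketched with off-the-shelf components. The feature construction below (normalised colour differences plus the included angle between pixel and background RGB vectors) follows the abstract only loosely, and the training data are synthetic; none of this is the paper's exact formulation.

```python
# Linear-SVM foreground/background segmentation on simple colour features.
# Feature details, image size and training labels are illustrative assumptions.
import numpy as np
from sklearn.svm import LinearSVC

def pixel_features(frame, background, eps=1e-6):
    """frame, background: (H, W, 3) floats in [0, 1]; returns (H*W, 4) features."""
    f = frame.reshape(-1, 3)
    b = background.reshape(-1, 3)
    diff = (f - b) / (b + eps)                                    # normalised colour differences
    cos_angle = np.sum(f * b, axis=1) / (
        np.linalg.norm(f, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return np.column_stack([diff, cos_angle])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    background = rng.uniform(0.2, 0.8, size=(32, 32, 3))
    frame = background.copy()
    frame[8:24, 8:24] = rng.uniform(0.0, 1.0, size=(16, 16, 3))   # synthetic "person" region
    labels = np.zeros((32, 32), dtype=int)
    labels[8:24, 8:24] = 1
    X = pixel_features(frame, background)
    clf = LinearSVC(C=1.0).fit(X, labels.ravel())
    print("training accuracy:", clf.score(X, labels.ravel()))
```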

  2. Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the distance in pixels that a feature has traveled within the frame into real-world depth values. As a result, these tracked feature points are plotted to form a dense and colorful point cloud. Due to the inevitable small vibrations of the camera and the mismatches within the feature tracking algorithm, the point cloud model contains a significant number of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise not associated with any nearby object. The noise filter combines all the points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques we are able to eliminate the unwanted floating points while retaining points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original position without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
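    The layer-based filter can be illustrated as follows: slice the cloud into depth layers, rasterise each layer into a 2D occupancy grid, apply erosion followed by dilation (morphological opening) so that isolated floating cells disappear, and keep only the points whose cell survives. The layer thickness and grid cell size below are illustrative assumptions, not values from the paper.

```python
# Depth-layer noise suppression via 2D morphological opening per layer.
import numpy as np
from scipy.ndimage import binary_opening

def denoise_by_depth_layers(points, layer=0.5, cell=0.05):
    """points: (N, 3) with columns x, y, depth; returns a boolean keep-mask."""
    keep = np.zeros(len(points), dtype=bool)
    z_bins = np.floor(points[:, 2] / layer).astype(int)
    for zb in np.unique(z_bins):
        idx = np.where(z_bins == zb)[0]
        ij = np.floor(points[idx, :2] / cell).astype(int)
        ij -= ij.min(axis=0)                                    # shift to non-negative indices
        grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
        grid[ij[:, 0], ij[:, 1]] = True
        cleaned = binary_opening(grid, structure=np.ones((3, 3), dtype=bool))
        keep[idx] = cleaned[ij[:, 0], ij[:, 1]]
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    wall = np.column_stack([rng.uniform(0, 2, 4000), rng.uniform(0, 1, 4000),
                            np.full(4000, 3.0)])                # dense planar object
    noise = rng.uniform(0, 4, size=(100, 3))                    # floating points
    mask = denoise_by_depth_layers(np.vstack([wall, noise]))
    print("kept", int(mask.sum()), "of", 4100, "points")
```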

  3. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general and information users in particular are not usually used to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to easily handle, manage and create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of actual documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis is placed on highlighting the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  4. Melting points and chemical bonding properties of 3d transition metal elements

    NASA Astrophysics Data System (ADS)

    Takahara, Wataru

    2014-08-01

    The melting points of 3d transition metal elements show an unusual local minimal peak at manganese across Period 4 in the periodic table. The chemical bonding properties of scandium, titanium, vanadium, chromium, manganese, iron, cobalt, nickel and copper are investigated by the DV-Xα cluster method. The melting points are found to correlate with the bond overlap populations. The chemical bonding nature therefore appears to be the primary factor governing the melting points.

  5. 3-D Printers Spread from Engineering Departments to Designs across Disciplines

    ERIC Educational Resources Information Center

    Chen, Angela

    2012-01-01

    The ability to print a 3-D object may sound like science fiction, but it has been around in some form since the 1980s. Also called rapid prototyping or additive manufacturing, the idea is to take a design from a computer file and forge it into an object, often in flat cross-sections that can be assembled into a larger whole. While the printer on…

  6. Dense 3d Point Cloud Generation from Uav Images from Image Matching and Global Optimazation

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we aim to apply image matching for the generation of local point clouds over a pair or group of images and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared the performance of the two. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the case of image space-based matching, we observed some gaps in the 3D point clouds. In the case of object space-based matching, we observed more blunders than with image space-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
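    The object space-based matching step can be sketched as a height sweep: for a fixed ground position, each candidate height is projected into the two images and scored by the normalised grey-level correlation of small patches, and the best-scoring height is kept. The projection callables `project_left` / `project_right` below are placeholders for whatever orientation model is available; they, the patch size and the toy parallax model in the demo are assumptions, not part of the paper.

```python
# Height sweep with grey-level correlation scoring (object-space matching sketch).
import numpy as np

def ncc(a, b, eps=1e-9):
    """Normalised grey-level correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def patch(img, row, col, half=5):
    return img[row - half:row + half + 1, col - half:col + half + 1]

def best_height(x, y, heights, img_l, img_r, project_left, project_right):
    """Return the candidate height with the highest patch correlation."""
    scores = []
    for h in heights:
        rl, cl = project_left(x, y, h)
        rr, cr = project_right(x, y, h)
        pl, pr = patch(img_l, rl, cl), patch(img_r, rr, cr)
        scores.append(ncc(pl, pr) if pl.shape == pr.shape and pl.size else -1.0)
    return heights[int(np.argmax(scores))]

if __name__ == "__main__":
    # Toy scene: a flat surface at height 5 seen with a disparity of 2*h pixels.
    rng = np.random.default_rng(4)
    img_l = rng.random((100, 100))
    img_r = np.zeros_like(img_l)
    img_r[:, :90] = img_l[:, 10:]
    proj_l = lambda x, y, h: (int(y), int(x))
    proj_r = lambda x, y, h: (int(y), int(x - 2 * h))
    print(best_height(50, 50, list(range(11)), img_l, img_r, proj_l, proj_r))  # -> 5
```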

  7. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-cost, highly efficient image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct a 3D geometric scene has a promising market for future commercial uses such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation and scale differences between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is examined by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a
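    The kind of transform being solved for, a single scale factor plus a rotation and a translation between two point clouds, has a standard closed-form estimate once keypoint correspondences are available. The sketch below uses the classic Umeyama-style SVD solution on given correspondences; it is not the thesis's registration pipeline (initial alignment plus refinement), only an illustration of the transform parameters mentioned in the abstract.

```python
# Closed-form similarity transform (scale, rotation, translation) from matched points.
import numpy as np

def similarity_transform(src, dst):
    """src, dst: (N, 3) matched points; returns s, R, t with dst ~= s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    src = rng.normal(size=(100, 3))
    angle = 0.3
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = 2.5 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
    s, R, t = similarity_transform(src, dst)
    print(round(s, 3), np.round(t, 3))            # -> 2.5 [ 1. -2.  0.5]
```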

  8. Graph-Based Compression of Dynamic 3D Point Cloud Sequences.

    PubMed

    Thanou, Dorina; Chou, Philip A; Frossard, Pascal

    2016-04-01

    This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way.
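    The interpolation of a dense motion field from sparse per-vertex estimates can be pictured as a graph-Laplacian regularisation: fit the known motion vectors while keeping the field smooth along the edges of a k-nearest-neighbour graph. The sketch below builds such a graph and solves the resulting sparse linear system; the graph construction, weights and regularisation strength are illustrative assumptions, and the paper's spectral graph wavelet descriptors are not shown.

```python
# Dense motion interpolation on a kNN graph via Laplacian regularisation.
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import spsolve
from scipy.spatial import cKDTree

def interpolate_motion(points, known_idx, known_motion, k=8, lam=10.0):
    """points: (N, 3); known_idx: (M,) indices; known_motion: (M, 3) vectors."""
    n = len(points)
    dist, nbrs = cKDTree(points).query(points, k=k + 1)        # self + k neighbours
    rows = np.repeat(np.arange(n), k)
    cols = nbrs[:, 1:].ravel()
    w = np.exp(-(dist[:, 1:].ravel() / (dist[:, 1:].mean() + 1e-12)) ** 2)
    W = coo_matrix((w, (rows, cols)), shape=(n, n))
    W = (W + W.T) * 0.5                                         # symmetric adjacency
    L = laplacian(W)
    mask = np.zeros(n)
    mask[known_idx] = 1.0
    A = (diags(mask) + lam * L).tocsr()                         # data term + smoothness term
    b = np.zeros((n, 3))
    b[known_idx] = known_motion
    return np.column_stack([spsolve(A, b[:, i]) for i in range(3)])

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    pts = rng.random((500, 3))
    known = rng.choice(500, size=50, replace=False)
    dense = interpolate_motion(pts, known, np.tile([0.1, 0.0, 0.0], (50, 1)))
    print(dense.mean(axis=0))                                    # ~ [0.1, 0, 0]
```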

  9. Dense point-cloud creation using superresolution for a monocular 3D reconstruction system

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-05-01

    We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm presented focuses on the 3D reconstruction of a scene using only a single moving camera. In this way, the system can be used to construct a point cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution. As feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of the preprocessing steps of nonlinear super resolution, the accuracy of the point cloud, which relies on precise disparity measurement, has significantly increased. Using a pixel-by-pixel approach, the super resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high resolution input frames. Thus, a feature point travels across a more precisely resolved set of discrete disparities. Also, the quantity of points within the 3D point cloud model is significantly increased, since the number of features is directly proportional to the resolution and high frequencies of the input image. The contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.

  10. Comparison Between Two Generic 3d Building Reconstruction Approaches - Point Cloud Based VS. Image Processing Based

    NASA Astrophysics Data System (ADS)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

    This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in 2.5D surface models, so that the hull of 3D structures can be recovered. Analyzing orders of magnitude more 3D points, the point cloud based approach is an order of magnitude more accurate for the synthetic dataset compared to the lower-dimensional, but therefore orders of magnitude faster, image processing based approach. For real-world data, the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent and more differentiated semantic annotations through exploitation of texture information.

  11. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstructing them from point clouds. To efficiently retrieve models, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of the efficient scene scanning and spatial information collection. Using point clouds, with their sparse, noisy, and incomplete sampling, as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors can be extracted by spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
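    The shared encoding step, turning either an input LiDAR query or a database model into a top-view depth image of the roof, can be sketched by rasterising the points into a regular grid and keeping the highest z value per ground cell. The grid size is an illustrative assumption, and the paper's edge/plane features and spatial histograms built on top of this image are not reproduced.

```python
# Top-view depth image: per ground cell, keep the highest point (the roof surface).
import numpy as np

def top_view_depth(points, grid=64):
    """points: (N, 3); returns a (grid, grid) image holding the highest z per cell."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    ij = ((xy - mins) / (maxs - mins + 1e-9) * (grid - 1)).astype(int)
    img = np.full((grid, grid), -np.inf)
    np.maximum.at(img, (ij[:, 0], ij[:, 1]), points[:, 2])       # highest return per cell
    img[np.isinf(img)] = np.nan                                   # mark empty cells
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    ground = np.column_stack([rng.uniform(0, 20, 2000), rng.uniform(0, 20, 2000),
                              rng.normal(0.0, 0.05, 2000)])
    roof = np.column_stack([rng.uniform(5, 15, 1000), rng.uniform(5, 15, 1000),
                            rng.normal(6.0, 0.05, 1000)])
    depth = top_view_depth(np.vstack([ground, roof]))
    print("highest cell value:", round(float(np.nanmax(depth)), 2))   # ~ 6 (roof level)
```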

  12. Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, fault, joint…) are key features for unravelling the tectonic history of a rock outcrop or assessing the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. A means of efficiently segmenting massive 3D point clouds into individual planar facets within a convenient software environment was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/ ) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively according to a planarity threshold into polygons. The boundaries of the polygons are adjusted around segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third party GIS software or simply as ASCII comma separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
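    The attitude reported per facet (dip and dip direction) follows directly from a fitted plane normal; the short sketch below shows that conversion. It assumes z is up and north is the +y axis, and it does not reproduce the Kd-Tree or Fast Marching segmentation itself.

```python
# Dip and dip direction (azimuth of steepest descent) from a plane normal.
import numpy as np

def dip_and_dip_direction(normal):
    """normal: (3,) plane normal in an x=east, y=north, z=up frame; degrees returned."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                                   # make the normal point upwards
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # The horizontal component of the upward normal points down-dip.
    dip_dir = (np.degrees(np.arctan2(n[0], n[1])) + 360.0) % 360.0
    return dip, dip_dir

if __name__ == "__main__":
    # A plane dipping 30 degrees towards the east should report (30, 090).
    d = np.radians(30.0)
    print(dip_and_dip_direction([np.sin(d), 0.0, np.cos(d)]))
```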

  13. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.

  14. Pre-Processing of Point-Data from Contact and Optical 3D Digitization Sensors

    PubMed Central

    Budak, Igor; Vukelić, Djordje; Bračun, Drago; Hodolič, Janko; Soković, Mirko

    2012-01-01

    Contemporary 3D digitization systems employed by reverse engineering (RE) feature ever-growing scanning speeds with the ability to generate a large quantity of points in a unit of time. Although advantageous for the quality and efficiency of RE modelling, the huge number of data points can turn into a serious practical problem later on, when the CAD model is generated. In addition, 3D digitization processes are very often plagued by measuring errors, which can be attributed to the very nature of the measuring systems, various characteristics of the digitized objects and subjective errors by the operator, all of which contribute to problems in the CAD model generation process. This paper presents an integral system for the pre-processing of point data, i.e., filtering, smoothing and reduction, based on a cross-sectional RE approach. In the course of the proposed system development, major emphasis was placed on the module for point data reduction, which was designed according to a novel approach with integrated deviation analysis and fuzzy logic reasoning. The developed system was verified through its application to three case studies, on point data from objects of diverse geometries obtained by contact and laser 3D digitization systems. The obtained results demonstrate the effectiveness of the system. PMID:22368513

  15. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serves as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and then the point clouds are projected onto each image by a weighted window based z-buffering method for view dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  16. SDTP: a robust method for interest point detection on 3D range images

    NASA Astrophysics Data System (ADS)

    Wang, Shandong; Gong, Lujin; Zhang, Hui; Zhang, Yongjie; Ren, Haibing; Rhee, Seon-Min; Lee, Hyong-Euk

    2013-12-01

    In the fields of intelligent robotics and computer vision, the capability to select a few points representing salient structures has long been a focus of investigation. In this paper, we present a novel interest point detector for 3D range images, which can be used with good results in applications of surface registration and object recognition. A local shape description around each point in the range image is first constructed based on the distribution map of the signed distances to the tangent plane in its local support region. Using this shape description, the interest value is computed to indicate the probability of a point being an interest point. Lastly, a non-maxima suppression procedure is performed to select stable interest points at positions that have large surface variation in the vicinity. Our method is robust to noise, occlusion and clutter, which can be seen from the higher repeatability values compared with state-of-the-art 3D interest point detectors in experiments. In addition, the method can be implemented easily and requires low computation time.

  17. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 up to 80-cm-long shells of Crassostrea gryphoides cover a 400 m2 large area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g.: tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides accurate geometrical basis < 3 mm. However, the situation is difficult in this multiple object scenario where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  18. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
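    Two of the steps above, voxel quantisation with duplicate removal and the lowermost heightmap, can be sketched in a few lines of array code. The voxel size is an illustrative assumption, and the subsequent voxel-group analysis that actually labels the ground is not reproduced.

```python
# Voxelise a sparse cloud and build a "lowermost heightmap" (lowest voxel per cell).
import numpy as np

def lowermost_heightmap(points, voxel=0.2):
    """points: (N, 3); returns (cell_xy, lowest_z) for every occupied ground cell."""
    vox = np.unique(np.floor(points / voxel).astype(np.int64), axis=0)   # occupied voxels
    order = np.lexsort((vox[:, 2], vox[:, 1], vox[:, 0]))                # sort by x, y, then z
    vox = vox[order]
    _, first = np.unique(vox[:, :2], axis=0, return_index=True)          # lowest z per (x, y)
    return vox[first, :2] * voxel, vox[first, 2] * voxel

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    ground = np.column_stack([rng.uniform(0, 10, 5000), rng.uniform(0, 10, 5000),
                              rng.normal(0.0, 0.03, 5000)])
    pole = np.column_stack([np.full(200, 5.0), np.full(200, 5.0),
                            np.linspace(0.0, 3.0, 200)])                 # vertical obstacle
    cells, lowest = lowermost_heightmap(np.vstack([ground, pole]))
    print("occupied cells:", len(cells), " max of lowermost map:", round(float(lowest.max()), 2))
```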

  19. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204

  20. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204

  1. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data because it is independent of the scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a
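    The rasterisation and morphology stages can be illustrated as follows: project the cloud onto the XY plane into a height image, then use a morphological gradient (dilation minus erosion) to flag cells where the height jumps by a curb-like amount. The cell size and the 0.05-0.3 m step range below are illustrative assumptions, and the paper's curvature/roughness classification of the curb edges is not shown.

```python
# Height-image rasterisation plus a morphological gradient to flag curb-like steps.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def curb_candidate_mask(points, cell=0.1, step_range=(0.05, 0.3)):
    """points: (N, 3); returns (height image, boolean mask of curb-like cells)."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    height = np.full(ij.max(axis=0) + 1, -np.inf)
    np.maximum.at(height, (ij[:, 0], ij[:, 1]), points[:, 2])
    height[np.isinf(height)] = np.nan
    filled = np.nan_to_num(height, nan=np.nanmin(height))
    grad = grey_dilation(filled, size=(3, 3)) - grey_erosion(filled, size=(3, 3))
    mask = (grad > step_range[0]) & (grad < step_range[1])
    return height, mask

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    x = rng.uniform(0, 10, 20000)
    y = rng.uniform(0, 10, 20000)
    z = np.where(y > 5.0, 0.15, 0.0) + rng.normal(0, 0.005, 20000)   # 15 cm curb at y = 5
    _, mask = curb_candidate_mask(np.column_stack([x, y, z]), cell=0.2)
    print("curb-like cells:", int(mask.sum()))
```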

  2. Phase-Scrambler Plate Spreads Point Image

    NASA Technical Reports Server (NTRS)

    Edwards, Oliver J.; Arild, Tor

    1992-01-01

    Array of small prisms retrofit to imaging lens. Phase-scrambler plate essentially planar array of small prisms partitioning aperture of lens into many subapertures, and prism at each subaperture designed to divert relatively large diffraction spot formed by that subaperture to different, specific point on focal plane.

  3. Reconstructing 3D coastal cliffs from airborne oblique photographs without ground control points

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.

    2014-05-01

    Coastal cliff collapse hazard assessment requires measuring cliff face topography at regular intervals. Terrestrial laser scanner techniques have proven useful so far but are expensive to use, either through purchasing the equipment or through survey subcontracting. In addition, terrestrial laser surveys take time, which is sometimes incompatible with the period during which the beach is accessible at low tide. By comparison, structure from motion (SFM) techniques are much less costly to implement, and if airborne, acquisition of several kilometers of coastline can be done in a matter of minutes. In this paper, the potential of GPS-tagged oblique airborne photographs and SFM techniques is examined for reconstructing dense 3D point clouds of chalk cliffs without Ground Control Points (GCPs). The focus is put on comparing the relative 3D viewpoints reconstructed by Visual SFM with their synchronous Solmeta Geotagger Pro2 GPS locations using robust estimators. With a set of 568 oblique photos, shot from the open door of an airplane with a triplet of synchronized Nikon D7000 cameras, GPS- and SFM-determined viewpoint coordinates converge to X: ±31.5 m; Y: ±39.7 m; Z: ±13.0 m (LE66). Uncertainty in GPS position affects the model scale, the angular attitude of the reference frame (the shoreline ends up tilted by 2°) and the absolute positioning. Ground Control Points therefore cannot be avoided when orienting such models.

  4. Biview learning for human posture segmentation from 3D points cloud.

    PubMed

    Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng

    2014-01-01

    Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training dataset. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views which are depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human points cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation.
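    The two-stage flavour of the pipeline can be mimicked with off-the-shelf components: reduce the high-dimensional depth-difference view (here plain PCA stands in for the paper's discriminative locality alignment), correlate it with the relative-position view via CCA, and train an SVM on the fused scores. The data, dimensions and the PCA substitution below are all illustrative assumptions.

```python
# Two-view fusion sketch: PCA (stand-in for DLA) -> CCA -> SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(10)
n, n_parts = 600, 4
labels = rng.integers(0, n_parts, size=n)
ddf = rng.normal(size=(n, 200)) + labels[:, None] * 0.5     # synthetic "depth-difference" view
rpf = rng.normal(size=(n, 30)) + labels[:, None] * 0.5      # synthetic "relative-position" view

ddf_low = PCA(n_components=20).fit_transform(ddf)           # stage 1: reduce the DDF view
cca = CCA(n_components=10).fit(ddf_low, rpf)                # stage 2: correlate the two views
u, v = cca.transform(ddf_low, rpf)
fused = np.hstack([u, v])

clf = SVC(kernel="rbf").fit(fused[:500], labels[:500])
print("held-out accuracy:", clf.score(fused[500:], labels[500:]))
```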

  5. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time and in some cases works better with planar geometries than X-ray computed tomography. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with this uncertainty is the possible masking (zero detectability) of smaller defects around a slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  6. Methods for obtaining 3D training images for multiple-point statistics simulations: a comparative study

    NASA Astrophysics Data System (ADS)

    Jha, S. K.; Comunian, A.; Mariethoz, G.; Kelly, B. F.

    2013-12-01

    In recent years, multiple-point statistics (MPS) has been used in several studies for characterizing facies heterogeneity in geological formations. MPS uses a conceptual representation of the expected facies distribution, called a Training image (TI), to generate patterns of facies heterogeneity. In two-dimensional (2D) simulations the TI can be a hand-drawn image, an analogue outcrop image, or derived from geological reconstructions using a combination of geological analogues and geophysical data. However, obtaining suitable TI in three-dimensions (3D) from geological analogues or geophysical data is harder and has limited the use of MPS for simulating facies heterogeneity in 3D. There have been attempts to generate 3D training images using object-based simulation (OBS). However, determining suitable values for the large number of parameters required by OBS is often challenging. In this study, we compare two approaches for generating three-dimensional training images to model a valley filling sequence deposited by meandering rivers. The first approach is based on deriving statistical information from two-dimensional TIs. The 3D domain is simulated with a sequence of 2D MPS simulation steps, performed along different directions on slices of the 3D domain. At each 2D simulation step, the facies simulated at the previous steps that lie on the current 2D slice are used as conditioning data. The second approach uses hand-drawn two-dimensional TIs and produces complex patterns resembling the geological structures by applying rotation and affinity transformations in the facies simulation. The two techniques are compared using transition probabilities, facies proportions, and connectivity metrics. In the presentation we discuss the benefits of each approach for generating three-dimensional facies models.

  7. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process, providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  8. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
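    The multi-scale neighbourhood features can be illustrated with covariance eigenvalues: for each point, take its k nearest neighbours at several scales, compute the eigenvalues of the neighbourhood covariance, and derive simple shape descriptors (linearity, planarity, scattering). The scales below are illustrative assumptions, and the paper's full feature set and classifier are not reproduced.

```python
# Multi-scale covariance-eigenvalue features for 3D points.
import numpy as np
from scipy.spatial import cKDTree

def multiscale_features(points, scales=(10, 30, 90)):
    """points: (N, 3); returns an (N, 3 * len(scales)) feature matrix."""
    tree = cKDTree(points)
    feats = []
    for k in scales:
        _, idx = tree.query(points, k=k)
        nbrs = points[idx]                                       # (N, k, 3)
        centred = nbrs - nbrs.mean(axis=1, keepdims=True)
        cov = np.einsum("nki,nkj->nij", centred, centred) / k
        ev = np.sort(np.linalg.eigvalsh(cov), axis=1)[:, ::-1]   # l1 >= l2 >= l3
        l1 = np.maximum(ev[:, 0], 1e-12)
        l2, l3 = ev[:, 1], ev[:, 2]
        feats += [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]       # linearity, planarity, scattering
    return np.column_stack(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    plane = np.column_stack([rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000),
                             np.zeros(1000)])
    print(multiscale_features(plane)[0, :3])                     # high planarity, zero scattering
```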

  9. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  10. Modeling the role of back-arc spreading in controlling 3-D circulation and temperature patterns in subduction zones

    NASA Astrophysics Data System (ADS)

    Kincaid, C.

    2005-12-01

    are nearly uniform across the plate. Results have implications for geochemical and seismic models of 3-D flow in subduction zones influenced by back-arc spreading, such as the Marianas.

  11. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information contained in them. However, the huge volumes of data generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the success of points-based rendering (PBR) in computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive for surface reconstruction and rendering, in order to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.

  12. An adaptive learning approach for 3-D surface reconstruction from point clouds.

    PubMed

    Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia

    2008-06-01

    In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used for iteratively moving the vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multiresolution, iterative scheme. Reconstruction was tested on several point sets of different shapes and sizes. The results show that the generated meshes are very close to the final object shapes. We include measures of performance and discuss robustness.

  13. Non-linear tearing of 3D null point current sheets

    SciTech Connect

    Wyper, P. F. Pontin, D. I.

    2014-08-15

    The manner in which the rate of magnetic reconnection scales with the Lundquist number in realistic three-dimensional (3D) geometries is still an unsolved problem. It has been demonstrated that in 2D rapid non-linear tearing allows the reconnection rate to become almost independent of the Lundquist number (the “plasmoid instability”). Here, we present the first study of an analogous instability in a fully 3D geometry, defined by a magnetic null point. The 3D null current layer is found to be susceptible to an analogous instability but is marginally more stable than an equivalent 2D Sweet-Parker-like layer. Tearing of the sheet creates a thin boundary layer around the separatrix surface, contained within a flux envelope with a hyperbolic structure that mimics a spine-fan topology. Efficient mixing of flux between the two topological domains occurs as the flux rope structures created during the tearing process evolve within this envelope. This leads to a substantial increase in the rate of reconnection between the two domains.

  14. Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input.

    PubMed

    Yu, Lingyun; Efstathiou, K; Isenberg, P; Isenberg, T

    2012-12-01

    Data selection is a fundamental task in visualization because it serves as a pre-requisite to many follow-up interactions. Efficient spatial selection in 3D point cloud datasets consisting of thousands or millions of particles can be particularly challenging. We present two new techniques, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner. Specifically, we describe how to spatially select a subset of a 3D particle cloud by simply encircling the target particles on screen using either the mouse or direct-touch input. Based on the drawn lasso, our techniques automatically determine a bounding selection surface around the encircled particles based on their density. This kind of selection technique can be applied to particle datasets in several application domains. TeddySelection and CloudLasso reduce, and in some cases even eliminate, the need for complex multi-step selection processes involving Boolean operations. This was confirmed in a formal, controlled user study in which we compared the more flexible CloudLasso technique to the standard cylinder-based selection technique. This study showed that the former is consistently more efficient than the latter - in several cases the CloudLasso selection time was half that of the corresponding cylinder-based selection.

  15. PointCloudExplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory , Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division,University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hand; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has been shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, and (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.

  16. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and

  17. PointCloudXplore: a visualization tool for 3D gene expression data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V.E.; Fowlkes, Charles C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2006-10-01

    The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe many genes' expression. Each of the views in PointCloudXplore shows a different gene expression data property. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show in additional views the expression data for a group of cells that have first been highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in the gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.
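    At its core, the brush-combination mechanism described in these PCX records reduces to boolean set operations over per-cell selection masks. The sketch below illustrates the idea with made-up expression values; the gene labels, threshold values, and cell count are assumptions for demonstration and are not taken from the BDTNP data.

```python
import numpy as np

# Hypothetical brushes: boolean masks over the same set of embryo cells,
# e.g. produced by thresholding two genes' expression in different views.
rng = np.random.default_rng(0)
n_cells = 6000                       # assumed cell count, for illustration only
expr_gene_a = rng.random(n_cells)    # stand-in expression values for gene A
expr_gene_b = rng.random(n_cells)    # stand-in expression values for gene B

brush_a = expr_gene_a > 0.7          # cells selected in a physical view
brush_b = expr_gene_b > 0.5          # cells selected in a parallel-coordinates view

# Logical combination of selections, mirroring PCX's AND/OR/NOT queries:
cells_and = brush_a & brush_b        # cells present in both brushes
cells_or = brush_a | brush_b         # cells present in either brush
cells_not = brush_a & ~brush_b       # cells in brush A but NOT in brush B

print(cells_and.sum(), "cells selected by A AND B")
```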

  18. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  19. Unconventional superconductivity at mesoscopic point contacts on the 3D Dirac semimetal Cd3As2.

    PubMed

    Aggarwal, Leena; Gaurav, Abhishek; Thakur, Gohil S; Haque, Zeba; Ganguli, Ashok K; Sheet, Goutam

    2016-01-01

    Three-dimensional (3D) Dirac semimetals exist close to topological phase boundaries which, in principle, should make it possible to drive them into exotic new phases, such as topological superconductivity, by breaking certain symmetries. A practical realization of this idea has, however, hitherto been lacking. Here we show that the mesoscopic point contacts between pure silver (Ag) and the 3D Dirac semimetal Cd3As2 exhibit unconventional superconductivity with a critical temperature (onset) greater than 6 K, whereas neither Cd3As2 nor Ag is a superconductor. A gap amplitude of 6.5 meV is measured spectroscopically in this phase; the gap varies weakly with temperature and survives up to a remarkably high temperature of 13 K, indicating the presence of a robust normal-state pseudogap. The observations indicate the emergence of a new unconventional superconducting phase that exists in a quantum mechanically confined region under a point contact between a Dirac semimetal and a normal metal.

  20. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.

  1. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation have become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind people or wheelchair users, building crisis management such as fire protection, and augmented reality for gaming, tourism or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings for both windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles in the route planning and using these to readapt the routes according to the real state of the indoor space depicted by the laser scanner.

  2. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  3. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is designed for the analysis of data in the form of a cloud of points obtained directly from 3D measurements. It is intended for end-user applications that can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor-quality input data. Using FV subsets allows partially occluded and cluttered objects in the scene to be detected, while additional spatial information keeps the false positive rate at a reasonably low level.

  4. Existence of two MHD reconnection modes in a solar 3D magnetic null point topology

    NASA Astrophysics Data System (ADS)

    Pariat, Etienne; Antiochos, Spiro; DeVore, C. Richard; Dalmasse, Kévin

    2012-07-01

    Magnetic topologies with a 3D magnetic null point are common in the solar atmosphere and occur at different spatial scales: such structures can be associated with some solar eruptions, with the so-called pseudo-streamers, and with numerous coronal jets. We have recently developed a series of numerical experiments that model magnetic reconnection in such configurations in order to study and explain the properties of jet-like features. Our model uses our state-of-the-art adaptive-mesh MHD solver ARMS. Energy is injected into the system by line-tied motion of the magnetic field lines in a corona-like configuration. We observe that, in the MHD framework, two reconnection modes eventually appear in the course of the evolution of the system. A very impulsive one, involving a highly dynamic and fully 3D current sheet, is associated with the energetic generation of a jet. Before and after the generation of the jet, a quasi-steady reconnection mode, more similar to the standard 2D Sweet-Parker model, presents a lower global reconnection rate. We show that the geometry of the magnetic configuration influences which of the two modes is triggered. We argue that this result carries important implications for the observed link between observational features such as solar jets, solar plumes, and the emission of coronal bright points.

  5. 3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Vaid, Thomas P.

    2014-01-01

    Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…

  6. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for automatically extracting discontinuity orientation from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated using the point cloud of a small section of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with orientations measured in the field. The method is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those obtained by the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet engineering needs.
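    Step (3) of this pipeline fits a plane to each segmented discontinuity with RANSAC. The sketch below shows a bare-bones RANSAC plane fit followed by a least-squares (PCA) refinement; it is not the authors' implementation, and the iteration count and inlier tolerance are assumed values.

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.02, seed=None):
    """Fit a plane n.x + d = 0 to a 3D point set with RANSAC.
    Returns the unit normal n, offset d, and the boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        # Sample three distinct points and build a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((points - p0) @ n)      # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the plane on the inliers via PCA (least squares).
    inl = points[best_inliers]
    centroid = inl.mean(axis=0)
    _, _, vt = np.linalg.svd(inl - centroid)
    n = vt[-1]
    return n, -n @ centroid, best_inliers

# The dip direction and dip angle of the discontinuity can then be derived
# from the refined normal n.
```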

  7. A multi-resolution fractal additive scheme for blind watermarking of 3D point data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Wilder, Kathy; Fox, Kevin

    2013-05-01

    We present a fractal feature space for 3D point watermarking to make geospatial systems more secure. By exploiting the self-similar nature of fractals, hidden information can be spatially embedded in point cloud data in an acceptable manner, as described in this paper. Our method utilizes a blind scheme, which provides automatic retrieval of the watermark payload without the need for the original cover data. Our method for locating similar patterns and encoding information in LiDAR point cloud data is accomplished through a look-up table or code book. The watermark is then merged into the point cloud data itself, resulting in low distortion. With current advancements in computing technologies, such as GPGPUs, fractal processing is now applicable to the big data present in geospatial as well as other systems. The watermarking technique described in this paper can be important for systems where point data is handled by numerous aerial collectors and analysts, such as a National LiDAR Data Layer.

  8. Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques

    PubMed Central

    Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li, Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva

    2011-01-01

    Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision X-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent-depth-dose (PDD) and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion chambers (a Farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ∼13 mm using the formalism of TG61. PDD, profiles, and output factors at three separate depths (0, 0.5, and 2 cm) were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system, which generated isotropic 0.2 mm data in scan times of 20 min. Results: Surface output factors determined by ion chamber were observed to gradually drop by ∼9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT: ∼18% and ∼42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available) with mean deviation of 2.2% (range 1%–4%). PDD values at 2 cm depth varied from ∼72% for the 40 mm field, down to ∼55% for the 1 mm field. EBT and PRESAGE® PDDs agreed within ∼3% in the typical therapy region (1–4 cm). At deeper depths the EBT curves were slightly steeper (2.5% at 5 cm). These results indicate good overall consistency between ion-chamber, EBT

  9. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes giving little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  10. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    PubMed Central

    Pereira, N F; Sitek, A

    2011-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
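    The maximum likelihood expectation maximization (MLEM) update used in both the mesh-based and voxel-based reconstructions has a compact multiplicative form. The following is a generic dense-matrix sketch for Poisson data y ≈ A x, not the evaluated implementation; the system matrix here is random and purely illustrative.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """MLEM for y ~ Poisson(A x).
    A: (n_bins, n_basis) system matrix, y: measured counts.
    Returns the non-negative coefficient vector x (voxel or mesh basis weights)."""
    x = np.ones(A.shape[1])                 # flat, strictly positive initial estimate
    sensitivity = A.sum(axis=0) + eps       # back-projection of ones, A^T 1
    for _ in range(n_iters):
        forward = A @ x + eps               # forward projection of current estimate
        x = x * (A.T @ (y / forward)) / sensitivity   # multiplicative update
    return x

# Tiny illustration with a random (non-physical) system matrix:
rng = np.random.default_rng(0)
A = rng.random((200, 50))
x_true = rng.random(50)
y = rng.poisson(A @ x_true)
x_hat = mlem(A, y)
```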

  11. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms.

  12. Point-, line-, and plane-shaped cellular constructs for 3D tissue assembly.

    PubMed

    Morimoto, Yuya; Hsiao, Amy Y; Takeuchi, Shoji

    2015-12-01

    Microsized cellular constructs such as cellular aggregates and cell-laden hydrogel blocks are attractive cellular building blocks to reconstruct 3D macroscopic tissues with spatially ordered cells in bottom-up tissue engineering. In this regard, microfluidic techniques are remarkable methods to form microsized cellular constructs with high production rate and control of their shapes such as point, line, and plane. The fundamental shapes of the cellular constructs allow for the fabrication of larger arbitrary-shaped tissues by assembling them. This review introduces microfluidic formation methods of microsized cellular constructs and manipulation techniques to assemble them with control of their arrangements. Additionally, we show applications of the cellular constructs to biological studies and clinical treatments and discuss future trends as their potential applications.

  13. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Although a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftops. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique, and is used to guide the refinement of shapes and boundaries of the rooftop parts. Boundaries of all these features are refined to produce a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected
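    The Euclidean clustering used to separate footprints groups points whose mutual distance falls below a tolerance. A minimal single-level sketch (a KD-tree plus breadth-first growth) is given below; it stands in for, rather than reproduces, the two-step hierarchical strategy described above, and the tolerance and minimum cluster size are assumed values.

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def euclidean_clusters(points, tol=1.0, min_size=10):
    """Group 3D points into clusters whose members lie within `tol` of a neighbour.
    Returns a list of index arrays, one per cluster; small clusters are dropped."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        queue, members = deque([seed]), []
        unvisited[seed] = False
        while queue:                         # breadth-first region growth
            i = queue.popleft()
            members.append(i)
            for j in tree.query_ball_point(points[i], tol):
                if unvisited[j]:
                    unvisited[j] = False
                    queue.append(j)
        if len(members) >= min_size:
            clusters.append(np.array(members))
    return clusters
```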

  14. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  15. Is the 3-D magnetic null point with a convective electric field an efficient particle accelerator?

    NASA Astrophysics Data System (ADS)

    Guo, J.-N.; Büchner, J.; Otto, A.; Santos, J.; Marsch, E.; Gan, W.-Q.

    2010-04-01

    Aims: We study the particle acceleration at a magnetic null point in the solar corona, considering self-consistent magnetic fields, plasma flows and the corresponding convective electric fields. Methods: We calculate the electromagnetic fields by 3-D magnetohydrodynamic (MHD) simulations and expose charged particles to these fields within a full-orbit relativistic test-particle approach. In the 3-D MHD simulation part, the initial magnetic field configuration is set to be a potential field obtained by extrapolation from an analytic quadrupolar photospheric magnetic field with a typically observed magnitude. The configuration is chosen so that the resulting coronal magnetic field contains a null. Driven by photospheric plasma motion, the MHD simulation reveals the coronal plasma motion and the self-consistent electric and magnetic fields. In a subsequent test-particle experiment the particle energies and orbits (determined by the forces exerted by the convective electric field and the magnetic field around the null) are calculated in time. Results: Test-particle calculations show that protons can be accelerated up to 30 keV near the null if the local plasma flow velocity is of the order of 1000 km s-1 (in solar active regions). The final parallel velocity is much higher than the perpendicular velocity, so that accelerated particles escape from the null along the magnetic field lines. A stronger convective electric field during large flare explosions can accelerate protons up to 2 MeV and electrons to 3 keV. Higher initial velocities help most protons to be strongly accelerated, but a few protons also risk being decelerated. Conclusions: Through its convective electric field and owing to nonuniform magnetic drifts and the de-magnetization process, the 3-D null can act as an effective accelerator for protons but not for electrons. Protons are more easily de-magnetized and accelerated than electrons because of their larger Larmor radii. Notice that macroscopic MHD

  16. Comparison of clinical bracket point registration with 3D laser scanner and coordinate measuring machine

    PubMed Central

    Nouri, Mahtab; Farzan, Arash; Baghban, Ali Reza Akbarzadeh; Massudi, Reza

    2015-01-01

    OBJECTIVE: The aim of the present study was to assess the diagnostic value of a laser scanner developed to determine the coordinates of clinical bracket points and to compare the results with those of a coordinate measuring machine (CMM). METHODS: This diagnostic experimental study was conducted on maxillary and mandibular orthodontic study casts of 18 adults with normal Class I occlusion. First, the coordinates of the bracket points were measured on all casts by a CMM. Then, the three-dimensional coordinates (X, Y, Z) of the bracket points were measured on the same casts by a 3D laser scanner designed at Shahid Beheshti University, Tehran, Iran. The validity and reliability of each system were assessed by means of the intraclass correlation coefficient (ICC) and Dahlberg's formula. RESULTS: The difference between the mean dimension and the actual value for the CMM was 0.0066 mm (95% CI: 69.98340, 69.99140). The mean difference for the laser scanner was 0.107 ± 0.133 mm (95% CI: -0.002, 0.24). In each method, differences were not significant. The ICC comparing the two methods was 0.998 for the X coordinate and 0.996 for the Y coordinate; the mean difference for coordinates recorded in the entire arch and for each tooth was 0.616 mm. CONCLUSION: The accuracy of clinical bracket point coordinates measured by the laser scanner was equal to that of the CMM. The mean difference in measurements was within the range of operator error. PMID:25741826
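    Dahlberg's formula used above quantifies the double-determination error between paired repeated measurements as sqrt(Σ d_i² / 2n). A minimal implementation with hypothetical duplicate readings is shown below.

```python
import numpy as np

def dahlberg_error(first, second):
    """Dahlberg's double-determination error: sqrt(sum(d_i^2) / (2 n)),
    where d_i are differences between paired repeated measurements."""
    d = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

# Hypothetical repeated bracket-point coordinates in mm (illustrative values only):
x_first = [12.31, 8.04, 15.22, 9.87]
x_second = [12.28, 8.09, 15.18, 9.91]
print(dahlberg_error(x_first, x_second))
```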

  17. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments like laser scanners and 3D cameras, or image-based techniques like structure from motion, produce huge point clouds as a basis for further object analysis. This has considerably changed the way data are compiled, away from selective manually guided processes towards automatic and computer-supported strategies. However, there is still a long way to go to achieve the quality and robustness of manual processes, as data sets are mostly very complex. Looking at existing strategies, 3D data processing for object detection and reconstruction relies heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and their inability to handle deviations. Furthermore, the lack of capabilities to integrate other data or information between the processing steps further exposes their limitations. This restricts the approaches to execution with a strict predefined strategy and does not allow deviations when and if new, unexpected situations arise. We propose a solution that induces intelligence into the processing activities through the use of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". The flexibility of the solution is demonstrated through two entirely different use case scenarios: Deutsche Bahn (the German railway system) for the outdoor scenarios and Fraport (Frankfurt Airport) for the indoor scenarios. Apart from the difference in their environments, they provide different conditions, which the solution needs to consider. While the locations of the objects at Fraport were previously known, those at DB were not known at the beginning.

  18. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
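    Strategy (iv) above registers the RGB-derived and TIR-derived point clouds with ICP. A bare-bones point-to-point ICP iteration (nearest-neighbour matching plus a closed-form SVD alignment) is sketched here as a generic illustration, not the adapted plane-based variant described in the paper; the iteration count is an assumed setting.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, n_iters=30):
    """Point-to-point ICP: repeatedly match nearest neighbours and re-align."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        _, idx = tree.query(src)             # nearest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                  # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```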

  19. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-07-29

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).

  20. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  1. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  2. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-11-15

    achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  3. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs becomes easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and infer geological beds and structures. Even though the LIDAR instrument was not easy to use in such a caving environment, the collected data showed remarkable precision according to the geometry of a few control points. We also decided to perform another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information

  4. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals.

    PubMed

    Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiong-Jun; Xie, X C; Wei, Jian; Wang, Jian

    2016-01-01

    Three-dimensional (3D) Dirac semimetals, which possess 3D linear dispersion in the electronic structure as a bulk analogue of graphene, have lately generated widespread interest in both materials science and condensed matter physics. Recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in the atmosphere. Here, by using point contact spectroscopy measurements, we observe exotic superconductivity around the point contact region on the surface of Cd3As2 crystals. The zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric around zero bias suggest p-wave-like unconventional superconductivity. Considering the topological properties of 3D Dirac semimetals, our findings may indicate that Cd3As2 crystals under certain conditions could be topological superconductors, which are predicted to support Majorana zero modes or gapless Majorana edge/surface modes in the boundary depending on the dimensionality of the material.

  5. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals

    NASA Astrophysics Data System (ADS)

    Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiong-Jun; Xie, X. C.; Wei, Jian; Wang, Jian

    2016-01-01

    Three-dimensional (3D) Dirac semimetals, which possess 3D linear dispersion in the electronic structure as a bulk analogue of graphene, have lately generated widespread interest in both materials science and condensed matter physics. Recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in the atmosphere. Here, by using point contact spectroscopy measurements, we observe exotic superconductivity around the point contact region on the surface of Cd3As2 crystals. The zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric around zero bias suggest p-wave-like unconventional superconductivity. Considering the topological properties of 3D Dirac semimetals, our findings may indicate that Cd3As2 crystals under certain conditions could be topological superconductors, which are predicted to support Majorana zero modes or gapless Majorana edge/surface modes in the boundary depending on the dimensionality of the material.

  6. Multicolor single-molecule imaging by spectral point-spread-function engineering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shechtman, Yoav; Weiss, Lucien E.; Backer, Adam S.; Moerner, William E.

    2016-02-01

    We extend the information content of the microscope's point-spread-function (PSF) by adding a new degree of freedom: spectral information. We demonstrate controllable encoding of a microscopic emitter's spectral information (color) and 3D position in the shape of the microscope's PSF. The design scheme works by exploiting the chromatic dispersion of an optical element placed in the optical path. By using numerical optimization we design a single physical pattern that yields different desired phase delay patterns for different wavelengths. To demonstrate the method's applicability experimentally, we apply it to super-resolution imaging and to multiple particle tracking.

  7. FOC Point-Spread Function Monitoring - Cycle 4

    NASA Astrophysics Data System (ADS)

    Jedrzejewski, Robert

    1994-01-01

    This proposal will image a UV standard star in the F/96 mode in order to monitor the point-spread function of the HST-COSTAR-FOC channel. Data will be taken every 5-7 weeks, following a planned COSTAR DOB move to ensure that the FOC keeps pace with desorption. The COSTAR DOB and COSTAR FOC lines can be used as generic SUs to plug in whenever such adjustments are needed (e.g. to compensate for secondary mirror moves).

  8. Point spread function reconstruction on the Gemini Canopus bench

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Neichel, Benoit; Veran, Jean-Pierre; Ellerbroek, Brent

    2013-12-01

    This paper discusses an open-loop, single-conjugate, point spread function reconstruction experiment performed with a bright calibration source and synthetic turbulence injected on the ground-level deformable mirror of the Multi Conjugate Adaptive Optics Canopus bench at Gemini South. Time histories of high-order Shack-Hartmann wavefront sensor slopes were recorded on the telemetry circular buffer, and time histories of short-exposure K-band point spread functions with and without turbulence injected were recorded with the Gemini South Adaptive Optics Imager. We discuss the processing of the data and show that the long-exposure background- and tip/tilt-removed turbulence image can be reconstructed at percent-level accuracy from the tip/tilt-removed de-noised wavefront sensor slope covariance matrix and from the long-exposure background- and tip/tilt-removed static image. Future experiments are planned with multiple calibration sources at infinite and finite range and turbulence injected on 2 deformable mirrors, aiming at validating the recently published point spread function reconstruction algorithm [Gilles et al., Appl. Opt. 51, 7443 (2012)] for closed-loop laser guide star multi-conjugate adaptive optics.

  9. Aberration averaging using point spread function for scanning projection systems

    NASA Astrophysics Data System (ADS)

    Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi

    2000-07-01

    Scanning projection systems play a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction. This averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique used to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.
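    The averaging effect along the scan can be illustrated numerically: the effective intensity point spread function seen by a point on the wafer is the average of the instantaneous, field-dependent PSFs it encounters while traversing the slit. The toy sketch below uses Gaussian stand-ins for those PSFs; the shift model and parameter values are assumptions for illustration only.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 401)              # image-plane coordinate (arbitrary units)
scan_positions = np.linspace(-1.0, 1.0, 51)  # field positions sampled along the scan

def psf_at(field_y, x):
    """Hypothetical single-field PSF: unit-area Gaussian whose centre shifts with
    field position, mimicking a field-dependent aberration."""
    centre = 0.1 * field_y ** 2              # assumed aberration-induced centroid shift
    g = np.exp(-0.5 * ((x - centre) / 0.15) ** 2)
    return g / g.sum()

# Effective PSF after scanning = average of the instantaneous PSFs along the scan.
effective_psf = np.mean([psf_at(y, x) for y in scan_positions], axis=0)
```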

  10. Extraction and refinement of building faces in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri

    2013-10-01

    In this paper, we present an approach to generate a 3D model of an urban scene from sensor data. The first milestone on that way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their outlines. This has already been accomplished within our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split complex building blocks into parts. Two different approaches are used: when underlying 2D ground polygons can be assumed, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, extraction of dominant planes takes place, using either the RANSAC or the J-linkage algorithm. These operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Finally, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected properties of roofs and walls are not fulfilled. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.

  11. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
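    The first step of this pipeline applies a Gabor filter bank tuned to the tag pattern. A single 2D Gabor kernel, the building block of such a bank, can be written as below; the kernel size, wavelength, orientation, and envelope width are assumed tuning parameters, not values from the paper.

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Real part of a 2D Gabor filter: a cosine carrier of the given wavelength and
    orientation theta, windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)     # coordinate along the carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength)
    return envelope * carrier

# A small bank covering the two tag directions could be built by varying theta,
# e.g. [gabor_kernel(theta=t) for t in (0.0, np.pi / 2)], and each kernel convolved
# with the tagged MR image (for instance via scipy.ndimage.convolve).
```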

  12. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which consists of a road cross section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which shows that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
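    The per-scan-line ground filter can be approximated by sliding a window along each cross section and keeping points whose elevation stays close to the local minimum. The simplified sketch below illustrates that idea; the window size, height tolerance, and the curb-height range mentioned in the comment are assumed values, not parameters from the paper.

```python
import numpy as np

def filter_ground_points(scan_line_z, window=15, height_tol=0.15):
    """Per-scan-line ground filter: keep a point as 'ground' if its elevation is
    within height_tol of the minimum elevation inside a sliding window."""
    z = np.asarray(scan_line_z, dtype=float)
    half = window // 2
    ground = np.zeros(len(z), dtype=bool)
    for i in range(len(z)):
        lo, hi = max(0, i - half), min(len(z), i + half + 1)
        ground[i] = z[i] - z[lo:hi].min() < height_tol
    return ground

# Curb candidates could then be sought at ground/non-ground transitions where the
# elevation jump matches a typical curb height (roughly 0.1-0.3 m).
```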

  13. The Effect of Dissipation Mechanism and Guide Field Strength on X-line Spreading in 3D Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Shepherd, Lucas; Cassak, P.; Drake, J.; Gosling, J.; Phan, T.; Shay, M. A.

    2013-07-01

    In two-ribbon flares, the fact that the ribbons separate in time is considered evidence of magnetic reconnection. However, in addition to separating, the ribbons can also elongate (as seen in animations of, for example, the Bastille Day flare). The elongation is undoubtedly related to the reconnection spreading in the out-of-plane direction. Indeed, naturally occurring magnetic reconnection generally begins in a spatially localized region and spreads in the direction perpendicular to the reconnection plane as time progresses. For example, it has been suggested that X-line spreading is necessary to explain the observation of X-lines extending more than 390 Earth radii (Phan et al., Nature, 404, 848, 2006), and such spreading has been seen in reconnection experiments. A sizeable out-of-plane (guide) magnetic field is present at flare sites and in the solar wind. Here, we study the effect that the dissipation mechanism and the strength of the guide field have on X-line spreading. We present results from three-dimensional numerical simulations of magnetic reconnection, comparing spreading with the Hall term to spreading with anomalous resistivity. Applications to solar flares and magnetic reconnection in the solar wind will be discussed.

  14. Toward 3D Printing of Medical Implants: Reduced Lateral Droplet Spreading of Silicone Rubber under Intense IR Curing.

    PubMed

    Stieghorst, Jan; Majaura, Daniel; Wevering, Hendrik; Doll, Theodor

    2016-03-01

    The direct fabrication of silicone-rubber-based individually shaped active neural implants requires high-speed-curing systems in order to prevent extensive spreading of the viscous silicone rubber materials during vulcanization. Therefore, an infrared-laser-based test setup was developed to cure the silicone rubber materials rapidly and to evaluate the resulting spreading in relation to its initial viscosity, the absorbed infrared radiation, and the surface tensions of the fabrication bed's material. Different low-adhesion materials (polyimide, Parylene-C, polytetrafluoroethylene, and fluorinated ethylene propylene) were used as bed materials to reduce the spreading of the silicone rubber materials by means of their well-known weak surface tensions. Further, O2-plasma treatment was performed on the bed materials to reduce the surface tensions. To calculate the absorbed radiation, the emittance of the laser was measured, and the absorptances of the materials were investigated with Fourier transform infrared spectroscopy in attenuated total reflection mode. A minimum silicone rubber spreading of 3.24% was achieved after 2 s curing time, indicating the potential usability of the presented high-speed-curing process for the direct fabrication of thermal-curing silicone rubbers. PMID:26967063

  15. Toward 3D Printing of Medical Implants: Reduced Lateral Droplet Spreading of Silicone Rubber under Intense IR Curing.

    PubMed

    Stieghorst, Jan; Majaura, Daniel; Wevering, Hendrik; Doll, Theodor

    2016-03-01

    The direct fabrication of silicone-rubber-based individually shaped active neural implants requires high-speed-curing systems in order to prevent extensive spreading of the viscous silicone rubber materials during vulcanization. Therefore, an infrared-laser-based test setup was developed to cure the silicone rubber materials rapidly and to evaluate the resulting spreading in relation to its initial viscosity, the absorbed infrared radiation, and the surface tensions of the fabrication bed's material. Different low-adhesion materials (polyimide, Parylene-C, polytetrafluoroethylene, and fluorinated ethylene propylene) were used as bed materials to reduce the spreading of the silicone rubber materials by means of their well-known weak surface tensions. Further, O2-plasma treatment was performed on the bed materials to reduce the surface tensions. To calculate the absorbed radiation, the emittance of the laser was measured, and the absorptances of the materials were investigated with Fourier transform infrared spectroscopy in attenuated total reflection mode. A minimum silicone rubber spreading of 3.24% was achieved after 2 s curing time, indicating the potential usability of the presented high-speed-curing process for the direct fabrication of thermal-curing silicone rubbers.

  16. Attribute-based point cloud visualization in support of 3-D classification

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Otepka, Johannes; Kania, Adam

    2016-04-01

    Despite the rich information available in LIDAR point attributes through full waveform recording, radiometric calibration and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of these at a time. Meanwhile, point cloud classification is rapidly evolving, and uses not only the individual attributes but combinations of these. In order to understand input data and output results better, more advanced methods for visualization are needed. Here we propose an algorithm of the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format that efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually, by applying predefined or user-generated palettes in a simple .xml format. The colours of the palette are assigned to the points by setting the respective Red, Green and Blue attributes of the point to result in the colour pre-defined by the palette for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute to be considered. Additionally, combinations of attributes can be visualized based on RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported and visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
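
    A minimal sketch of the colour-assignment step described above: a per-point attribute is mapped to RGB through a palette, optionally after histogram equalization of the attribute distribution. This is an illustration of the idea only, not the OPALS API; the function name and palette format are made up for the example.

```python
import numpy as np

def colorize_by_attribute(values, palette, equalize=True):
    """Map a per-point attribute to RGB colours.

    values  : (N,) attribute array (e.g. amplitude, echo width).
    palette : (M, 3) array of RGB rows in [0, 255], ordered from
              low to high attribute values.
    """
    v = np.asarray(values, dtype=float)
    if equalize:
        # rank-based equalization: position of each value in the empirical CDF
        t = np.argsort(np.argsort(v)) / max(len(v) - 1, 1)
    else:
        t = (v - v.min()) / max(np.ptp(v), 1e-12)
    idx = np.clip((t * (len(palette) - 1)).round().astype(int),
                  0, len(palette) - 1)
    return palette[idx]          # (N, 3) RGB attributes per point

# usage: rgb = colorize_by_attribute(amplitude, some_palette_array)
```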

  17. a Semi-Automated Point Cloud Processing Methodology for 3d Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. On the other hand, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation purposes has become an active area of research; however, fully automated cultural heritage documentation remains an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets based on parametric and procedural modelling techniques, using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th-century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  18. Status of point spread function determination for Keck adaptive optics

    NASA Astrophysics Data System (ADS)

    Ragland, S.; Jolissaint, L.; Wizinowich, P.; Neyman, C.

    2014-07-01

    There is great interest in the adaptive optics (AO) science community to overcome the limitations imposed by incomplete knowledge of the point spread function (PSF). To address this limitation, a program has been initiated at the W. M. Keck Observatory (WMKO) to demonstrate PSF determination for observations obtained with Keck AO science instruments. This paper aims to give a broad view of the progress achieved in this area. The concept and the implementation are briefly described. The results from on-sky on-axis NGS AO measurements using the NIRC2 science instrument are presented. On-sky performance of the technique is illustrated by comparing the reconstructed PSFs to NIRC2 PSFs. The accuracy of the reconstructed PSFs in terms of Strehl ratio and FWHM is discussed. Science cases for the first phase of science verification have been identified. More technical details of the program are presented elsewhere in the conference.

  19. CCD or CMOS camera calibration using point spread function

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Stanislas, M.; Coudert, S.

    2014-06-01

    We present a simple method based on the acquisition of a back-illuminated pinhole to estimate the point spread function (PSF) for CCD (or CMOS) sensor characterization. This method is used to measure the variations in sensitivity of 2D sensor array systems. The experimental results show that there is a variation in sensitivity for each position on the CCD of the calibrated camera, and that the pixel optical center error with respect to the geometrical center is in the range of 1/10th of a pixel. We claim that the pixel error comes most probably from the coherence of the laser light used, or possibly from defects in the shape, surface quality, or optical performance of the micro-lenses, or from non-uniformity of the parameters across the wafer. This may have significant consequences for coherent light imaging using CCD (or CMOS), such as Particle Image Velocimetry.

  20. The point spread function reconstruction by using Moffatlets — I

    NASA Astrophysics Data System (ADS)

    Li, Bai-Shun; Li, Guo-Liang; Cheng, Jun; Peterson, John; Cui, Wei

    2016-09-01

    Shear measurement is a crucial task in current and future weak lensing survey projects. The reconstruction of the point spread function (PSF) is one of the essential steps involved in this process. In this work, we present three different methods, Gaussianlets, Moffatlets and Expectation Maximization Principal Component Analysis (EMPCA), and quantify their efficiency in PSF reconstruction using four sets of simulated Large Synoptic Survey Telescope (LSST) star images. Gaussianlets and Moffatlets are two different sets of basis functions whose profiles are based on Gaussian and Moffat functions respectively. EMPCA is a statistical method performing an iterative procedure to find the principal components (PCs) of an ensemble of star images. Our tests show that: (1) Moffatlets always perform better than Gaussianlets. (2) EMPCA is more compact and flexible, but the noise existing in the PCs will contaminate the size and ellipticity of the PSF. By contrast, Moffatlets preserve the size and ellipticity very well.

  1. Multicolour localization microscopy by point-spread-function engineering

    NASA Astrophysics Data System (ADS)

    Shechtman, Yoav; Weiss, Lucien E.; Backer, Adam S.; Lee, Maurice Y.; Moerner, W. E.

    2016-09-01

    Super-resolution microscopy has revolutionized cellular imaging in recent years. Methods that rely on sequential localization of single point emitters enable spatial tracking at a resolution of ˜10-40 nm. Moreover, tracking and imaging in three dimensions is made possible by various techniques, including point-spread-function (PSF) engineering—namely, encoding the axial (z) position of a point source in the shape that it creates in the image plane. However, efficient multicolour imaging remains a challenge for localization microscopy—a task of the utmost importance for contextualizing biological data. Normally, multicolour imaging requires sequential imaging, multiple cameras or segmented dedicated fields of view. Here, we demonstrate an alternate strategy: directly encoding the spectral information (colour), in addition to three-dimensional position, in the image. By exploiting chromatic dispersion we design a new class of optical phase masks that simultaneously yield controllably different PSFs for different wavelengths, enabling simultaneous multicolour tracking or super-resolution imaging in a single optical path.

  2. Multicolour localization microscopy by point-spread-function engineering

    NASA Astrophysics Data System (ADS)

    Shechtman, Yoav; Weiss, Lucien E.; Backer, Adam S.; Lee, Maurice Y.; Moerner, W. E.

    2016-09-01

    Super-resolution microscopy has revolutionized cellular imaging in recent years. Methods that rely on sequential localization of single point emitters enable spatial tracking at a resolution of ∼10–40 nm. Moreover, tracking and imaging in three dimensions is made possible by various techniques, including point-spread-function (PSF) engineering—namely, encoding the axial (z) position of a point source in the shape that it creates in the image plane. However, efficient multicolour imaging remains a challenge for localization microscopy—a task of the utmost importance for contextualizing biological data. Normally, multicolour imaging requires sequential imaging, multiple cameras or segmented dedicated fields of view. Here, we demonstrate an alternate strategy: directly encoding the spectral information (colour), in addition to three-dimensional position, in the image. By exploiting chromatic dispersion we design a new class of optical phase masks that simultaneously yield controllably different PSFs for different wavelengths, enabling simultaneous multicolour tracking or super-resolution imaging in a single optical path.

  3. Global Calibration Method of a Camera Using the Constraint of Line Features and 3D World Points

    NASA Astrophysics Data System (ADS)

    Xu, Guan; Zhang, Xinyuan; Li, Xiaotao; Su, Jian; Hao, Zhaobing

    2016-08-01

    We present a reliable calibration method using the constraint of 2D projective lines and 3D world points to elaborate the accuracy of the camera calibration. Based on the relationship between the 3D points and the projective plane, the constraint equations of the transformation matrix are generated from the 3D points and 2D projective lines. The transformation matrix is solved by singular value decomposition. The proposed method is compared with the point-based calibration to verify the measurement validity. The mean values of the root-mean-square errors using the proposed method are 7.69×10^-4, 6.98×10^-4, 2.29×10^-4, and 1.09×10^-3, while those of the original method are 8.10×10^-4, 1.29×10^-2, 2.58×10^-2, and 8.12×10^-3. Moreover, the average logarithmic errors of the calibration method are evaluated and compared with the former method under different Gaussian noises and numbers of projective lines. The variances of the average errors using the proposed method are 1.70×10^-5, 1.39×10^-4, 1.13×10^-4, and 4.06×10^-4, which indicates the stability and accuracy of the method.

  4. Evaluating the Potential of Rtk-Uav for Automatic Point Cloud Generation in 3d Rapid Mapping

    NASA Astrophysics Data System (ADS)

    Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The use of geospatial data, with digital surface models as a basic reference, is mandatory to provide an accurate and quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is critical in these situations. UAVs as alternative platforms for 3D point cloud acquisition offer potential because of their flexibility and practicability combined with low-cost implementation. Moreover, the high-resolution data collected from UAV platforms have the capability to provide a quick overview of the disaster area. The target of this paper is to test and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study. Moreover, electronic hardware is used to simplify user interaction with the UAV, such as RTK-GPS/camera synchronization; besides the synchronization, lever-arm calibration is performed. The platform is equipped with a Sony NEX-5N 16.1-megapixel camera as the imaging sensor. The lens attached to the camera is a ZEISS prime lens with an F1.8 maximum aperture and 24 mm focal length, to deliver outstanding images. All necessary calibrations were performed and the flight was carried out over the area of interest at a flight height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with a horizontal accuracy of σ = 1.5 cm and a vertical accuracy of σ = 2.3 cm. The results of direct georeferencing are compared to these points, and the experimental results show that decimetre-level accuracy for the 3D point cloud is achievable with the proposed system, which is suitable
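
    The reported 2.38 cm ground sampling distance at a 120 m flight height with a 24 mm lens can be reproduced from the standard relation GSD = pixel pitch × flight height / focal length. A minimal worked example follows; the NEX-5N sensor width (23.4 mm) and image width (4912 px) are assumed from the camera's published specifications and are not stated in the abstract.

```python
# Ground sampling distance: GSD = pixel_pitch * flight_height / focal_length
sensor_width_mm = 23.4     # Sony NEX-5N APS-C sensor width (assumed spec)
image_width_px = 4912      # assumed spec
pixel_pitch_mm = sensor_width_mm / image_width_px   # ~0.00476 mm

flight_height_m = 120.0
focal_length_mm = 24.0

gsd_m = pixel_pitch_mm * 1e-3 * flight_height_m / (focal_length_mm * 1e-3)
print(round(gsd_m * 100, 2), "cm")   # ~2.38 cm, matching the abstract
```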

  5. Examination about Influence for Precision of 3d Image Measurement from the Ground Control Point Measurement and Surface Matching

    NASA Astrophysics Data System (ADS)

    Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.

    2015-05-01

    As 3D image measurement software has become widely used with recent developments in computer-vision technology, 3D measurement from images has expanded its field of application from desktop-sized objects to topographic surveys of large geographical areas. In particular, the orientation step, formerly a complicated process in image measurement, can now be performed automatically simply by taking many pictures around the object. For fully textured objects, the 3D measurement of surface features is carried out entirely automatically from the oriented images, which greatly facilitates the acquisition of dense 3D point clouds from images with high precision. Against this background, for small and middle-sized objects we provide all-around 3D measurement with a single off-the-shelf digital camera, and we have also developed technology for topographic measurement from airborne images taken by a small UAV [1-5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. For topographic measurement, we examine the influence of GCP distribution on accuracy, again using experimental data. In addition, we examine the differences in the analytical results of several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains its features. To verify the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topographic measurement, we used airborne image data photographed at the test field in Yadorigi, Matsuda City, Kanagawa Prefecture, Japan. Ground control points were established and measured by RTK-GPS and total station. We show the results of the analysis made

  6. LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction

    NASA Astrophysics Data System (ADS)

    Abdullah, S. M.; Awrangjeb, M.; Lu, G.

    2014-08-01

    Effective building detection and roof reconstruction are in high demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on an analysis of the DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Starting from the maximum LiDAR point height and moving towards the minimum, all the LiDAR points at each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted, and the coplanar point nearest to the midpoint of each line is taken as a seed point. This seed point and its neighbouring points are used to generate the plane equation. The plane is grown in a region-growing fashion until no new points can be added. A robust rule-based tree removal method, comprising four different rules, is applied subsequently to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated on six different data sets consisting of hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared to a recently proposed method.
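
    A minimal sketch of the seed-based region growing described above: fit a plane to a seed point and its neighbours, then repeatedly add nearby points whose distance to the plane is below a tolerance, refining the plane as the region grows. The radius and distance tolerance are illustrative, and the coplanarity test, line extraction and tree-removal rules are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_plane(pts):
    """Least-squares plane through pts: returns (unit normal, centroid)."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid

def grow_plane(points, seed_idx, radius=1.0, dist_tol=0.15):
    """Grow a planar segment from a seed point (illustrative thresholds)."""
    tree = cKDTree(points)
    region = set(tree.query_ball_point(points[seed_idx], radius))
    normal, centroid = fit_plane(points[list(region)])
    frontier = set(region)
    while frontier:
        candidates = set()
        for i in frontier:
            candidates |= set(tree.query_ball_point(points[i], radius))
        cand = sorted(candidates - region)
        if not cand:
            break
        dist = np.abs((points[cand] - centroid) @ normal)
        accepted = [i for i, d in zip(cand, dist) if d < dist_tol]
        if not accepted:
            break
        region.update(accepted)
        frontier = set(accepted)
        normal, centroid = fit_plane(points[list(region)])  # refine the plane
    return np.array(sorted(region))
```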

  7. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions, and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.

  8. Evaluation of Partially Overlapping 3D Point Cloud's Registration by using ICP variant and CloudCompare.

    NASA Astrophysics Data System (ADS)

    Rajendra, Y. D.; Mehrotra, S. C.; Kale, K. V.; Manza, R. R.; Dhumal, R. K.; Nagne, A. D.; Vibhute, A. D.

    2014-11-01

    Terrestrial Laser Scanners (TLS) are used to obtain dense point samples of a large object's surface. TLS is a new and efficient method to digitize large objects or scenes. The collected point samples come in different formats and coordinate systems. Several scans are required to cover a large object such as a heritage site. Point cloud registration is therefore an important task for bringing the different scans into a whole 3D model in one coordinate system. Point clouds can be registered using one of three approaches, or a combination of them: target-based, feature-extraction-based, and point-cloud-based. For the present study, we adopted the point-cloud-based registration approach. We collected partially overlapping 3D point cloud data of the Department of Computer Science & IT (DCSIT) building located in Dr. Babasaheb Ambedkar Marathwada University, Aurangabad. To obtain the complete point cloud information of the building we took 12 scans: 4 scans for the exterior and 8 scans for the interior façade data collection. Various algorithms are available in the literature, but Iterative Closest Point (ICP) is the most dominant one. Various researchers have developed variants of ICP for a better registration process. The ICP point cloud registration algorithm is based on the search for pairs of nearest points in two adjacent scans and calculates the transformation parameters between them; it provides the advantage that no artificial target is required for the registration process. We studied and implemented three variants of the ICP algorithm in MATLAB: Brute Force, KDTree, and Partial Matching. The results show that the implemented version of the ICP algorithm and its variants give better results in terms of speed and accuracy of registration compared with the CloudCompare open-source software.
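
    For orientation, below is a minimal point-to-point ICP sketch in Python using a KD-tree for nearest-neighbour correspondence search and an SVD-based rigid transform, in the spirit of the KDTree variant the authors implemented in MATLAB; it is an illustration of the general algorithm, not the authors' code, and the convergence settings are arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP with KD-tree correspondences (illustrative)."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)                   # nearest neighbours
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```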

  9. Incremental Refinement of FAÇADE Models with Attribute Grammar from 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.

    2016-06-01

    Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context, formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required. An incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated; it is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, the geometrical, semantic and topological consistency is ensured.

  10. Optimizing the rotating point spread function by SLM aided spiral phase modulation

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Bouchal, Z.

    2014-12-01

    We demonstrate a vortex point spread function (PSF) whose shape and rotation sensitivity to defocusing can be controlled by a phase-only modulation implemented in the spatial or frequency domain. Rotational effects are studied in detail as a result of the spiral modulation carried out in discrete radial and azimuthal sections with different topological charges. As the main result, a direct connection between properties of the PSF and the parameters of the spiral mask is found and subsequently used for optimal shaping of the PSF and control of its defocusing rotation rate. Experiments on the PSF rotation verify good agreement with theoretical predictions and demonstrate the potential of the method for applications in microscopy, particle tracking and 3D imaging.
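
    A minimal numpy sketch of building a phase-only mask whose azimuthal phase (topological charge) changes between discrete radial sections, the kind of sectioned spiral modulation the abstract describes; the charges and section radii below are arbitrary example values, not an optimized design.

```python
import numpy as np

def sectioned_spiral_mask(size=512, radii=(0.4, 0.7, 1.0), charges=(1, 2, 3)):
    """Phase-only mask exp(i*l*phi) with a different topological charge l
    in each annular (radial) section. radii are fractions of the aperture
    radius; all values are arbitrary examples."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    phase = np.zeros((size, size))
    r_in = 0.0
    for r_out, charge in zip(radii, charges):
        ring = (r >= r_in) & (r < r_out)
        phase[ring] = (charge * phi[ring]) % (2.0 * np.pi)
        r_in = r_out
    mask = np.exp(1j * phase)
    mask[r > 1.0] = 0.0          # clip outside the aperture
    return mask
```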

  11. Computer-aided determination of occlusal contact points for dental 3-D CAD.

    PubMed

    Maruyama, Tomoaki; Nakamura, Yasuo; Hayashi, Toyohiko; Kato, Kazumasa

    2006-05-01

    Present dental CAD systems enable us to design functional occlusal tooth surfaces which harmonize with the patient's stomatognathic function. In order to avoid occlusal interferences during tooth excursions, currently available systems usually use the patient's functional occlusal impressions for the design of occlusal contact points. Previous interference-free design, however, has been done on a trial-and-error basis using visual inspection. To improve this time-consuming procedure, this paper proposes a computer-aided system for assisting in the determination of the occlusal contact points by visualizing the appropriate regions of the opposing surface. The system can designate such regions from data on the opposing occlusal surfaces, and their relative movements can be simulated using a virtual articulator. Experiments on designing the crown of a lower first molar demonstrated that all contact points selected within the designated regions completely satisfied the required contact or separation during tooth excursions, confirming the effectiveness of our computer-aided procedure.

  12. A closed-form expression of the positional uncertainty for 3D point clouds.

    PubMed

    Bae, Kwang-Ho; Belton, David; Lichti, Derek D

    2009-04-01

    We present a novel closed-form expression for the positional uncertainty measured by a near-monostatic, time-of-flight laser range finder, with consideration of its measurement uncertainties. An explicit form of the angular variance of the estimated surface normal vector is also derived. This expression is useful for the precise estimation of the surface normal vector and for outlier detection when finding correspondences in order to register multiple three-dimensional point clouds. Two practical algorithms using these expressions are presented: a method for finding the optimal local neighbourhood size which minimizes the variance of the estimated normal vector, and a resampling method for point clouds.
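
    A minimal sketch of the two ingredients the abstract builds on: a PCA estimate of the surface normal from a local neighbourhood, and a scan over neighbourhood sizes that keeps the size giving the most stable normal. The stability score used here (ratio of the smallest to the middle covariance eigenvalue) is a crude stand-in for the paper's closed-form variance expression, and the candidate neighbourhood sizes are arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normal(neigh):
    """Surface normal = eigenvector of the smallest eigenvalue of the
    neighbourhood covariance matrix."""
    cov = np.cov((neigh - neigh.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return v[:, 0], w

def normal_with_best_k(points, i, k_candidates=(10, 20, 40, 80)):
    """Pick the neighbourhood size whose normal is most stable.

    Stability proxy: smallest / middle eigenvalue (smaller = flatter,
    less noisy neighbourhood) -- a simplification of the paper's
    closed-form variance of the normal.
    """
    tree = cKDTree(points)
    best = None
    for k in k_candidates:
        _, idx = tree.query(points[i], k=k)
        normal, w = pca_normal(points[idx])
        score = w[0] / max(w[1], 1e-12)
        if best is None or score < best[0]:
            best = (score, normal, k)
    return best[1], best[2]           # normal, chosen neighbourhood size
```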

  13. Effects of cyclone diameter on performance of 1D3D cyclones: Cut point and slope

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cyclones are a commonly used air pollution abatement device for separating particulate matter (PM) from air streams in industrial processes. Several mathematical models have been proposed to predict the cut point of cyclones as cyclone diameter varies. The objective of this research was to determine...

  14. Observation of Magnetic Reconnection at a 3D Null Point Associated with a Solar Eruption

    NASA Astrophysics Data System (ADS)

    Sun, J. Q.; Zhang, J.; Yang, K.; Cheng, X.; Ding, M. D.

    2016-10-01

    The magnetic null has long been recognized as a special structure serving as a preferential site for magnetic reconnection (MR). However, direct observational studies of MR at null points are largely lacking. Here, we show observations of MR around a magnetic null associated with an eruption that resulted in an M1.7 flare and a coronal mass ejection. The Geostationary Operational Environmental Satellites X-ray profile of the flare exhibited two peaks, at ∼02:23 UT and ∼02:40 UT on 2012 November 8, respectively. Based on the imaging observations, we find that the first and also primary X-ray peak originated from MR in the current sheet (CS) underneath the erupting magnetic flux rope (MFR). On the other hand, the second and also weaker X-ray peak was caused by MR around a null point located above the pre-eruption MFR. The interaction of the null point and the erupting MFR can be described as a two-step process. During the first step, the erupting and fast-expanding MFR passed through the null point, resulting in a significant displacement of the magnetic field surrounding the null. During the second step, the displaced magnetic field started to move back, resulting in a converging inflow and subsequently the MR around the null. The null-point reconnection is a different process from the current sheet reconnection in this flare; the latter is the cause of the main peak of the flare, while the former is the cause of the secondary peak and the conspicuous high-lying cusp structure.

  15. Historical Buildings Models and Their Handling via 3d Survey: from Points Clouds to User-Oriented Hbim

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2016-06-01

    This paper retraces some research activities and applications of 3D survey techniques and Building Information Modelling (BIM) in the Cultural Heritage environment. It describes the diffusion in recent years of the as-built BIM approach in Heritage Assets management, the so-called Built Heritage Information Modelling/Management (BHIMM or HBIM), which is nowadays an important and sustainable perspective in the documentation and administration of historic buildings and structures. The work focuses on the documentation derived from 3D survey techniques, which can be understood as a significant and unavoidable knowledge base for BIM conception and modelling, in the perspective of a coherent and complete management and valorisation of CH. It examines the potential offered by 3D integrated survey techniques to acquire, productively and quite easily, many kinds of 3D information, not only geometrical but also radiometric attributes, helping the recognition, interpretation and characterization of the state of conservation and degradation of architectural elements. From these data, they provide highly descriptive models corresponding to the geometrical complexity of buildings or aggregates in the well-known 5D (3D + time and cost dimensions). Point clouds derived from 3D survey acquisition (aerial and terrestrial photogrammetry, LiDAR and their integration) are reality-based models that can be used in a semi-automatic way to manage, interpret, and moderately simplify the geometrical shapes of historical buildings, which are, as is well known, examples of non-regular and complex geometry, in contrast to modern constructions with simple and regular ones. In the paper, some of these issues are addressed and analyzed through experiences regarding the creation and management of HBIM projects on historical heritage at different scales, using different platforms and various workflows. The paper focuses on LiDAR data handling with the aim to manage and extract geometrical information; on

  16. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design

    NASA Astrophysics Data System (ADS)

    Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas

    2011-03-01

    The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets. For 3D it has attracted little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability for reliably detecting planes in 3D point clouds. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel approach to designing the accumulator that focuses on achieving the same size for each cell, and compare it to existing designs.
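
    For context, a minimal sketch of the naive voting scheme such accumulator designs improve upon: every point votes for the planes ρ = p·n(θ, φ) over a regular grid of normal directions. The regular (θ, φ) grid over-samples directions near the poles, which is precisely the uneven-sampling problem discussed in the article; the grid sizes below are arbitrary.

```python
import numpy as np

def hough_planes(points, n_theta=45, n_phi=90, n_rho=200):
    """Naive 3D Hough voting for planes rho = p . n(theta, phi)."""
    theta = np.linspace(0.0, np.pi, n_theta)                  # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(T) * np.cos(P),
                     np.sin(T) * np.sin(P),
                     np.cos(T)], axis=-1).reshape(-1, 3)      # unit normals

    rho_max = np.linalg.norm(points, axis=1).max()
    bins = np.linspace(-rho_max, rho_max, n_rho + 1)
    acc = np.zeros((len(dirs), n_rho), dtype=np.int64)
    for j, n in enumerate(dirs):              # vote direction by direction
        acc[j] = np.histogram(points @ n, bins=bins)[0]

    j, r = np.unravel_index(acc.argmax(), acc.shape)
    best_normal = dirs[j]
    best_rho = 0.5 * (bins[r] + bins[r + 1])
    return best_normal, best_rho, acc
```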

  17. Spreading of excitation in 3-D models of the anisotropic cardiac tissue. I. Validation of the eikonal model.

    PubMed

    Franzone, P C; Guerri, L

    1993-02-01

    In this work we investigate, by means of numerical simulations, the performance of two mathematical models describing the spread of excitation in a three dimensional block representing anisotropic cardiac tissue. The first model is characterized by a reaction-diffusion system in the transmembrane and extracellular potentials v and u. The second model is derived from the first by means of a perturbation technique. It is characterized by an eikonal equation, nonlinear and elliptic in the activation time ψ(x). The level surfaces ψ(x) = t represent the wave-front positions. The numerical procedures based on the two models were applied to test functions and to excitation processes elicited by local stimulations in a relatively small block. The results are in excellent agreement, and for the same problem the computation time required by the eikonal equation is a small fraction of that needed for the reaction-diffusion system. Thus we have strong evidence that the eikonal equation provides a reliable and numerically efficient model of the excitation process. Moreover, numerical simulations have been performed to validate an approximate model for the extracellular potential based on knowledge of the excitation sequence. The features of the extracellular potential distribution affected by the anisotropic conductivity of the medium were investigated.

  18. Spread of excitation in 3-D models of the anisotropic cardiac tissue. II. Effects of fiber architecture and ventricular geometry.

    PubMed

    Franzone, P C; Guerri, L; Pennacchio, M; Taccardi, B

    1998-01-15

    We investigate a three-dimensional macroscopic model of wave-front propagation related to the excitation process in the left ventricular wall represented by an anisotropic bidomain. The whole left ventricle is modeled, whereas, in a previous paper, only a flat slab of myocardial tissue was considered. The direction of cardiac fibers, which affects the anisotropic conductivity of the myocardium, rotates from the epi- to the endocardium. If the ventricular wall is conceived as a set of packed surfaces, the fibers may be tangent to them or more generally may cross them obliquely; the latter case is described by an "imbrication angle." The effect of a simplified Purkinje network also is investigated. The cardiac excitation process, more particularly the depolarization phase, is modeled by a nonlinear elliptic equation, called an eikonal equation, in the activation time. The numerical solution of this equation is obtained by means of the finite element method, which includes an upwind treatment of the Hamiltonian part of the equation. By means of numerical simulations in an idealized model of the left ventricle, we try to establish whether the eikonal approach contains the essential basic elements for predicting the features of the activation patterns experimentally observed. We discuss and compare these results with those obtained in our previous papers for a flat part of myocardium. The general rules governing the spread of excitation after local stimulations, previously delineated for the flat geometry, are extended to the present, more realistic monoventricular model.

  19. Variability of the point spread function in the water column

    NASA Astrophysics Data System (ADS)

    Voss, Kenneth J.

    1990-09-01

    The Point Spread Function (PSF) is an important property in predicting beam propagation and imaging system performance. An instrument to measure the in situ PSF of ocean water has been built and PSF profiles obtained. This instrument consists of two parts, a flashlamp with cosine emission characteristics, and an imaging solid state camera system. The camera system includes a thermoelectrically cooled CCD array with over 50 dB of dynamic range. This allows the camera to measure the steeply peaked PSF over short (10 m) to long (80 m) ranges. Measurements of the PSF in three different locations are presented. One location was a coastal station off San Diego where the water column exhibited a well-defined shallow (approximately 30 meter) mixed layer with a particulate maximum (defined by a maximum in beam attenuation) at the bottom of this layer. During these measurements the PSF was highly variable with depth, as was to be expected due to the dependence of the PSF on particle concentration and size distribution. In the second example the water column was almost homogeneous (as evidenced by the beam attenuation profiles). Hence, the PSF showed very little dependence on depth. Measurements of the variation of the PSF with range are also presented. A simple relationship of the variation of the PSF with angle and optical path length is presented.

  20. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose a depth-first search segmentation algorithm traversing a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.

  1. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual that raw point clouds are filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps in order to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D feature identification directly from LiDAR data. An ultra-long-range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of
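
    A minimal sketch of the clustering step using scikit-learn's DBSCAN on the subset of points flagged as changed (e.g. points whose signed distance to the reference epoch exceeds a threshold); the eps, min_samples and change-threshold values below are placeholders, not the study's settings, and the per-point change values are assumed to have been computed beforehand.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_changes(points, signed_change, change_thresh=0.10,
                    eps=0.5, min_samples=20):
    """Isolate erosion / deposition clusters from per-point change values.

    points        : (N, 3) coordinates of the later epoch.
    signed_change : (N,) signed distance to the earlier epoch surface
                    (negative = erosion, positive = deposition).
    eps and min_samples are the two DBSCAN parameters the abstract
    highlights; the values here are placeholders.
    """
    changed = np.abs(signed_change) > change_thresh
    labels = np.full(len(points), -1, dtype=int)   # -1 = unchanged or noise
    if changed.any():
        labels[changed] = DBSCAN(eps=eps,
                                 min_samples=min_samples).fit_predict(points[changed])
    erosion = changed & (signed_change < 0)
    deposition = changed & (signed_change > 0)
    return labels, erosion, deposition
```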

  2. 3D Geological Outcrop Characterization: Automatic Detection of 3D Planes (Azimuth and Dip) Using LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B.

    2016-06-01

    Terrestrial laser scanning constitutes a powerful method in spatial information data acquisition and allows for geological outcrops to be captured with high resolution and accuracy. A crucial aspect for numerous geologic applications is the extraction of rock surface orientations from the data. This paper focuses on the detection of planes in rock surface data by applying a segmentation algorithm directly to a 3D point cloud. Its performance is assessed considering (1) reduced spatial resolution of data and (2) smoothing in the course of data pre-processing. The methodology is tested on simulations of progressively reduced spatial resolution defined by varying point cloud density. Smoothing of the point cloud data is implemented by modifying the neighborhood criteria during normals estimation. The considerable alteration of resulting planes emphasizes the influence of smoothing on the plane detection prior to the actual segmentation. Therefore, the parameter needs to be set in accordance with individual purposes and respective scales of studies. Furthermore, it is concluded that the quality of segmentation results does not decline even when the data volume is significantly reduced down to 10%. The azimuth and dip values of individual segments are determined for planes fit to the points belonging to one segment. Based on these results, azimuth and dip as well as strike character of the surface planes in the outcrop are assessed. Thereby, this paper contributes to a fully automatic and straightforward workflow for a comprehensive geometric description of outcrops in 3D.
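
    Once a plane has been fitted to a segment, its dip direction (azimuth) and dip follow directly from the unit normal. A minimal sketch is given below; the axis convention (x = east, y = north, z = up, azimuth measured clockwise from north) is an assumption and may differ from the one used in the study.

```python
import numpy as np

def azimuth_dip(normal):
    """Dip direction (degrees clockwise from +y = north) and dip angle
    (degrees from horizontal) of a plane with the given normal vector.
    Axis convention (x = east, y = north, z = up) is an assumption."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                      # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # the horizontal projection of the upward normal points down-dip
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_direction, dip

# usage: azimuth_dip((0.3, 0.3, 0.9)) -> roughly (45.0, 25.2)
```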

  3. WE-F-16A-02: Design, Fabrication, and Validation of a 3D-Printed Proton Filter for Range Spreading

    SciTech Connect

    Remmes, N; Courneyea, L; Corner, S; Beltran, C; Kemp, B; Kruse, J; Herman, M; Stoker, J

    2014-06-15

    Purpose: To design, fabricate and test a 3D-printed filter for proton range spreading in scanned proton beams. The narrow Bragg peak in lower-energy synchrotron-based scanned proton beams can result in longer treatment times for shallow targets due to energy switching time and plan quality degradation due to minimum monitor unit limitations. A filter with variable thicknesses patterned on the same scale as the beam's lateral spot size will widen the Bragg peak. Methods: The filter consists of pyramids dimensioned to have a Gaussian distribution in thickness. The pyramids are 2.5 mm wide at the base, 0.6 mm wide at the peak, 5 mm tall, and are repeated in a 2.5 mm pseudo-hexagonal lattice. Monte Carlo simulations of the filter in a proton beam were run using TOPAS to assess the change in depth profiles and lateral beam profiles. The prototypes were constrained to a 2.5 cm diameter disk to allow for micro-CT imaging of promising prototypes. Three different 3D printers were tested. Depth doses with and without the prototype filter were then measured in a ~70 MeV proton beam using a multilayer ion chamber. Results: The simulation results were consistent with design expectations. Prototypes printed on one printer were clearly unacceptable on visual inspection. Prototypes on a second printer looked acceptable, but the micro-CT image showed unacceptable voids within the pyramids. Prototypes from the third printer appeared acceptable visually and on micro-CT imaging. Depth dose scans using the prototype from the third printer were consistent with simulation results. Bragg peak width increased by about 3x. Conclusions: A prototype 3D-printed pyramid filter for range spreading was successfully designed, fabricated and tested. The filter has greater design flexibility and lower prototyping and production costs compared to traditional ridge filters. Printer and material selection played a large role in the successful development of the filter.

  4. Design point variation of 3-D loss and deviation for axial compressor middle stages

    NASA Technical Reports Server (NTRS)

    Roberts, William B.; Serovy, George K.; Sandercock, Donald M.

    1988-01-01

    The available data on middle-stage research compressors operating near design point are used to derive simple empirical models for the spanwise variation of three-dimensional viscous loss coefficients for middle-stage axial compressor blading. The models make it possible to quickly estimate the total loss and deviation across the blade span when the three-dimensional distribution is superimposed on the two-dimensional variation calculated for each blade element. It is noted that extrapolated estimates should be used with caution since the correlations have been derived from a limited data base.

  5. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture the three-dimensional shapes of components. The digital data density output from these probes ranges from a few discrete points to millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the inspected component, at the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information, since it relates directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle-time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal features such as high-frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post-processing algorithms for 3D data of fine features.
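
    A minimal sketch of one way to split an ordered 2D point sequence into linear segments within a prescribed tolerance (a recursive chord-deviation split in the spirit of Ramer-Douglas-Peucker); it is not the paper's method, and arc fitting and noise handling are omitted.

```python
import numpy as np

def split_into_linear_segments(pts, tol=0.05):
    """Recursively split an ordered (N, 2) point sequence into linear
    segments whose points lie within `tol` of the segment chord.
    Returns a sorted list of (start_index, end_index) pairs."""
    def max_deviation(a, b):
        p0, p1 = pts[a], pts[b]
        chord = p1 - p0
        length = np.linalg.norm(chord)
        if length == 0:
            return 0.0, a
        seg = pts[a:b + 1] - p0
        # perpendicular distance of the interior points to the chord
        d = np.abs(chord[0] * seg[:, 1] - chord[1] * seg[:, 0]) / length
        return d.max(), a + int(d.argmax())

    segments, stack = [], [(0, len(pts) - 1)]
    while stack:
        a, b = stack.pop()
        dmax, imax = max_deviation(a, b)
        if dmax <= tol or b - a < 2:
            segments.append((a, b))       # within tolerance: keep as one line
        else:
            stack.extend([(a, imax), (imax, b)])
    return sorted(segments)
```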

  6. 3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.

    2012-12-01

    The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.

  7. Absence of Critical Points of Solutions to the Helmholtz Equation in 3D

    NASA Astrophysics Data System (ADS)

    Alberti, Giovanni S.

    2016-11-01

    The focus of this paper is to show the absence of critical points for the solutions to the Helmholtz equation in a bounded domain $\Omega \subset \mathbb{R}^3$, given by
$$\begin{cases} \operatorname{div}(a \nabla u_{\omega}^{g}) - \omega\, q\, u_{\omega}^{g} = 0 & \text{in } \Omega,\\ u_{\omega}^{g} = g & \text{on } \partial\Omega. \end{cases}$$
We prove that for an admissible $g$ there exists a finite set of frequencies $K$ in a given interval and an open cover $\overline{\Omega} = \cup_{\omega \in K} \Omega_{\omega}$ such that $|\nabla u_{\omega}^{g}(x)| > 0$ for every $\omega \in K$ and $x \in \Omega_{\omega}$. The set $K$ is explicitly constructed. If the spectrum of this problem is simple, which is true for a generic domain $\Omega$, the admissibility condition on $g$ is a generic property.

  8. Vectorial point spread function and optical transfer function in oblique plane imaging.

    PubMed

    Kim, Jeongmin; Li, Tongcang; Wang, Yuan; Zhang, Xiang

    2014-05-01

    Oblique plane imaging, using remote focusing with a tilted mirror, enables direct two-dimensional (2D) imaging of any inclined plane of interest in three-dimensional (3D) specimens. It can image real-time dynamics of a living sample that changes rapidly or evolves its structure along arbitrary orientations. It also allows direct observations of any tilted target plane in an object of which orientational information is inaccessible during sample preparation. In this work, we study the optical resolution of this innovative wide-field imaging method. Using the vectorial diffraction theory, we formulate the vectorial point spread function (PSF) of direct oblique plane imaging. The anisotropic lateral resolving power caused by light clipping from the tilted mirror is theoretically analyzed for all oblique angles. We show that the 2D PSF in oblique plane imaging is conceptually different from the inclined 2D slice of the 3D PSF in conventional lateral imaging. Vectorial optical transfer function (OTF) of oblique plane imaging is also calculated by the fast Fourier transform (FFT) method to study effects of oblique angles on frequency responses.
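
    Since the OTF is the Fourier transform of the PSF, the final step of such a calculation reduces to an FFT once the 2D PSF intensity is available. A minimal scalar (non-vectorial) sketch of that step follows; it does not reproduce the vectorial diffraction calculation of the paper.

```python
import numpy as np

def otf_from_psf(psf):
    """Scalar OTF = normalized Fourier transform of a real, non-negative
    2D PSF sampled on a regular grid (DC term normalized to 1)."""
    psf = np.asarray(psf, dtype=float)
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    dc = otf[tuple(s // 2 for s in otf.shape)]   # zero-frequency component
    return otf / dc

# usage: mtf = np.abs(otf_from_psf(psf))   # modulation transfer function
```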

  9. Well log analysis to assist the interpretation of 3-D seismic data at Milne Point, north slope of Alaska

    USGS Publications Warehouse

    Lee, Myung W.

    2005-01-01

    In order to assess the resource potential of gas hydrate deposits in the North Slope of Alaska, 3-D seismic and well data at Milne Point were obtained from BP Exploration (Alaska), Inc. The well-log analysis has three primary purposes: (1) Estimate gas hydrate or gas saturations from the well logs; (2) predict P-wave velocity where there is no measured P-wave velocity in order to generate synthetic seismograms; and (3) edit P-wave velocities where degraded borehole conditions, such as washouts, affected the P-wave measurement significantly. Edited/predicted P-wave velocities were needed to map the gas-hydrate-bearing horizons in the complexly faulted upper part of 3-D seismic volume. The estimated gas-hydrate/gas saturations from the well logs were used to relate to seismic attributes in order to map regional distribution of gas hydrate inside the 3-D seismic grid. The P-wave velocities were predicted using the modified Biot-Gassmann theory, herein referred to as BGTL, with gas-hydrate saturations estimated from the resistivity logs, porosity, and clay volume content. The effect of gas on velocities was modeled using the classical Biot-Gassman theory (BGT) with parameters estimated from BGTL.

  10. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete with TLS point clouds in geometric quality, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid access difficulties and to guarantee safe survey conditions, this fundamental step of all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. Such a methodology will answer the needs of rock mass evaluation in a clearer way by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, safety, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) will be used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although these parameters are comparable to the manually extracted ones, their quality is inferior to that of the parameters extracted from the TLS point cloud.

  11. Recent advances in analysis and prediction of Rock Falls, Rock Slides, and Rock Avalanches using 3D point clouds

    NASA Astrophysics Data System (ADS)

    Abellan, A.; Carrea, D.; Jaboyedoff, M.; Riquelme, A.; Tomas, R.; Royan, M. J.; Vilaplana, J. M.; Gauvin, N.

    2014-12-01

    The acquisition of dense terrain information using well-established 3D techniques (e.g. LiDAR, photogrammetry) and the use of new mobile platforms (e.g. Unmanned Aerial Vehicles), together with increasingly efficient post-processing workflows for image treatment (e.g. Structure from Motion), are opening up new possibilities for analysing, modeling and predicting rock slope failures. Applications range from the monitoring of small changes at an unprecedented level of detail (e.g. sub-millimeter-scale deformation under lab-scale conditions) to the detection of slope deformation at regional scale. In this communication we will show the main accomplishments of the Swiss National Foundation project "Characterizing and analysing 3D temporal slope evolution", carried out by the Risk Analysis group (Univ. of Lausanne) in close collaboration with the RISKNAT and INTERES groups (Univ. of Barcelona and Univ. of Alicante, respectively). We have recently developed a series of innovative approaches for rock slope analysis using 3D point clouds; examples include the development of semi-automatic methodologies for the identification and extraction of rock-slope features such as discontinuities, material type, rockfall occurrence and deformation. Moreover, we have improved our understanding of progressive rupture characterization through several algorithms; examples include the computation of 3D deformation, the use of filtering techniques on permanently installed TLS, the use of rock slope failure analogies at different scales (laboratory simulations, monitoring at glacier fronts, etc.), and the modelling of the influence of external forces such as precipitation on the acceleration of the deformation rate. We have also been interested in the analysis of rock slope deformation prior to the occurrence of fragmental rockfalls and the interaction of this deformation with the spatial location of future events. In spite of these recent advances

  12. Registration of 3D point clouds and meshes: a survey from rigid to nonrigid.

    PubMed

    Tam, Gary K L; Cheng, Zhi-Quan; Lai, Yu-Kun; Langbein, Frank C; Liu, Yonghuai; Marshall, David; Martin, Ralph R; Sun, Xian-Fang; Rosin, Paul L

    2013-07-01

    Three-dimensional surface registration transforms multiple three-dimensional data sets into the same coordinate system so as to align overlapping components of these sets. Recent surveys have covered different aspects of either rigid or nonrigid registration, but seldom discuss them as a whole. Our study serves two purposes: 1) to give a comprehensive survey of both types of registration, focusing on three-dimensional point clouds and meshes, and 2) to provide a better understanding of registration from the perspective of data fitting. Registration is closely related to data fitting, as it comprises three core interwoven components: model selection, correspondences and constraints, and optimization. Study of these components 1) provides a basis for comparison of the novelties of different techniques, 2) reveals the similarity of rigid and nonrigid registration in terms of problem representations, and 3) shows how overfitting arises in nonrigid registration and the reasons for increasing interest in intrinsic techniques. We further summarize some practical issues of registration, including initializations and evaluations, and discuss some of our own observations, insights and foreseeable research trends.
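
    For readers new to rigid registration, the sketch below shows the classic point-to-point Iterative Closest Point loop (nearest-neighbour correspondences followed by a Kabsch/SVD alignment), which is the baseline such surveys build on. It is a generic textbook illustration, not code from the survey; the test cloud and transform are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    """Minimal point-to-point ICP: iterate nearest-neighbour matching and Kabsch."""
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)   # correspondences by nearest neighbour
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return current

# Usage sketch: register a rotated, shifted copy of a random cloud back onto the original.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 3))
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = cloud @ Rz.T + np.array([0.2, -0.1, 0.05])
aligned = icp(moved, cloud)
print(np.abs(aligned - cloud).max())   # residual should shrink towards zero for this easy case
```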

  13. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    PubMed Central

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and the inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567

  14. What's the Point of a Raster ? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of positional accuracy and spatial resolution, and introduces interpolation effects that are only partly controlled. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates computed directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.

  15. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three-dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry or on LiDAR, or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature address the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne-acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it relies on a reliable 3D segmentation algorithm detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.
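
    Planar-face segmentation of the kind this approach depends on is often prototyped with RANSAC plane detection. The sketch below is such a generic prototype, not the segmentation algorithm described in the article; iterating it (removing inliers and re-running) yields one candidate roof face per pass. The threshold and iteration count are placeholder values.

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.05, seed=None):
    """Detect the dominant plane in an (N,3) cloud with a simple RANSAC loop.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)     # point-to-plane distances
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

# Hypothetical roof patch: a tilted plane plus scattered outliers.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(800, 2))
roof = np.column_stack([xy, 5.0 + 0.2 * xy[:, 0] + rng.normal(0, 0.02, 800)])
clutter = rng.uniform(0, 10, size=(100, 3))
normal, d, inliers = ransac_plane(np.vstack([roof, clutter]), seed=0)
print(inliers.sum(), "inliers on the dominant plane")
```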

  16. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images

    NASA Astrophysics Data System (ADS)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Vosselman, George

    2015-07-01

    Point clouds generated from airborne oblique images have become a suitable source for detailed building damage assessment after a disaster event, since they provide the essential geometric and radiometric features of both the roof and the façades of a building. However, they often contain gaps that result either from physical damage or from a range of image artefacts or data acquisition conditions. A clear understanding of those causes, and accurate classification of gap type, are critical for 3D geometry-based damage assessment. In this study, a methodology was developed to delineate buildings from a point cloud and classify the gaps present. The building delineation process was carried out by identifying and merging the roof segments of single buildings from the pre-segmented 3D point cloud. This approach detected 96% of the buildings in a point cloud generated from airborne oblique images. The gap detection and classification methods were tested using two other data sets obtained from Unmanned Aerial Vehicle (UAV) images with a ground resolution of around 1-2 cm. The methods detected all significant gaps and correctly identified the gaps due to damage. The gaps due to damage were identified based on the surrounding damage pattern, applying Gabor wavelets and histogram of gradient orientation features. Two learning algorithms, SVM and Random Forests, were tested for mapping the damaged regions based on radiometric descriptors. The learning model based on Gabor features with Random Forests performed best, identifying 95% of the damaged regions. The generalization performance of the supervised model, however, was less successful: quality measures decreased by around 15-30%.
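
    To make the feature-plus-classifier step concrete, the sketch below extracts simple Gabor filter statistics from image patches and trains a Random Forest on them. It is a toy illustration assuming scikit-image and scikit-learn are available; the patches and labels are random placeholders, and the feature set is far simpler than the descriptors evaluated in the study.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Mean and variance of Gabor response magnitude over several frequencies/orientations."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(patch, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.var()]
    return np.array(feats)

# Toy training set: random grayscale patches with random 0/1 (intact/damaged) labels.
rng = np.random.default_rng(3)
patches = [rng.random((32, 32)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([gabor_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```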

  17. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for the sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. The ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted between the median ∆d and the sampling frequency to predict the trend of ∆d with increasing frequency. The differences in ∆d among the subjects and between the upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences in incisor feature points were noted among subjects and between the upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease ∆d further. PMID:26400112
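
    The curve-fitting step can be reproduced in outline with scipy's curve_fit. The sketch below assumes a power-law relation between median ∆d and sampling frequency purely for illustration; the paper's actual curve form and the data values used here are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(f, a, b):
    """Assumed model: median delta_d = a * f**(-b). The study's curve form may differ."""
    return a * f ** (-b)

# Hypothetical example data: sampling frequency (Hz) and median delta_d (micrometres).
freq = np.array([5.0, 10.0, 15.0, 30.0, 40.0, 50.0, 60.0])
dd_median = np.array([42.0, 25.0, 19.0, 11.0, 9.0, 8.0, 7.0])

params, _ = curve_fit(power_law, freq, dd_median, p0=(100.0, 0.7))
print("a = %.1f, b = %.2f, predicted delta_d at 120 Hz = %.1f um"
      % (*params, power_law(120.0, *params)))
```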

  18. Reconstruction, Quantification, and Visualization of Forest Canopy Based on 3d Triangulations of Airborne Laser Scanning Point Data

    NASA Astrophysics Data System (ADS)

    Vauhkonen, J.

    2015-03-01

    Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6-0.8 points m⁻² and field measurements aggregated at resolutions of 400-900 m². The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by analyzing the persistent homology of the obtained triangulations, which is applied for the first time to vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R²) with the stem volume considered, both alone (R²=0.65) and together with other predictors (R²=0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R² values were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
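
    As a rough analogue of the triangulation-and-filtration idea, the sketch below tetrahedralizes a synthetic crown point cloud with a Delaunay triangulation and sums only the volumes of tetrahedra whose longest edge falls below a threshold. This edge-length cut-off is a crude stand-in for the optimized or persistence-based filtrations used in the paper, and the point cloud and threshold are invented for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def filtered_canopy_volume(points, max_edge=2.0):
    """Sum of volumes of Delaunay tetrahedra whose longest edge is below max_edge (m).

    Large tetrahedra spanning canopy voids are discarded; the rest approximates
    the volume populated by biomass.
    """
    tri = Delaunay(points)
    total = 0.0
    for simplex in tri.simplices:            # each simplex holds 4 point indices
        verts = points[simplex]
        edges = [np.linalg.norm(verts[i] - verts[j])
                 for i in range(4) for j in range(i + 1, 4)]
        if max(edges) > max_edge:
            continue
        a, b, c = verts[1] - verts[0], verts[2] - verts[0], verts[3] - verts[0]
        total += abs(np.dot(a, np.cross(b, c))) / 6.0
    return total

# Hypothetical ALS returns from a single tree crown (metres).
rng = np.random.default_rng(4)
crown = rng.normal(loc=[0, 0, 12], scale=[1.5, 1.5, 2.0], size=(400, 3))
print(f"filtered volume ~ {filtered_canopy_volume(crown):.1f} m^3")
```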

  19. 3-D seismic over the Fausse Pointe Field: A case history of acquisition in a harsh environment

    SciTech Connect

    Duncan, P.M.; Nester, D.C.; Martin, J.A.; Moles, J.R.

    1995-12-31

    A 50 square mile 3D seismic survey was successfully acquired over Fausse Pointe Field in the latter half of 1994. The geophysical and logistical challenges of this project were immense. The steep dips and extensive range of target depths required a large shoot area with a relatively fine sampling interval. The surface, while essentially flat, included areas of cane fields, crawfish ponds, thick brush, swamp, open lakes and deep canals -- all typical of southern Louisiana. Planning and permitting of the survey began in late 1993. Field operations began in June 1994 and were completed in January 1995. Field personnel numbered 150 at the peak of operations. More than 19,000 crew hours were required to complete the job at a cost of over 5,000,000. The project was completed on time and on budget. The resulting images of the salt dome and surrounding rocks are not only beautiful but also reveal many opportunities for new hydrocarbon development.

  20. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery), but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide further benefits, such as time savings. The paper analyzes whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  1. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  2. Polarization Aberrations in Astronomical Telescopes: The Point Spread Function

    NASA Astrophysics Data System (ADS)

    Breckinridge, James B.; Lam, Wai Sze T.; Chipman, Russell A.

    2015-05-01

    Detailed knowledge of the image of the point spread function (PSF) is necessary to optimize astronomical coronagraph masks and to understand potential sources of errors in astrometric measurements. The PSF for astronomical telescopes and instruments depends not only on geometric aberrations and scalar wave diffraction but also on those wavefront errors introduced by the physical optics and the polarization properties of reflecting and transmitting surfaces within the optical system. These vector wave aberrations, called polarization aberrations, result from two sources: (1) the mirror coatings necessary to make the highly reflecting mirror surfaces, and (2) the optical prescription with its inevitable non-normal incidence of rays on reflecting surfaces. The purpose of this article is to characterize the importance of polarization aberrations, to describe the analytical tools to calculate the PSF image, and to provide the background to understand how astronomical image data may be affected. To show the order of magnitude of the effects of polarization aberrations on astronomical images, a generic astronomical telescope configuration is analyzed here by modeling a fast Cassegrain telescope followed by a single 90° deviation fold mirror. All mirrors in this example use bare aluminum reflective coatings and the illumination wavelength is 800 nm. Our findings for this example telescope are: (1) The image plane irradiance distribution is the linear superposition of four PSF images: one for each of the two orthogonal polarizations and one for each of two cross-coupled polarization terms. (2) The PSF image is brighter by 9% for one polarization component compared to its orthogonal state. (3) The PSF images for two orthogonal linearly polarized components are shifted with respect to each other, causing the PSF image for unpolarized point sources to become slightly elongated (elliptical) with a centroid separation of about 0.6 mas. This is important for both astrometry

  3. Three-dimensional localization precision of the double-helix point spread function versus astigmatism and biplane

    NASA Astrophysics Data System (ADS)

    Badieirostami, Majid; Lew, Matthew D.; Thompson, Michael A.; Moerner, W. E.

    2010-10-01

    Wide-field microscopy with a double-helix point spread function (DH-PSF) provides three-dimensional (3D) position information beyond the optical diffraction limit. We compare the theoretical localization precision for an unbiased estimator of the DH-PSF to that for 3D localization by astigmatic and biplane imaging using Fisher information analysis including pixelation and varying levels of background. The DH-PSF results in almost constant localization precision in all three dimensions for a 2 μm thick depth of field while astigmatism and biplane improve the axial localization precision over smaller axial ranges. For high signal-to-background ratio, the DH-PSF on average achieves better localization precision.

  4. Three-dimensional localization precision of the double-helix point spread function versus astigmatism and biplane.

    PubMed

    Badieirostami, Majid; Lew, Matthew D; Thompson, Michael A; Moerner, W E

    2010-10-18

    Wide-field microscopy with a double-helix point spread function (DH-PSF) provides three-dimensional (3D) position information beyond the optical diffraction limit. We compare the theoretical localization precision for an unbiased estimator of the DH-PSF to that for 3D localization by astigmatic and biplane imaging using Fisher information analysis including pixelation and varying levels of background. The DH-PSF results in almost constant localization precision in all three dimensions for a 2 μm thick depth of field while astigmatism and biplane improve the axial localization precision over smaller axial ranges. For high signal-to-background ratio, the DH-PSF on average achieves better localization precision.

  5. Spread of excitation in 3-D models of the anisotropic cardiac tissue. III. Effects of ventricular geometry and fiber structure on the potential distribution.

    PubMed

    Colli Franzone, P; Guerri, L; Pennacchio, M; Taccardi, B

    1998-07-01

    In a previous paper we studied the spread of excitation in a simplified model of the left ventricle, affected by fiber structure and obliqueness, curvature of the wall and the Purkinje network. In the present paper we investigate the extracellular potential distribution u in the same ventricular model. Given the transmembrane potential v associated with the spreading excitation, the extracellular potential u is obtained as the solution of a linear elliptic equation with a source term related to v. The potential distributions were computed for point stimulations at different intramural depths. The results of the simulations enabled us to identify a number of common features which appear in all the potential patterns irrespective of the pacing site. In addition, by splitting the sources into an axial and a conormal component, we were able to evaluate the contribution of the classical uniform dipole layer to the total potential field and the role of the superimposed axial component.

  6. Semi-automatic characterization of fractured rock masses using 3D point clouds: discontinuity orientation, spacing and SMR geomechanical classification

    NASA Astrophysics Data System (ADS)

    Riquelme, Adrian; Tomas, Roberto; Abellan, Antonio; Cano, Miguel; Jaboyedoff, Michel

    2015-04-01

    Investigation of fractured rock masses for different geological applications (e.g. fractured reservoir exploitation, rock slope instability, rock engineering, etc.) requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in 3D data acquisition using photogrammetric and/or LiDAR techniques currently allow a quick and accurate characterization of rock mass discontinuities. This contribution presents a methodology for: (a) the use of 3D point clouds for the identification and analysis of planar surfaces outcropping in a rocky slope; (b) the calculation of the spacing between different discontinuity sets; (c) the semi-automatic calculation of the parameters that play a central role in the Slope Mass Rating geomechanical classification. As for part (a) (discontinuity orientation), our proposal identifies and defines the algebraic equations of the different discontinuity sets of the rock slope surface by applying an analysis based on a coplanarity test of neighbouring points. Additionally, the procedure finds principal orientations by Kernel Density Estimation and identifies clusters (Riquelme et al., 2014). As a result of this analysis, each point is classified with a discontinuity set and with an outcrop plane (cluster). Regarding part (b) (discontinuity spacing), our proposal uses the previously classified point cloud to investigate how different outcropping planes are linked in space. Discontinuity spacing is calculated for each pair of linked clusters within the same discontinuity set, and the spacing values are then analysed by calculating their statistics. Finally, as for part (c), the previous results are used to calculate the parameters F1, F2 and F3 of the Slope Mass Rating geomechanical classification. This analysis is carried out for each discontinuity set using their respective orientations extracted in part (a). The open access tool SMRTool (Riquelme et al., 2014) is then used to calculate the F1 to F3 correction

  7. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
    • It provides easy management of connectivity levels in the resulting voxels.
    • It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate into any application.
    • One of the algorithms is implemented in C++ and C for platform independence and efficiency.
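
    For orientation, the sketch below shows the simplest possible point-cloud voxelization: binning points into an occupancy grid. It does not implement the topological voxelization of [1] or its connectivity-level management; the cloud and voxel size are placeholder values.

```python
import numpy as np

def voxelize_points(points, voxel_size):
    """Occupancy voxelization of an (N,3) point cloud.

    Returns a boolean 3-D array plus the grid origin, so that voxel (i, j, k)
    covers origin + [i, j, k] * voxel_size.
    """
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin

# Hypothetical point cloud (metres) voxelized at 0.5 m resolution.
rng = np.random.default_rng(5)
cloud = rng.uniform(0, 10, size=(10_000, 3))
grid, origin = voxelize_points(cloud, voxel_size=0.5)
print(grid.shape, grid.sum(), "occupied voxels")
```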

  8. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
    • It provides easy management of connectivity levels in the resulting voxels.
    • It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate into any application.
    • One of the algorithms is implemented in C++ and C for platform independence and efficiency. PMID:27408832

  9. Crustal thickness from 3D MCS data collected over the fast-spreading East Pacific Rise at 9°50'N

    NASA Astrophysics Data System (ADS)

    Aghaei, O.; Nedimović, M. R.; Canales, J.; Carton, H. D.; Carbotte, S. M.; Mutter, J. C.

    2011-12-01

    We compute, analyze and present crustal thickness variations for a section of the fast-spreading East Pacific Rise (EPR). The area of 3D coverage is between 9°38'N and 9°58'N (~1000 km²), where the documented eruptions of 1990-91 and 2005-06 occurred. The crustal thickness is computed by depth converting the two-way reflection travel times from the seafloor to the Moho. The seafloor and Moho reflections are picked on the migrated stack volume produced from the 3D multichannel seismic (MCS) data collected on R/V Marcus G. Langseth in the summer of 2008 during cruise MGL0812. The crustal velocities used for depth conversion were computed by Canales et al. (2003; 2011) by simultaneous inversion of seismic refractions and wide-angle Moho reflection traveltimes from four ridge-parallel and one ridge-perpendicular ocean bottom seismometer (OBS) profiles for which data were collected during the 1998 UNDERSHOOT experiment. The MCS data analysis included 1D and 2D filtering, offset-dependent spherical divergence correction, surface-consistent amplitude correction, common midpoint (CMP) sorting with flex binning, velocity analysis, normal moveout, and CMP stretch mute. The poststack processing included seafloor multiple mute and 3D Kirchhoff poststack time migration. Here we use the crustal thickness and Moho seismic signature variations to detail their relationship with ridge segmentation, crustal age, bathymetry, and on- and off-axis magmatism. On the western flank (Pacific plate) from 9°41' to 9°48', the Moho reflection is strong. From 9°48' to 9°52', the Moho reflection varies from moderate to weak and disappears from ~3 km to ~9 km from the ridge axis. On the eastern flank (Cocos plate) from 9°41' to 9°51', the Moho reflection varies from strong to moderate. From 9°51' to 9°54' the Moho reflection varies from moderate to weak and disappears beneath a region ~3 km to ~9 km from the axis. On the Cocos plate, across-axis crustal thickness variations (5.5-6.2 km) show a
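
    The depth-conversion step reduces to thickness = velocity × interval two-way time / 2. The sketch below applies that relation with a single assumed average crustal velocity; the study instead uses the laterally varying OBS-derived velocity model, and the travel-time picks shown here are hypothetical.

```python
def crustal_thickness(twt_seafloor_s, twt_moho_s, v_crust_ms=6500.0):
    """Depth-convert the seafloor-to-Moho two-way time into crustal thickness (m).

    thickness = v * (t_moho - t_seafloor) / 2, using an assumed average crustal
    velocity rather than a tomographic velocity model.
    """
    return v_crust_ms * (twt_moho_s - twt_seafloor_s) / 2.0

# Hypothetical picks: seafloor at 3.40 s TWT, Moho at 5.25 s TWT.
print(f"{crustal_thickness(3.40, 5.25):.0f} m")   # about 6012 m
```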

  10. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate the illumination-invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram-based distribution whereas the second relies on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
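
    The color-space step can be illustrated with a minimal per-point RGB-to-HSV conversion followed by a fixed hue/saturation gate. The sketch below is only that illustration: the hue band, saturation cut-off and the random colours are assumptions, whereas the paper derives its thresholds from histograms or adaptively.

```python
import colorsys
import numpy as np

def hsv_from_rgb(rgb):
    """Convert an (N,3) array of RGB values in [0,1] to HSV, point by point."""
    return np.array([colorsys.rgb_to_hsv(*c) for c in rgb])

def rust_mask(rgb, hue_range=(0.0, 0.08), min_sat=0.4):
    """Flag points whose hue falls in an assumed reddish-brown band with enough saturation."""
    hsv = hsv_from_rgb(rgb)
    return (hsv[:, 0] >= hue_range[0]) & (hsv[:, 0] <= hue_range[1]) & (hsv[:, 1] >= min_sat)

# Hypothetical per-point colours from a scanner-mounted camera.
rng = np.random.default_rng(6)
colors = rng.random((1000, 3))
print(rust_mask(colors).sum(), "candidate corrosion points")
```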

  11. Thick fibrous composite reinforcements behave as special second-gradient materials: three-point bending of 3D interlocks

    NASA Astrophysics Data System (ADS)

    Madeo, Angela; Ferretti, Manuel; dell'Isola, Francesco; Boisse, Philippe

    2015-08-01

    In this paper, we propose to use a second-gradient, 3D orthotropic model for the characterization of the mechanical behavior of thick woven composite interlocks. Such a second-gradient theory directly accounts for the out-of-plane bending rigidity of the yarns at the mesoscopic scale, which is, in turn, related to the bending stiffness of the fibers composing the yarns themselves. The yarns' bending rigidity evidently affects the macroscopic bending of the material, and this fact is revealed by presenting a three-point bending test on specimens of composite interlocks. These specimens differ from one another in the relative direction of the yarns with respect to the edges of the sample. Both types of specimens are independently seen to benefit from a second-gradient modeling for the correct description of their macroscopic bending modes. The results presented in this paper are essential for setting up a correct continuum framework suitable for the mechanical characterization of composite interlocks. The few second-gradient parameters introduced by the present model are all associated with peculiar deformation modes of the mesostructure (bending of the yarns) and are determined by an inverse approach. Although the presented results undoubtedly represent an important step toward the complete characterization of the mechanical behavior of fibrous composite reinforcements, more complex hyperelastic second-gradient constitutive laws must be conceived in order to account for the description of all possible mesostructure-induced deformation patterns.

  12. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes produced by orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment by superimposition became possible. Four-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks with the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients who had a normal skeletal and occlusal relationship and had undergone CBCT for diagnosis of temporomandibular disorder were analyzed. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark with respect to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model. PMID:25372707

  13. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shapes. Based on the high-resolution 3D LIDAR point cloud data of an individual tree, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of the individual tree were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
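
    A common simplification of crown projection area and crown volume uses convex hulls, as in the sketch below. This is not the angle-varying projection or shape-specific volume formulas described in the abstract, only a baseline; the synthetic crown points are invented, and a real workflow would first separate crown points from stem and ground returns.

```python
import numpy as np
from scipy.spatial import ConvexHull

def crown_projection_area(points):
    """Horizontal crown projection area from the 2-D convex hull of (x, y)."""
    return ConvexHull(points[:, :2]).volume   # for 2-D hulls, .volume is the area

def crown_volume(points):
    """Crown volume from the 3-D convex hull of the crown points."""
    return ConvexHull(points).volume

# Hypothetical crown points (metres).
rng = np.random.default_rng(7)
crown = rng.normal(loc=[0, 0, 8], scale=[2.0, 2.0, 1.5], size=(600, 3))
print(f"area ~ {crown_projection_area(crown):.1f} m^2, volume ~ {crown_volume(crown):.1f} m^3")
```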

  14. Automatic reconstruction of 3D urban landscape by computing connected regions and assigning them an average altitude from LiDAR point cloud image

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2014-10-01

    The demand for 3D city modeling has been increasing in many applications, such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, virtual city tourism inviting future visitors on a virtual city walkthrough, and others. We propose a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction of a 3D urban landscape was implemented by integrating all connected regions, which were extracted and extruded from altitude mask images. These mask images were generated from the gray-scale LiDAR image using altitude threshold ranges. In this study we successfully demonstrated the proposed method on a Kanazawa city center scene, applying it to the airborne LiDAR point cloud data.
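
    The mask-and-extrude idea can be sketched with altitude banding and connected-component labelling, as below. This is a schematic reading of the described workflow rather than the authors' implementation, assuming SciPy's ndimage.label; the raster, band edges and smoothing are placeholder values.

```python
import numpy as np
from scipy import ndimage

def extrude_regions(lidar_grid, band_edges):
    """Label connected regions per altitude band and assign each its mean altitude.

    lidar_grid: 2-D array of altitudes rasterized from a LiDAR point cloud (m).
    band_edges: ascending altitude band edges, e.g. [2, 5, 10, 20].
    Returns a float array where every connected region carries one flat height.
    """
    model = np.zeros_like(lidar_grid, dtype=float)
    for low, high in zip(band_edges[:-1], band_edges[1:]):
        mask = (lidar_grid >= low) & (lidar_grid < high)
        labels, n = ndimage.label(mask)            # connected regions in this band
        for region in range(1, n + 1):
            sel = labels == region
            model[sel] = lidar_grid[sel].mean()    # extrude at the average altitude
    return model

# Hypothetical 200x200 altitude raster (metres), smoothed random values.
rng = np.random.default_rng(8)
grid = ndimage.gaussian_filter(rng.uniform(0, 25, (200, 200)), 5)
print(extrude_regions(grid, [2, 5, 10, 20]).max())
```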

  15. Point spread function of the optical needle super-oscillatory lens

    SciTech Connect

    Roy, Tapashree; Rogers, Edward T. F.; Yuan, Guanghui; Zheludev, Nikolay I.

    2014-06-09

    Super-oscillatory optical lenses are known to achieve sub-wavelength focusing. In this paper, we analyse the imaging capabilities of a super-oscillatory lens by studying its point spread function. We experimentally demonstrate that a super-oscillatory lens can generate a point spread function 24% smaller than that dictated by the diffraction limit and has an effective numerical aperture of 1.31 in air. The object-image linear displacement property of these lenses is also investigated.

  16. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is very much reduced compared to that of the 1990s, the holographic concept is spreading into all scientific, social, and artistic activities of the present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  17. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

    Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is epi-illuminated, background fluorescence from out-of-focus excited molecules reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate the out-of-focus background and to improve the SNR and localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscope that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and under epi-illumination is compared. As expected, the SNR of DH-PSF microscopy based on selective-plane illumination is increased remarkably, so the 3D localization precision of the DH-PSF is improved significantly. We demonstrate its capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.

  18. Electronic and magnetic structure of 3d-transition-metal point defects in silicon calculated from first principles

    NASA Astrophysics Data System (ADS)

    Beeler, F.; Andersen, O. K.; Scheffler, M.

    1990-01-01

    We describe spin-unrestricted self-consistent linear muffin-tin-orbital (LMTO) Green-function calculations for Sc, Ti, V, Cr, Mn, Fe, Co, Ni, and Cu transition-metal impurities in crystalline silicon. Both defect sites of tetrahedral symmetry are considered. All possible charge states with their spin multiplicities, magnetization densities, and energy levels are discussed and explained with a simple physical picture. The early transition-metal interstitial and late transition-metal substitutional 3d ions are found to have low spin. This is in conflict with the generally accepted crystal-field model of Ludwig and Woodbury, but not with available experimental data. For the interstitial 3d ions, the calculated deep donor and acceptor levels reproduce all experimentally observed transitions. For substitutional 3d ions, a large number of predictions are offered for testing in future experimental studies.

  19. Theory of point-spread function artifacts due to structured mid-spatial frequency surface errors.

    PubMed

    Tamkin, John M; Dallas, William J; Milster, Tom D

    2010-09-01

    Optical design and tolerancing of aspheric or free-form surfaces require attention to surface form, structured surface errors, and nonstructured errors. We describe structured surface error profiles and effects on the image point-spread function using harmonic (Fourier) decomposition. Surface errors over the beam footprint map onto the pupil, where multiple structured surface frequencies mix to create sum and difference diffraction orders in the image plane at each field point. Difference frequencies widen the central lobe of the point-spread function and summation frequencies create ghost images.
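
    The mapping from a periodic surface error to satellite structure in the PSF can be seen with a small Fourier-optics sketch: a circular pupil carrying a single sinusoidal phase ripple is propagated to the image plane with an FFT, producing ghost orders on either side of the core. This illustrates only one structured frequency, not the frequency-mixing analysis of the paper, and the grid size, ripple amplitude and cycle count are arbitrary choices.

```python
import numpy as np

def psf_with_ripple(n=512, aperture_frac=0.25, ripple_cycles=10, ripple_amp_waves=0.02):
    """PSF of a circular pupil carrying a sinusoidal mid-spatial-frequency phase ripple.

    ripple_cycles is the number of error cycles across the pupil diameter; the
    ripple diffracts light into satellite orders beside the diffraction core.
    """
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / (n * aperture_frac / 2)
    pupil = (x ** 2 + y ** 2 <= 1.0).astype(float)
    phase = 2 * np.pi * ripple_amp_waves * np.sin(2 * np.pi * ripple_cycles * x / 2)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
    return psf / psf.max()

psf = psf_with_ripple()
print(psf.shape, psf.max())
```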

  20. Visualization of Buffer Capacity with 3-D "Topo" Surfaces: Buffer Ridges, Equivalence Point Canyons and Dilution Ramps

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul

    2016-01-01

    BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…

  1. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…

  2. Investigation of the numerics of point spread function integration in single molecule localization.

    PubMed

    Chao, Jerry; Ram, Sripad; Lee, Taiyoon; Ward, E Sally; Ober, Raimund J

    2015-06-29

    The computation of point spread functions, which are typically used to model the image profile of a single molecule, represents a central task in the analysis of single molecule microscopy data. To determine how the accuracy of the computation affects how well a single molecule can be localized, we investigate how the fineness with which the point spread function is integrated over an image pixel impacts the performance of the maximum likelihood location estimator. We consider both the Airy and the two-dimensional Gaussian point spread functions. Our results show that the point spread function needs to be adequately integrated over a pixel to ensure that the estimator closely recovers the true location of the single molecule with an accuracy that is comparable to the best possible accuracy as determined using the Fisher information formalism. Importantly, if integration with an insufficiently fine step size is carried out, the resulting estimates can be significantly different from the true location, particularly when the image data is acquired at relatively low magnifications. We also present a methodology for determining an adequate step size for integrating the point spread function. PMID:26191698
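
    The effect of the integration step size can be reproduced in a few lines: integrate a 2-D Gaussian PSF over one pixel with a midpoint rule at increasing sub-pixel sampling and watch the value converge. The sketch below assumes a Gaussian PSF and arbitrary pixel size, width and molecule position; the paper also treats the Airy profile and ties the required fineness to the maximum likelihood estimator's performance.

```python
import numpy as np

def gaussian_psf_pixel_value(x0, y0, pixel_center, pixel_size, sigma, n_sub):
    """Integrate a 2-D Gaussian PSF centred at (x0, y0) over one square pixel.

    The pixel is split into n_sub x n_sub sub-pixels and the PSF is evaluated
    at each sub-pixel centre (midpoint rule); finer n_sub gives a better integral.
    """
    half = pixel_size / 2.0
    offs = (np.arange(n_sub) + 0.5) / n_sub * pixel_size - half
    xs = pixel_center[0] + offs
    ys = pixel_center[1] + offs
    X, Y = np.meshgrid(xs, ys)
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g.sum() * (pixel_size / n_sub) ** 2

# Compare coarse vs fine integration for one pixel near the molecule location (arbitrary units).
for n_sub in (1, 2, 4, 16, 64):
    v = gaussian_psf_pixel_value(0.03, -0.02, (0.0, 0.0), pixel_size=0.1, sigma=0.08, n_sub=n_sub)
    print(f"n_sub={n_sub:3d}  integrated value = {v:.6f}")
```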

  3. Successful gas hydrate prospecting using 3D seismic - A case study for the Mt. Elbert prospect, Milne Point, North Slope Alaska

    USGS Publications Warehouse

    Inks, T.L.; Agena, W.F.

    2008-01-01

    In February 2007, the Mt. Elbert Prospect stratigraphic test well, Milne Point, North Slope Alaska encountered thick methane gas hydrate intervals, as predicted by 3D seismic interpretation and modeling. Methane gas hydrate-saturated sediment was found in two intervals, totaling more than 100 ft., identified and mapped based on seismic character and wavelet modeling.

  4. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusal surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge detection in the image is improved by an original algorithm which follows poorly detected Canny edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  5. Hubble Space Telescope Faint Object Camera calculated point-spread functions.

    PubMed

    Lyon, R G; Dorband, J E; Hollis, J M

    1997-03-10

    A set of observed noisy Hubble Space Telescope Faint Object Camera point-spread functions is used to recover the combined Hubble and Faint Object Camera wave-front error. The low-spatial-frequency wave-front error is parameterized in terms of a set of 32 annular Zernike polynomials. The midlevel and higher spatial frequencies are parameterized in terms of set of 891 polar-Fourier polynomials. The parameterized wave-front error is used to generate accurate calculated point-spread functions, both pre- and post-COSTAR (corrective optics space telescope axial replacement), suitable for image restoration at arbitrary wavelengths. We describe the phase-retrieval-based recovery process and the phase parameterization. Resultant calculated precorrection and postcorrection point-spread functions are shown along with an estimate of both pre- and post-COSTAR spherical aberration. PMID:18250862

  6. Hubble Space Telescope Faint Object Camera calculated point-spread functions.

    PubMed

    Lyon, R G; Dorband, J E; Hollis, J M

    1997-03-10

    A set of observed noisy Hubble Space Telescope Faint Object Camera point-spread functions is used to recover the combined Hubble and Faint Object Camera wave-front error. The low-spatial-frequency wave-front error is parameterized in terms of a set of 32 annular Zernike polynomials. The midlevel and higher spatial frequencies are parameterized in terms of set of 891 polar-Fourier polynomials. The parameterized wave-front error is used to generate accurate calculated point-spread functions, both pre- and post-COSTAR (corrective optics space telescope axial replacement), suitable for image restoration at arbitrary wavelengths. We describe the phase-retrieval-based recovery process and the phase parameterization. Resultant calculated precorrection and postcorrection point-spread functions are shown along with an estimate of both pre- and post-COSTAR spherical aberration.

  7. Scattering and the Point Spread Function of the New Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Schreur, Julian J.

    1996-01-01

    Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, with eight segments arranged in a flower-petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and having an incident wavelength of 900 nm is considered. Most of the light is confined within a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10⁻² near the center to 10⁻⁶ near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called

  8. Quantitative data quality metrics for 3D laser radar systems

    NASA Astrophysics Data System (ADS)

    Stevens, Jeffrey R.; Lopez, Norman A.; Burton, Robin R.

    2011-06-01

    Several quantitative data quality metrics for three dimensional (3D) laser radar systems are presented, namely: X-Y contrast transfer function, Z noise, Z resolution, X-Y edge & line spread functions, 3D point spread function and data voids. These metrics are calculated from both raw and/or processed point cloud data, providing different information regarding the performance of 3D imaging laser radar systems and the perceptual quality attributes of 3D datasets. The discussion is presented within the context of 3D imaging laser radar systems employing arrays of Geiger-mode Avalanche Photodiode (GmAPD) detectors, but the metrics may generally be applied to linear mode systems as well. An example for the role of these metrics in comparison of noise removal algorithms is also provided.

  9. Different effects of bladder distention on point A-based and 3D-conformal intracavitary brachytherapy planning for cervical cancer.

    PubMed

    Ju, Sang Gyu; Huh, Seung Jae; Shin, Jung Suk; Park, Won; Nam, Heerim; Bae, Sunhyun; Oh, Dongryul; Hong, Chae-Seon; Kim, Jin Sung; Han, Youngyih; Choi, Doo Ho

    2013-03-01

    This study sought to evaluate the differential effects of bladder distention on point A-based (AICBT) and three-dimensional conformal intracavitary brachytherapy (3D-ICBT) planning for cervical cancer. Two sets of CT scans were obtained for ten patients to evaluate the effect of bladder distention. After the first CT scan, with an empty bladder, a second set of CT scans was obtained with the bladder filled. The clinical target volume (CTV), bladder, rectum, and small bowel were delineated on each image set. The AICBT and 3D-ICBT plans were generated, and we compared the different planning techniques with respect to the dose characteristics of CTV and organs at risk. As a result of bladder distention, the mean dose (D50) was decreased significantly and geometrical variations were observed in the bladder and small bowel, with acceptable minor changes in the CTV and rectum. The average D2cm³ and D1cm³ showed a significant change in the bladder and small bowel with AICBT; however, no change was detected with the 3D-ICBT planning. No significant dose change in the CTV or rectum was observed with either the AICBT or the 3D-ICBT plan. The effect of bladder distention on dosimetrical change in 3D-ICBT planning appears to be minimal, in comparison with AICBT planning.

  10. The point spread function of the soft X-ray telescope aboard Yohkoh

    NASA Technical Reports Server (NTRS)

    Martens, Petrus C.; Acton, Loren W.; Lemen, James R.

    1995-01-01

    The point spread function of the SXT telescope aboard Yohkoh has been measured in flight configuration in three different X-ray lines at White Sands Missile Range. We have fitted these data with an elliptical generalization of the Moffat function. Our fitting method consists of chi-squared minimization in Fourier space, especially designed for the matching of sharply peaked functions. We find excellent fits, with a reduced chi-squared of order unity or less, for single-exposure point spread functions over most of the CCD. Near the edges of the CCD the fits are less accurate due to vignetting. From fitting results with summation of multiple exposures we find a systematic error in the fitting function of the order of 3% near the peak of the point spread function, which is close to the photon noise for typical SXT images in orbit. We find that the full width at half maximum and the fitting parameters vary significantly with CCD location. However, we also find that point spread functions measured at the same location are consistent with one another within the limit determined by photon noise. A 'best' analytical fit to the PSF as a function of position on the CCD is derived for use in SXT image enhancement routines. As a side result, we have found that SXT can determine the location of point sources to about a quarter of a 2.54 arc sec pixel.
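
    An elliptical Moffat profile and a direct least-squares fit to it can be set up as in the sketch below. This uses ordinary image-space curve fitting on a synthetic noisy PSF, not the Fourier-space chi-squared minimization developed in the paper, and every numerical value (image size, truth parameters, noise level) is invented for the illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def elliptical_moffat(coords, amp, x0, y0, a, b, theta, beta):
    """Elliptical Moffat profile evaluated on flattened (x, y) coordinate arrays."""
    x, y = coords
    ct, st = np.cos(theta), np.sin(theta)
    xr = (x - x0) * ct + (y - y0) * st
    yr = -(x - x0) * st + (y - y0) * ct
    return amp * (1.0 + (xr / a) ** 2 + (yr / b) ** 2) ** (-beta)

# Synthetic 64x64 'observed' PSF with noise (hypothetical, not SXT calibration data).
n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)
truth = (1000.0, 33.0, 30.0, 3.0, 4.5, 0.4, 2.5)
rng = np.random.default_rng(9)
img = elliptical_moffat((x.ravel(), y.ravel()), *truth) + rng.normal(0, 5, n * n)

p0 = (img.max(), n / 2, n / 2, 2.0, 2.0, 0.0, 2.0)   # rough starting guess
popt, _ = curve_fit(elliptical_moffat, (x.ravel(), y.ravel()), img, p0=p0)
print(np.round(popt, 2))
```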

  11. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    PubMed

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the investigated study, an improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used instead of PSFs from an existing model. The new PSF model was further validated by showing that its prediction matches an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with 42% better accuracy than the current PSF model prediction. PMID:26154937

  12. Evaluation of the Convergence Region of an Automated Registration Method for 3D Laser Scanner Point Clouds.

    PubMed

    Bae, Kwang-Ho

    2009-01-01

    Using three-dimensional point clouds from both simulated and real datasets from close-range and terrestrial laser scanners, the rotational and translational convergence regions of the Geometric Primitive Iterative Closest Point (GP-ICP) method are empirically evaluated. The results demonstrate that GP-ICP has a larger rotational convergence region than existing methods, e.g., the Iterative Closest Point (ICP) algorithm.
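
    For context, one iteration of the baseline point-to-point ICP that GP-ICP is compared against consists of nearest-neighbour matching followed by a closed-form rigid alignment (Kabsch/SVD). A minimal Python sketch of that generic step is given below; it is not the GP-ICP method itself.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            """One point-to-point ICP iteration: match each source point to its
            nearest target point, then solve for the rigid transform (R, t)."""
            _, idx = cKDTree(target).query(source)
            matched = target[idx]
            mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
            H = (source - mu_s).T @ (matched - mu_t)          # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return source @ R.T + t, R, t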

  13. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  14. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  15. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by applying deformation analysis directly to the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  16. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335

  17. SHAPES - Spatial, High-Accuracy, Position-Encoding Sensor for multi-point, 3-D position measurement of large flexible structures

    NASA Technical Reports Server (NTRS)

    Nerheim, N. M

    1987-01-01

    An electro-optical position sensor for precise simultaneous measurement of the 3-D positions of multiple points on large space structures is described. The sensor data rate is sufficient for most control purposes. Range is determined by time-of-flight correlation of short laser pulses returned from retroreflector targets using a streak tube/CCD detector. Angular position is determined from target image locations on a second CCD. Experimental verification of dynamic ranging to multiple targets is discussed.

  18. Uav-Based Acquisition of 3d Point Cloud - a Comparison of a Low-Cost Laser Scanner and Sfm-Tools

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Maas, H.-G.

    2015-08-01

    The project ADFEX (Adaptive Federative 3D Exploration of Multi Robot System) pursues the goal of developing a time- and cost-efficient system for exploration and monitoring tasks in unknown areas or buildings. A fleet of unmanned aerial vehicles equipped with appropriate sensors (laser scanner, RGB camera, near-infrared camera, thermal camera) was designed and built. A typical operational scenario may include the exploration of the object or area of investigation by a UAV equipped with a laser scanning range finder to generate a rough point cloud in real time, providing an overview of the object on a ground station as well as an obstacle map. The data about the object enable path planning for the robot fleet. Subsequently, the object will be captured by an RGB camera mounted on the second flying robot for the generation of a dense and accurate 3D point cloud using structure-from-motion techniques. In addition, the detailed image data serve as the basis for visual damage detection on the investigated building. This paper focuses on our experience with the use of a low-cost light-weight Hokuyo laser scanner onboard a UAV. The hardware components for laser-scanner-based 3D point cloud acquisition are discussed, problems are demonstrated and analyzed, and a quantitative analysis of the accuracy potential is shown, as well as a comparison with structure-from-motion tools.

  19. A New Stochastic Modeling of 3-D Mud Drapes Inside Point Bar Sands in Meandering River Deposits

    SciTech Connect

    Yin, Yanshu

    2013-12-15

    The depositional environment of the major sediments in eastern China oilfields is a meandering river, where mud drapes inside point bar sands occur and are recognized as important factors for underground fluid flow and the distribution of remaining oil. Detailed architectural analysis, and the related modeling of mud drapes inside a point bar, is practical work to enhance oil recovery. This paper illustrates a new stochastic modeling of mud drapes inside point bars. The method is a hierarchical strategy composed of three nested steps. Firstly, the model of meandering channel bodies is established using the Fluvsim method. Each channel centerline obtained from Fluvsim is preserved for the next simulation. Secondly, the curvature ratios of each meandering river at various positions are calculated to determine the occurrence of each point bar. The abandoned channel is used to characterize the geometry of each defined point bar. Finally, mud drapes inside each point bar are predicted through random sampling of various parameters, such as the number, horizontal intervals, dip angle, and extended distance of the mud drapes. A dataset collected from a reservoir in the Shengli oilfield of China was used to illustrate the mud drape modeling procedure proposed in this paper. The results show that the inner architectural elements of the meandering river are depicted fairly well in the model. More importantly, the high prediction precision from the cross validation of five drilled wells shows the practical value and significance of the proposed method.

  20. Correlation of Point B and Lymph Node Dose in 3D-Planned High-Dose-Rate Cervical Cancer Brachytherapy

    SciTech Connect

    Lee, Larissa J.; Sadow, Cheryl A.; Russell, Anthony; Viswanathan, Akila N.

    2009-11-01

    Purpose: To compare high-dose-rate (HDR) point B dose to pelvic lymph node dose using three-dimensional-planned brachytherapy for cervical cancer. Methods and Materials: Patients with FIGO Stage IB-IIIB cervical cancer received 70 tandem HDR applications using CT-based treatment planning. The obturator, external, and internal iliac lymph nodes (LN) were contoured. Per-fraction (PF) and combined-fraction (CF) right (R), left (L), and bilateral (Bil) nodal doses were analyzed. Point B dose was compared with LN dose-volume histogram (DVH) parameters by paired t test and Pearson correlation coefficients. Results: Mean PF and CF doses to point B were R 1.40 ± 0.14 Gy (CF: 7 Gy), L 1.43 ± 0.15 Gy (CF: 7.15 Gy), and Bil 1.41 ± 0.15 Gy (CF: 7.05 Gy). The correlation coefficients between point B and the D100, D90, D50, D2cc, D1cc, and D0.1cc LN doses were all less than 0.7. Only the D2cc to the obturator and the D0.1cc to the external iliac nodes were not significantly different from the point B dose. Significant differences between R and L nodal DVHs were seen, likely related to tandem deviation from irregular tumor anatomy. Conclusions: With HDR brachytherapy for cervical cancer, the per-fraction nodal dose approximates a dose equivalent to teletherapy. Point B is a poor surrogate for dose to specific nodal groups. Three-dimensionally defined nodal contours during brachytherapy provide a more accurate reflection of delivered dose and should be part of comprehensive planning of the total dose to the pelvic nodes, particularly when there is evidence of pathologic involvement.
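
    The statistical comparison described here (Pearson correlation and paired t-test between point B dose and nodal DVH parameters) can be reproduced for any two paired dose series with scipy. The values below are illustrative, not the study data.

        import numpy as np
        from scipy import stats

        point_b   = np.array([1.40, 1.43, 1.38, 1.45, 1.41, 1.39, 1.44])  # Gy per fraction (toy)
        node_d2cc = np.array([1.10, 1.25, 0.95, 1.30, 1.05, 1.00, 1.20])  # Gy per fraction (toy)

        r, r_p = stats.pearsonr(point_b, node_d2cc)   # correlation coefficient and p-value
        t, t_p = stats.ttest_rel(point_b, node_d2cc)  # paired t-test
        print(f"Pearson r = {r:.2f} (p = {r_p:.3f}); paired t-test p = {t_p:.3f}")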

  1. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    SciTech Connect

    Nasehi Tehrani, J; Wang, J; Guo, X; Yang, Y

    2014-06-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift for real-time 3D markerless registration of lung motion during radiotherapy. Method: 4DCT image datasets from Dir-lab (www.dir-lab.com) were used for creating a 3D boundary element model of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices creating the mesh of the lungs should share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, we implemented a probabilistic non-rigid registration method called coherent point drift to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method has been applied to images of the 10 patients in the Dir-lab dataset. The normal distribution of vertices to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating the displacement vectors and the degrees of freedom (DOFs) of lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for a distributed set of vertices inside the lung mesh. In this technique, lung motion velocity coherence is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and analyzing possible physiological and anatomical changes during treatment.
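
    The core of coherent point drift is a Gaussian-mixture correspondence step: every target point is softly assigned to every source vertex, with a uniform component absorbing outliers. A minimal Python sketch of that E-step is shown below; the subsequent update of the coherent displacement field is omitted, and the outlier weight w is an illustrative choice.

        import numpy as np

        def cpd_estep(X, Y, sigma2, w=0.1):
            """Posterior correspondence probabilities P[m, n] = p(source vertex m | target point n)
            for a Gaussian mixture centred on the source vertices Y, with outlier weight w."""
            M, D = Y.shape
            N = X.shape[0]
            d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(axis=2)   # (M, N) squared distances
            num = np.exp(-d2 / (2.0 * sigma2))
            c = (2.0 * np.pi * sigma2) ** (D / 2.0) * (w / (1.0 - w)) * (M / N)
            return num / (num.sum(axis=0, keepdims=True) + c)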

  2. A 4-point in-situ method to locate a discrete gamma-ray source in 3-D space.

    PubMed

    Byun, Jong-In; Choi, Hee-Yeoul; Yun, Ju-Yong

    2010-02-01

    The determination of the source position (x, y, z) of a discrete gamma-ray source using peak count rates from four measurement points was studied. We derived semi-empirical formulas to find the position under the assumption that attenuation by obstacles between the target source and the detector can be neglected. To validate the methodology, we performed locating experiments for a small-volume 137Cs source placed at 10 different positions on the floor of a laboratory, using the formulas derived in this study. A portable HPGe gamma spectrometry system with a virtual point detector concept was used. The calculated source positions were compared with reference values measured with a ruler. The applicability of the methodology was assessed based on the differences between the calculated and reference positions. PMID:19932029
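
    Neglecting attenuation, the peak count rate from an isotropic point source falls off with the inverse square of the source-detector distance, so four count rates give four equations that can be solved for (x, y, z) and the source strength. The Python sketch below illustrates that principle with a least-squares fit; it is not the paper's semi-empirical formulas, and the detector positions and count rates are made up.

        import numpy as np
        from scipy.optimize import least_squares

        detectors = np.array([[0, 0, 1.0], [2, 0, 1.0], [0, 2, 1.0], [2, 2, 1.5]])  # measurement points (m)
        rates = np.array([120.0, 45.0, 60.0, 30.0])                                 # net peak count rates (cps)

        def residuals(params):
            """params = (x, y, z, A); model: rate_i = A / |r_i - source|^2."""
            src, A = params[:3], params[3]
            d2 = ((detectors - src) ** 2).sum(axis=1)
            return A / d2 - rates

        fit = least_squares(residuals, x0=[1.0, 1.0, 0.0, 100.0])
        print("estimated source position:", fit.x[:3])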

  3. STRONG GRAVITATIONAL LENS MODELING WITH SPATIALLY VARIANT POINT-SPREAD FUNCTIONS

    SciTech Connect

    Rogers, Adam; Fiege, Jason D.

    2011-12-10

    Astronomical instruments generally possess spatially variant point-spread functions, which determine the amount by which an image pixel is blurred as a function of position. Several techniques have been devised to handle this variability in the context of the standard image deconvolution problem. We have developed an iterative gravitational lens modeling code called Mirage that determines the parameters of pixelated source intensity distributions for a given lens model. We are able to include the effects of spatially variant point-spread functions using the iterative procedures in this lensing code. In this paper, we discuss the methods to include spatially variant blurring effects and test the results of the algorithm in the context of gravitational lens modeling problems.

  4. Visualization of molecular fluorescence point spread functions via remote excitation switching fluorescence microscopy

    PubMed Central

    Su, Liang; Lu, Gang; Kenens, Bart; Rocha, Susana; Fron, Eduard; Yuan, Haifeng; Chen, Chang; Van Dorpe, Pol; Roeffaers, Maarten B. J.; Mizuno, Hideaki; Hofkens, Johan; Hutchison, James A.; Uji-i, Hiroshi

    2015-01-01

    The enhancement of molecular absorption, emission and scattering processes by coupling to surface plasmon polaritons on metallic nanoparticles is a key issue in plasmonics for applications in (bio)chemical sensing, light harvesting and photocatalysis. Nevertheless, the point spread functions for single-molecule emission near metallic nanoparticles remain difficult to characterize due to fluorophore photodegradation, background emission and scattering from the plasmonic structure. Here we overcome this problem by exciting fluorophores remotely using plasmons propagating along metallic nanowires. The experiments reveal a complex array of single-molecule fluorescence point spread functions that depend not only on nanowire dimensions but also on the position and orientation of the molecular transition dipole. This work has consequences for both single-molecule regime-sensing and super-resolution imaging involving metallic nanoparticles and opens the possibilities for fast size sorting of metallic nanoparticles, and for predicting molecular orientation and binding position on metallic nanoparticles via far-field optical imaging. PMID:25687887

  5. Visualization of molecular fluorescence point spread functions via remote excitation switching fluorescence microscopy.

    PubMed

    Su, Liang; Lu, Gang; Kenens, Bart; Rocha, Susana; Fron, Eduard; Yuan, Haifeng; Chen, Chang; Van Dorpe, Pol; Roeffaers, Maarten B J; Mizuno, Hideaki; Hofkens, Johan; Hutchison, James A; Uji-I, Hiroshi

    2015-01-01

    The enhancement of molecular absorption, emission and scattering processes by coupling to surface plasmon polaritons on metallic nanoparticles is a key issue in plasmonics for applications in (bio)chemical sensing, light harvesting and photocatalysis. Nevertheless, the point spread functions for single-molecule emission near metallic nanoparticles remain difficult to characterize due to fluorophore photodegradation, background emission and scattering from the plasmonic structure. Here we overcome this problem by exciting fluorophores remotely using plasmons propagating along metallic nanowires. The experiments reveal a complex array of single-molecule fluorescence point spread functions that depend not only on nanowire dimensions but also on the position and orientation of the molecular transition dipole. This work has consequences for both single-molecule regime-sensing and super-resolution imaging involving metallic nanoparticles and opens the possibilities for fast size sorting of metallic nanoparticles, and for predicting molecular orientation and binding position on metallic nanoparticles via far-field optical imaging. PMID:25687887

  6. A new methodology in fast and accurate matching of the 2D and 3D point clouds extracted by laser scanner systems

    NASA Astrophysics Data System (ADS)

    Torabi, M.; Mousavi G., S. M.; Younesian, D.

    2015-03-01

    Registration of point clouds is a common challenge in computer vision applications. As an application, the matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem. An error minimization function for registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which should be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants. Here, a point-to-plane ICP variant is employed. Principal Component Analysis (PCA) is used to obtain the tangent planes. It is thus shown that minimization of the proposed objective function reduces to the point-to-plane ICP variant. We utilized this algorithm to register point clouds of two partially overlapping profiles of a train wheel extracted from two viewpoints in 2D. In addition, a number of synthetic and real point clouds in 3D are studied to evaluate the reliability and rate of convergence of our method compared with other registration methods.
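
    The point-to-plane objective measures the distance from each source point to the tangent plane of its corresponding target point, with the plane normal taken from a local PCA of the target neighbourhood. A hedged Python sketch of that error term follows; the neighbourhood size and nearest-neighbour correspondence search are illustrative, and this is not the authors' full optimization.

        import numpy as np
        from scipy.spatial import cKDTree

        def pca_normals(points, k=10):
            """Unit normal at each point: the local covariance eigenvector with the
            smallest eigenvalue, i.e. the normal of the PCA tangent plane."""
            _, idx = cKDTree(points).query(points, k=k)
            normals = np.empty_like(points)
            for i, nb in enumerate(idx):
                nbrs = points[nb] - points[nb].mean(axis=0)
                _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
                normals[i] = Vt[-1]
            return normals

        def point_to_plane_error(source, target, normals):
            """Sum of squared point-to-plane distances over nearest-neighbour pairs."""
            _, idx = cKDTree(target).query(source)
            diff = source - target[idx]
            return np.sum(((diff * normals[idx]).sum(axis=1)) ** 2)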

  7. Evaluation of the Quantitative Accuracy of 3D Reconstruction of Edentulous Jaw Models with Jaw Relation Based on Reference Point System Alignment

    PubMed Central

    Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objectives: To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models with centric relation, and to quantitatively evaluate its accuracy. Methods: Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8m was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, from which the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model to centric relation. An observation coordinate system was interactively established. The straight-line distances in X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) between the remaining 7 pairs of center points derived from the contact method and the fitting method were measured and analyzed using a paired t-test. Results: The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test gave p > 0.05 for X and Z and p < 0.05 for Y. Conclusion: By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error of the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133

  8. Dynamic topology and flux rope evolution during non-linear tearing of 3D null point current sheets

    SciTech Connect

    Wyper, P. F.; Pontin, D. I.

    2014-10-15

    In this work, the dynamic magnetic field within a tearing-unstable three-dimensional current sheet about a magnetic null point is described in detail. We focus on the evolution of the magnetic null points and flux ropes that are formed during the tearing process. Generally, we find that both magnetic structures are created prolifically within the layer and are non-trivially related. We examine how nulls are created and annihilated during bifurcation processes, and describe how they evolve within the current layer. The type of null bifurcation first observed is associated with the formation of pairs of flux ropes within the current layer. We also find that new nulls form within these flux ropes, both following internal reconnection and as adjacent flux ropes interact. The flux ropes exhibit a complex evolution, driven by a combination of ideal kinking and their interaction with the outflow jets from the main layer. The finite size of the unstable layer also allows us to consider the wider effects of flux rope generation. We find that the unstable current layer acts as a source of torsional magnetohydrodynamic waves and dynamic braiding of magnetic fields. The implications of these results to several areas of heliophysics are discussed.

  9. Super-resolution method using sparse regularization for point-spread function recovery

    NASA Astrophysics Data System (ADS)

    Ngolè Mboula, F. M.; Starck, J.-L.; Ronayette, S.; Okumura, K.; Amiaux, J.

    2015-03-01

    In large-scale spatial surveys, such as the forthcoming ESA Euclid mission, images may be undersampled due to the optical sensors' sizes. Therefore, one may consider using a super-resolution (SR) method to recover aliased frequencies, prior to further analysis. This is particularly relevant for point-source images, which provide direct measurements of the instrument point-spread function (PSF). We introduce SParse Recovery of InsTrumental rEsponse (SPRITE), which is an SR algorithm using a sparse analysis prior. We show that such a prior provides significant improvements over existing methods, especially on low signal-to-noise ratio PSFs.

  10. 3D-Modeling of Vegetation from Lidar Point Clouds and Assessment of its Impact on Façade Solar Irradiation

    NASA Astrophysics Data System (ADS)

    Peronato, G.; Rey, E.; Andersen, M.

    2016-10-01

    The presence of vegetation can significantly affect the solar irradiation received on building surfaces. Due to the complex shape and seasonal variability of vegetation geometry, this topic has gained much attention from researchers. However, existing methods are limited to rooftops as they are based on 2.5D geometry and use simplified radiation algorithms based on view-sheds. This work contributes to overcoming some of these limitations, providing support for 3D geometry to include facades. Thanks to the use of ray-tracing-based simulations and detailed characterization of the 3D surfaces, we can also account for inter-reflections, which might have a significant impact on façade irradiation. In order to construct confidence intervals on our results, we modeled vegetation from LiDAR point clouds as 3D convex hulls, which provide the biggest volume and hence the most conservative obstruction scenario. The limits of the confidence intervals were characterized with some extreme scenarios (e.g. opaque trees and absence of trees). Results show that uncertainty can vary significantly depending on the characteristics of the urban area and the granularity of the analysis (sensor, building and group of buildings). We argue that this method can give us a better understanding of the uncertainties due to vegetation in the assessment of solar irradiation in urban environments, and therefore, the potential for the installation of solar energy systems.
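
    Modeling a tree crown as the convex hull of its LiDAR returns is straightforward with scipy; the hull facets can then be exported as an opaque obstruction mesh for the ray-tracing simulation. A minimal Python sketch is given below, assuming the returns of a single tree have already been clustered; the point values are synthetic.

        import numpy as np
        from scipy.spatial import ConvexHull

        tree_points = np.random.rand(500, 3) * [4.0, 4.0, 8.0]   # toy LiDAR returns of one tree (m)

        hull = ConvexHull(tree_points)
        print("hull volume [m^3]:", hull.volume)
        print("number of triangular facets:", len(hull.simplices))
        # hull.points[hull.simplices] gives the facet triangles, which can be written
        # out as an obstruction mesh for the irradiation simulation.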

  11. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. Comparison of UAV-Enabled Photogrammetry-Based 3D Point Clouds and Interpolated DSMs of Sloping Terrain for Rockfall Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Manousakis, J.; Zekkos, D.; Saroglou, F.; Clark, M.

    2016-10-01

    UAVs are expected to be particularly valuable for defining the topography of natural slopes that may be prone to geological hazards, such as landslides or rockfalls. UAV-enabled imagery and aerial mapping can lead to fast and accurate qualitative and quantitative results for photo documentation as well as basemap 3D analysis that can be used for geotechnical stability analyses. In this contribution, the case study of a rockfall near Ponti village, triggered during the November 17th 2015 Mw 6.5 earthquake in Lefkada, Greece, is presented with a focus on feature recognition and 3D terrain model development for use in rockfall hazard analysis. A significant advantage of the UAV was the ability to identify from aerial views the rockfall trajectory along the terrain, the accuracy of which is crucial to subsequent geotechnical back-analysis. Fast static GPS control points were measured for optimizing the internal and external camera parameters and model georeferencing. Emphasis is given to an assessment of the error associated with the basemap when fewer and poorly distributed ground control points are available. Results indicate that the spatial distribution and image occurrences of control points throughout the mapped area and image block are essential in order to produce accurate geospatial data with minimum distortions.

  13. Documenting a Complex Modern Heritage Building Using Multi Image Close Range Photogrammetry and 3d Laser Scanned Point Clouds

    NASA Astrophysics Data System (ADS)

    Vianna Baptista, M. L.

    2013-07-01

    Integrating different technologies and expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce fieldwork time, the two-person field survey team had to juggle, over three days, the venue's continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned in high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks on the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the texture and damage-mapping drawings of the exterior façades. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefits for the subsequent restoration project.

  14. Disentangling the history of complex multi-phased shell beds based on the analysis of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2015-04-01

    Shell beds are key features in sedimentary records throughout the Phanerozoic. The interplay between burial rates and population productivity is reflected in distinct degrees of shelliness. Consequently, shell beds may provide information on the various physical processes which led to the accumulation and preservation of hard parts. Many shell beds pass through a complex history of formation, being shaped by more than one factor. In shallow marine settings, the composition of shell beds is often strongly influenced by winnowing, reworking and transport. These processes may cause considerable time averaging and the accumulation of specimens which lived thousands of years apart. In the best case, the environment remained stable during that time span and the mixing does not mask the overall composition. A major obstacle for the interpretation of shell beds, however, is the amalgamation of shell beds of several depositional units into a single concentration, as is typical for tempestites and tsunamites. Disentangling such mixed assemblages requires a deep understanding of the ecological requirements of the taxa involved - which is achievable for geologically young shell beds with living relatives - and a statistical approach to quantify the contributions of the various death assemblages. Furthermore, it requires an understanding of the sedimentary processes potentially involved in their formation. Here we present the first attempt to describe and decipher such a multi-phase shell bed based on a high-resolution digital surface model (1 mm) combined with ortho-photos at a resolution of 0.5 mm per pixel. Documenting the oyster reef requires precisely georeferenced data; owing to the high redundancy of the point cloud, an accuracy of a few mm was achieved. The shell accumulation covers an area of 400 m2 with thousands of specimens, which were excavated during a three-month campaign at Stetten in Lower Austria. Formed in an Early Miocene estuary of the Paratethys Sea, it is mainly composed

  15. 3D Visualization of the Temporal and Spatial Spread of Tau Pathology Reveals Extensive Sites of Tau Accumulation Associated with Neuronal Loss and Recognition Memory Deficit in Aged Tau Transgenic Mice

    PubMed Central

    Fu, Hongjun; Hussaini, S. Abid; Wegmann, Susanne; Profaci, Caterina; Daniels, Jacob D.; Herman, Mathieu; Emrani, Sheina; Figueroa, Helen Y.; Hyman, Bradley T.; Davies, Peter; Duff, Karen E.

    2016-01-01

    3D volume imaging using iDISCO+ was applied to observe the spatial and temporal progression of tau pathology in deep structures of the brain of a mouse model that recapitulates the earliest stages of Alzheimer’s disease (AD). Tau pathology was compared at four timepoints, up to 34 months, as it spread through the hippocampal formation and out into the neocortex along an anatomically connected route. Tau pathology was associated with significant gliosis. No evidence for uptake and accumulation of tau by glia was observed. Neuronal cells did appear to have internalized tau, including in extrahippocampal areas, as a small proportion of cells that had accumulated human tau protein did not express detectable levels of human tau mRNA. At the oldest timepoint, mature tau pathology in the entorhinal cortex (EC) was associated with significant cell loss. As in human AD, mature tau pathology in the EC and the presence of tau pathology in the neocortex correlated with cognitive impairment. 3D volume imaging is an ideal technique for easily monitoring the spread of pathology over time in models of disease progression. PMID:27466814

  16. The point-spread function of fiber-coupled area detectors

    PubMed Central

    Holton, James M.; Nielsen, Chris; Frankel, Kenneth A.

    2012-01-01

    The point-spread function (PSF) of a fiber-optic taper-coupled CCD area detector was measured over five decades of intensity using a 20 µm X-ray beam and ∼2000-fold averaging. The ‘tails’ of the PSF clearly revealed that it is neither Gaussian nor Lorentzian, but instead resembles the solid angle subtended by a pixel at a point source of light held a small distance (∼27 µm) above the pixel plane. This converges to an inverse cube law far from the beam impact point. Further analysis revealed that the tails are dominated by the fiber-optic taper, with negligible contribution from the phosphor, suggesting that the PSF of all fiber-coupled CCD-type detectors is best described as a Moffat function. PMID:23093762
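
    The geometric picture described above (a point source held a small height h above the pixel plane) gives a PSF proportional to the solid angle subtended by each pixel, which falls off as h/(r² + h²)^(3/2) and approaches an inverse-cube law for r much larger than h. The Python sketch below evaluates that radial profile with the quoted h ≈ 27 µm; the radial grid is illustrative.

        import numpy as np

        def solid_angle_psf(r_um, h_um=27.0):
            """Unnormalized PSF tail model: solid angle subtended by a pixel at
            radius r from a point source held a height h above the pixel plane."""
            return h_um / (r_um ** 2 + h_um ** 2) ** 1.5

        r = np.linspace(0.0, 500.0, 1001)          # radius in microns
        psf = solid_angle_psf(r)
        psf /= psf[0]                              # normalize to the on-axis value
        # far from the impact point, psf falls off roughly as r**-3 (inverse cube law)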

  17. Measurement of Phased Array Point Spread Functions for Use with Beamforming

    NASA Technical Reports Server (NTRS)

    Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis

    2011-01-01

    Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented here investigates direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that the experimental measurements trend with theory with regard to source offset. The source shows the expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.

  18. Resolution below the point spread function for diffuse optical imaging using fluorescence lifetime multiplexing

    PubMed Central

    Rice, William L.; Hou, Steven; Kumar, Anand T. N.

    2014-01-01

    We show that asymptotic lifetime-based fluorescence tomography can localize multiple-lifetime targets separated well below the diffuse point spread function of a turbid medium. This is made possible due to a complete diagonalization of the time domain forward problem in the asymptotic limit. We also show that continuous wave or direct time gate approaches to fluorescence tomography are unable to achieve this separation, indicating the unique advantage of a decay-amplitude-based approach for tomographic lifetime multiplexing with time domain data. PMID:23938969

  19. 3-D stimulated emission depletion microscopy with programmable aberration correction.

    PubMed

    Lenz, Martin O; Sinclair, Hugo G; Savell, Alexander; Clegg, James H; Brown, Alice C N; Davis, Daniel M; Dunsby, Chris; Neil, Mark A A; French, Paul M W

    2014-01-01

    We present a stimulated emission depletion (STED) microscope that provides 3-D super resolution by simultaneous depletion using beams with both a helical phase profile for enhanced lateral resolution and an annular phase profile to enhance axial resolution. The 3-D depletion point spread function is realised using a single spatial light modulator that can also be programmed to compensate for aberrations in the microscope and the sample. We apply it to demonstrate the first 3-D super-resolved imaging of an immunological synapse between a Natural Killer cell and its target cell.

  20. Formation of Dirac point and the topological surface states inside the strained gap for mixed 3D Hg1-xCdxTe

    NASA Astrophysics Data System (ADS)

    Marchewka, Michał

    2016-10-01

    In this paper the results of numerical calculations for three-dimensional (3D) strained Hg1-xCdxTe layers, with Cd composition x from 0.1 to 0.155 and different lattice-constant mismatches, are presented. For the investigated range of Cd composition, the negative energy gap (Eg = Γ8 − Γ6) in Hg1-xCdxTe is smaller than in pure HgTe, which, as it turns out, has a significant influence on the topological surface states (TSS) and the position of the Dirac point. Numerical calculations based on the finite-difference method applied to the 8×8 k·p model with in-plane tensile strain for a (001) growth-oriented structure show that a Dirac cone inside the induced insulating band gap can be obtained for nonzero Cd composition and a larger strain, caused by a larger lattice mismatch, than for the 3D HgTe topological insulator. It is also shown how different Cd compositions move the Dirac cone from the valence band into the band gap. The presented results show that 75 nm wide 3D Hg1-xCdxTe structures with x ≈ 0.155 and 1.6% lattice mismatch make the system a true topological insulator, with a dispersion of the topological surface states similar to that obtained for the strained CdTe/HgTe QW.

  1. Effects of point-spread function on calibration and radiometric accuracy of CCD camera.

    PubMed

    Du, Hong; Voss, Kenneth J

    2004-01-20

    The point-spread function (PSF) of a camera can seriously affect the accuracy of radiometric calibration and measurement. We found that the PSF can produce a 3.7% difference between the apparent measured radiances of two plaques of different sizes under the same illumination. This difference can be removed by deconvolution with the measured PSF. To determine the PSF, many images of a collimated beam from a He-Ne laser are averaged. Since our optical system is focused at infinity, it should focus this source to a single pixel. Although the measured PSF is very sharp, dropping by 4 and 6 orders of magnitude at 8 and 100 pixels away from the point source, respectively, we show that the effect of the PSF as far as 100 pixels away cannot be ignored without introducing an appreciable error into the calibration. We believe that the PSF should be taken into account in all optical systems to obtain accurate radiometric measurements.

  2. The Effects of Instrumental Elliptical Polarization on Stellar Point Spread Function Fine Structure

    NASA Technical Reports Server (NTRS)

    Carson, Joseph C.; Kern, Brian D.; Breckinridge, James B.; Trauger, John T.

    2005-01-01

    We present procedures and preliminary results from a study on the effects of instrumental polarization on the fine structure of the stellar point spread function (PSF). These effects are important to understand because the aberration caused by instrumental polarization in an otherwise diffraction-limited system will likely have severe consequences for extreme high-contrast imaging systems such as NASA's planned Terrestrial Planet Finder (TPF) mission and the proposed NASA Eclipse mission. The report here, describing our efforts to examine these effects, includes two parts: 1) a numerical analysis of the effect of metallic reflection, with some polarization-specific retardation, on a spherical wavefront; 2) an experimental approach for observing this effect, along with some preliminary laboratory results. While the experimental phase of this study requires more fine-tuning to produce meaningful results, the numerical analysis indicates that the inclusion of polarization-specific phase effects (retardation) results in a point spread function (PSF) aberration more severe than the amplitude (reflectivity) effects previously recorded in the literature.

  3. Duality between the dynamics of line-like brushes of point defects in 2D and strings in 3D in liquid crystals

    NASA Astrophysics Data System (ADS)

    Digal, Sanatan; Ray, Rajarshi; Saumia, P. S.; Srivastava, Ajit M.

    2013-10-01

    We analyze the dynamics of dark brushes connecting point vortices of strength ±1 formed in the isotropic-nematic phase transition of a thin layer of nematic liquid crystal, using a crossed-polarizer setup. The evolution of the brushes is seen to be remarkably similar to the evolution of line defects in a three-dimensional nematic liquid crystal system. Even phenomena like the intercommutativity of strings are routinely observed in the dynamics of brushes. We test the hypothesis of a duality between the two systems by determining exponents for the coarsening of the total brush length with time as well as the shrinking of the size of an isolated loop. Our results show scaling behavior for the brush length as well as the loop size, with the corresponding exponents in good agreement with the 3D case of string defects.

  4. Measurement and analysis of the point spread function with regard to straylight correction

    NASA Astrophysics Data System (ADS)

    Achatzi, Julian; Fischer, Gregor; Zimmer, Volker; Paulus, Dietrich; Bonnet, Gerhard

    2015-02-01

    Stray light is the part of an image that is formed by misdirected light. That is, an ideal optic would map a point in the scene onto a point in the image; with real optics, however, some of the light is misdirected. This is due to effects such as scattering at edges, Fresnel reflections at optical surfaces, scattering at parts of the housing, scattering from dust and imperfections on and inside the lenses, and other causes. These effects lead to errors in colour measurements using spectral radiometers and other systems such as scanners. Stray light also limits the dynamic range that can be achieved with high-dynamic-range (HDR) technologies and can lead to the rejection of cameras on quality grounds. It is therefore of interest to measure, quantify and correct these effects. Our work aims at measuring the stray light point spread function (stray light PSF) of a system composed of a lens and an imaging sensor. In this paper we present a framework for the evaluation of PSF models which can be used for the correction of stray light. We investigate whether and how our evaluation framework can point out errors in these models and how these errors influence stray light correction.

  5. Equatorial Spread F Variability Investigations in Brazil: Preliminary Results from Conjugate Point Equatorial Experiments Campaign - COPEX

    NASA Astrophysics Data System (ADS)

    Abdu, M. A.; Batista, I. S.; Reinisch, B. W.; Souza, J. R.; Paula, E. R.; Sobral, J. H.; Bullett, T. W.

    2004-05-01

    Equatorial spread F variability can result from diverse conditions of the coupling processes that control the dynamic state of the ambient ionosphere-atmosphere system during the evening hours. While the sunset-associated prereversal electric field enhancement (PRE) is known to be the most basic prerequisite for initiating ESF development, the intensity of an event seems to be controlled also by other factors, such as the symmetry/asymmetry of the ionization anomaly, flux-tube integrated conductivities, and a possible (but largely unknown) perturbation source. An evaluation of the possible contributions of some of these factors to the observed ESF variability is possible from measurements carried out at equatorial and conjugate-point locations. A conjugate point equatorial observational campaign (COPEX) was conducted in Brazil during October to December 2002. COPEX used digital ionosondes, all-sky imagers, GPS receivers, and other complementary instruments at magnetic equatorial and conjugate-point stations in the western longitude sector of Brazil. The campaign objective was to investigate the equatorial spread F/plasma bubble irregularity (ESF) generation conditions in terms of the ambient ionosphere-thermosphere properties along the magnetic flux tubes in which they occur. The COPEX digisonde observations permitted field-line mapping of the conjugate E layers to the dip equatorial F layer peak/bottomside. Other digisondes at eastern longitudes in Brazil complemented these measurements. Our results are based on the analysis of selected data sets, and we address questions concerning: trans-equatorial thermospheric winds and their effect on ESF development; ESF variability under magnetospheric forcing through disturbance electric fields and winds; and the possible role of sporadic E layers in ESF variability.

  6. Updated point spread function simulations for JWST with WebbPSF

    NASA Astrophysics Data System (ADS)

    Perrin, Marshall D.; Sivaramakrishnan, Anand; Lajoie, Charles-Philippe; Elliott, Erin; Pueyo, Laurent; Ravindranath, Swara; Albert, Loïc.

    2014-08-01

    Accurate models of optical performance are an essential tool for astronomers, both for planning scientific observations ahead of time, and for a wide range of data analysis tasks such as point-spread-function (PSF)-fitting photometry and astrometry, deconvolution, and PSF subtraction. For the James Webb Space Telescope, the WebbPSF program provides a PSF simulation tool in a flexible and easy-to-use software package available to the community and implemented in Python. The latest version of WebbPSF adds new support for spectroscopic modes of JWST NIRISS, MIRI, and NIRSpec, including modeling of slit losses and diffractive line spread functions. It also provides additional options for modeling instrument defocus and/or pupil misalignments. The software infrastructure of WebbPSF has received enhancements including improved parallelization, an updated graphical interface, a better configuration system, and improved documentation. We also present several comparisons of WebbPSF simulated PSFs to observed PSFs obtained using JWST's flight science instruments during recent cryovac tests. Excellent agreement to first order is achieved for all imaging modes cross-checked thus far, including tests for NIRCam, FGS, NIRISS, and MIRI. These tests demonstrate that WebbPSF model PSFs have good fidelity to the key properties of JWST's as-built science instruments.
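
    For reference, a typical call into the WebbPSF package looks like the sketch below. The class name, filter attribute, and calc_psf arguments are taken from the package documentation and should be checked against the installed version; the filter and field-of-view values are arbitrary examples.

        import webbpsf

        nircam = webbpsf.NIRCam()                             # instrument model
        nircam.filter = "F200W"                               # choose a filter
        psf = nircam.calc_psf(fov_arcsec=5.0, oversample=4)   # returns a FITS HDUList
        psf.writeto("nircam_f200w_psf.fits", overwrite=True)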

  7. Single shot three-dimensional imaging using an engineered point spread function.

    PubMed

    Berlich, René; Bräuer, Andreas; Stallinga, Sjoerd

    2016-03-21

    A system approach to acquire a three-dimensional object distribution is presented using a compact and cost efficient camera system with an engineered point spread function. The corresponding monocular setup incorporates a phase-only computer-generated hologram in combination with a conventional imaging objective in order to optically encode the axial information within a single two-dimensional image. The object's depth map is calculated using a novel approach based on the power cepstrum of the image. The in-plane RGB image information is restored with an extended depth of focus by applying an adapted Wiener filter. The presented approach is tested experimentally by estimating the three-dimensional distribution of an extended passively illuminated scene. PMID:27136790
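
    The power cepstrum of an image is the inverse Fourier transform of the logarithm of its power spectrum; for a PSF whose lobes separate with defocus, the cepstrum shows a peak at the lag corresponding to the lobe separation, from which depth can be inferred (the exact encoding used in this work is not detailed here). A minimal numpy sketch of the cepstrum itself is given below; the mapping from peak location to depth is calibration-specific and omitted.

        import numpy as np

        def power_cepstrum(image, eps=1e-12):
            """2-D power cepstrum: |IFFT( log(|FFT(image)|^2 + eps) )|^2."""
            spectrum = np.abs(np.fft.fft2(image)) ** 2
            return np.abs(np.fft.ifft2(np.log(spectrum + eps))) ** 2

        img = np.random.rand(256, 256)              # placeholder for a captured frame
        cep = np.fft.fftshift(power_cepstrum(img))  # off-centre peaks encode the lobe separation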

  8. Effects of time response on the point spread function of a scanning radiometer.

    PubMed

    Smith, G L

    1994-10-20

    Scanning radiometers on satellites have a finite response time, because of the detector and the associated electronics. The radiometer measurement as it scans over a point source of radiation of unit strength is the point spread function (PSF). The time response causes a widening and skewing of the PSF. The PSF of a scanning radiometer that has well-focused optics together with time responses for the detector and electronic filter is treated in the time domain. The PSF can be expressed in terms of the system time response to a step input. For a first-order system time response, the displacement of the centroid is the product of the system time constant and the scan rate of the radiometer. The electronic filter further displaces the centroid of the PSF by the product of the scan rate and the filter time constant. Also, the width of the PSF in the scan direction will be increased because of the system time response. The minimum resolvable feature is of the order of the width of the PSF, thus the system time response limits the resolution in the scan direction that can be obtained. The analysis is illustrated by applying it to the Clouds and Earth Radiant Energy System experiment scanning radiometer.
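
    The scan-direction effect described above can be illustrated numerically by convolving a static (optics-only) PSF with a first-order exponential time response expressed in scan angle; the centroid then shifts by approximately the scan rate times the time constant. The Python sketch below uses illustrative numbers, not those of the CERES instrument.

        import numpy as np

        scan_rate = 63.0     # deg/s, illustrative
        tau = 0.008          # s, detector + electronics time constant, illustrative
        dx = 0.01            # deg, angular sample spacing

        x = np.arange(0.0, 5.0, dx)                            # scan angle grid (deg)
        optical_psf = np.exp(-0.5 * ((x - 1.0) / 0.1) ** 2)    # well-focused optics (Gaussian)
        impulse = np.exp(-x / (scan_rate * tau))               # first-order response in scan angle
        impulse /= impulse.sum()

        psf = np.convolve(optical_psf, impulse)[: x.size]      # widened, skewed scan-direction PSF
        centroid_shift = (x * psf).sum() / psf.sum() - 1.0
        print(centroid_shift, "~", scan_rate * tau)            # shift is about scan_rate * tau (deg)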

  9. Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    SciTech Connect

    Marois, C; Lafreniere, D; Macintosh, B; Doyon, R

    2006-02-07

    For ground-based adaptive optics point source imaging, differential atmospheric refraction and flexure introduce a small drift of the point spread function (PSF) with time, and seeing and sky transmission variations modify the PSF flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected companions as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs track precisely the PSF position, its Strehl ratio and its intensity and can thus be used to register and to flux normalize the PSF. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories.

  10. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  11. The point spread function of the human head and its implications for transcranial current stimulation

    NASA Astrophysics Data System (ADS)

    Dmochowski, Jacek P.; Bikson, Marom; Parra, Lucas C.

    2012-10-01

    Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode-cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown.

  12. Point spread function modeling and image restoration for cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe

    2015-03-01

    X-ray cone-beam computed tomography (CT) offers notable advantages in efficiency and precision and is widely used in medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. A general PSF model of cone-beam CT is established; based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practical convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. Supported by the National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), the Young Scientists Fund of the National Natural Science Foundation of China (51105315), the Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and the Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022).
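
    The paper's pre-filtering and pre-segmentation pipeline is not reproduced here. Purely as an illustration of the underlying idea (restoring a degraded projection once a PSF model for the scanning conditions is available), the following minimal NumPy sketch applies a generic frequency-domain Wiener deconvolution; the function name, array shapes and the regularization constant k are assumptions, not the authors' implementation.

        import numpy as np

        def wiener_deconvolve(projection, psf, k=1e-2):
            """Restore a degraded projection given a PSF estimate (generic sketch).

            projection : 2-D array, degraded projection image
            psf        : 2-D array, PSF model for the current scanning conditions
            k          : noise-to-signal regularization constant (assumed value)
            """
            # Zero-pad the PSF to the projection size and shift its center to the origin
            psf_pad = np.zeros_like(projection, dtype=float)
            psf_pad[:psf.shape[0], :psf.shape[1]] = psf
            psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

            H = np.fft.fft2(psf_pad)
            G = np.fft.fft2(projection)
            # Wiener filter: attenuate frequencies where the PSF response is weak
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(F_hat))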

  13. HST/WFC3 UVIS Detector: Dark, Charge Transfer Efficiency, and Point Spread Function Calibrations

    NASA Astrophysics Data System (ADS)

    Bourque, Matthew; Anderson, Jay; Baggett, Sylvia; Bowers, Ariel; MacKenty, John W.; Sahu, Kailash C.

    2015-08-01

    Wide Field Camera 3 (WFC3) is a fourth-generation imaging instrument on board the Hubble Space Telescope (HST) that was installed during Servicing Mission 4 in May 2009. As one of two channels available on WFC3, the UVIS detector is comprised of two e2v CCDs and is sensitive to ultraviolet and visible light. Here we provide updates to the characterization and monitoring of the UVIS performance and stability. We present the long-term growth of the dark current and the hot pixel population, as well as the evolution of Charge Transfer Efficiency (CTE). We also discuss updates to the UVIS dark calibration products, which are used to correct for dark current in science images. We examine the impacts of CTE losses and outline some techniques to mitigate CTE effects during and after observation by use of post-flash and pixel-based CTE corrections. Finally, we summarize an investigation of WFC3/UVIS Point Spread Functions (PSFs) and their potential use for characterizing the focus of the instrument.

  14. Multipath exploitation in through-wall radar imaging via point spread functions.

    PubMed

    Setlur, Pawan; Alli, Giovanni; Nuzzo, Luigia

    2013-12-01

    Due to several sources of multipath in through-wall radar sensing, such as walls, floors, and ceilings, there could exist multipath ghosts associated with a few genuine targets in the synthetic aperture beamformed image. The multipath ghosts are false positives and therefore confusable with genuine targets. Here, we develop a multipath exploitation technique using point spread functions, which associate and map back the multipath ghosts to their genuine targets, thereby increasing the effective signal-to-clutter ratio (SCR) at the genuine target locations. To do so, we first develop a multipath model advocating the Householder transformation, which permits modeling multiple reflections at multiple walls, and also allows for unconventional room/building geometries. Second, closed-form solutions of the multipath ghost locations assuming free space propagation are derived. Third, a nonlinear least squares optimization is formulated and initialized with these free space solutions to localize the multipath ghosts in through-wall radar sensing. The exploitation approach is general and does not require a priori assumptions on the number of targets. The free space multipath ghost locations and exploitation technique derived here may be used as is for multipath exploitation in urban canyons via synthetic aperture radar. Analytical expressions quantifying the SCR gain after multipath exploitation are derived. The analysis is validated with experimental EM results using finite-difference time-domain simulations.
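
    The role of the Householder transformation mentioned above is easiest to see for a single wall: a first-order multipath ghost can be predicted by reflecting the genuine target about the wall plane. A minimal sketch follows, assuming the wall is described by a (hypothetical) unit normal n and offset d; it illustrates the reflection step only, not the paper's full multipath model.

        import numpy as np

        def mirror_about_wall(target, n, d):
            """Reflect a target position about the wall plane n . x = d."""
            n = np.asarray(n, dtype=float)
            n = n / np.linalg.norm(n)
            H = np.eye(3) - 2.0 * np.outer(n, n)       # Householder reflection matrix
            # Reflect about the parallel plane through the origin, then restore the offset
            return H @ np.asarray(target, dtype=float) + 2.0 * d * n

        # Example: predicted ghost of a target in front of a wall at x = 4 m
        ghost = mirror_about_wall([1.5, 2.0, 1.0], n=[1.0, 0.0, 0.0], d=4.0)   # -> [6.5, 2.0, 1.0]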

  15. Point spread functions for earthquake source imaging: an interpretation based on seismic interferometry

    NASA Astrophysics Data System (ADS)

    Nakahara, Hisashi; Haney, Matthew M.

    2015-07-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artefacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green's functions. In particular, the PSF can be related to Green's function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  16. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  17. Imaging samples in silica aerogel using an experimental point spread function.

    PubMed

    White, Amanda J; Ebel, Denton S

    2015-02-01

    Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.

  18. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field; however, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism based on the matched filter is described in detail and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. A new parametric model for small infrared targets is then developed based on the PSF of the imaging system, which can be considered as a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To demonstrate its performance, the proposed model is adopted in the Laplacian scale-space algorithm, a well-known algorithm in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. Quantitative analysis shows at least a 20% improvement in the output SCR values compared with the Laplacian of Gaussian (LoG) algorithm.
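
    As a rough illustration of the SCR-improvement mechanism described above (not the paper's full framework or its scale-space embedding), the sketch below correlates an image with a zero-mean, unit-energy template, which could be built from either a Gaussian model or a PSF-based target model; the SCR definition and function names are assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        def scr(image, target_mask):
            """Signal-to-clutter ratio: peak target excess over background std (assumed definition)."""
            background = image[~target_mask]
            return (image[target_mask].max() - background.mean()) / background.std()

        def matched_filter(image, template):
            """Correlate the image with a zero-mean, unit-energy target template."""
            t = template - template.mean()
            t = t / np.sqrt((t ** 2).sum())
            return fftconvolve(image, t[::-1, ::-1], mode='same')   # flipped kernel = correlation

        # Comparing scr(frame, mask) with scr(matched_filter(frame, psf_template), mask)
        # illustrates the gain obtained by matching the template to the system PSF.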

  19. Image adaptive point-spread function estimation and deconvolution for in vivo confocal microscopy.

    PubMed

    Von Tiedemann, M; Fridberger, A; Ulfendahl, M; Tomo, I; Boutet de Monvel, J

    2006-01-01

    Visualizing deep inside the tissue of a thick biological sample often poses severe constraints on imaging conditions. Standard restoration techniques (denoising and deconvolution) can then be very useful, allowing one to increase the signal-to-noise ratio and the resolution of the images. In this paper, we consider the problem of obtaining a good determination of the point-spread function (PSF) of a confocal microscope, a prerequisite for applying deconvolution to three-dimensional image stacks acquired with this system. Because of scattering and optical distortion induced by the sample, the PSF has to be acquired anew for each experiment. To tackle this problem, we used a screening approach to estimate the PSF adaptively and automatically from the images. Small PSF-like structures were detected in the images, and a theoretical PSF model was reshaped to match the geometric characteristics of these structures. We used numerical experiments to quantify the sensitivity of our detection method, and we demonstrated its usefulness by deconvolving images of the hearing organ acquired in vitro and in vivo.

  20. Imaging samples in silica aerogel using an experimental point spread function.

    PubMed

    White, Amanda J; Ebel, Denton S

    2015-02-01

    Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology. PMID:25517515

  1. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

    Motion due to digital camera movement during image capture is a major factor degrading image quality, and many methods for camera motion removal have been developed. Central to all of these techniques is the correct recovery of what is known as the point spread function (PSF). A popular technique to estimate the PSF relies on a pair of gyroscopic sensors to measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the limited precision of gyro-sensor measurements impede a good-quality restored image. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signal of the two gyro sensors. The PSF coefficients are then updated using 2D Least Mean Squares (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. This refined PSF is used to process the blurred image using known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation, and that the quality of the restored image is improved compared with the gyro-only approach or with blind image deconvolution results.
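
    The coarse-to-fine, gyro-initialized procedure itself is not reproduced here; the sketch below only illustrates the core 2-D LMS idea of nudging PSF coefficients so that the luminance-equalized under-exposed image, convolved with the PSF, approaches the blurred image. The step size mu, the iteration count and the support handling are assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        def lms_refine_psf(sharp, blurred, psf0, mu=1e-3, iters=50):
            """Refine an initial PSF estimate (e.g. from gyro data) with a simple 2-D LMS loop."""
            psf = psf0.astype(float).copy()
            for _ in range(iters):
                pred = fftconvolve(sharp, psf, mode='same')
                err = blurred - pred                              # prediction error
                # The LMS update follows the negative gradient of the squared error, which is
                # proportional to the correlation of the error with the sharp image,
                # restricted here to the PSF support
                grad = fftconvolve(err, sharp[::-1, ::-1], mode='same')
                cy, cx = grad.shape[0] // 2, grad.shape[1] // 2
                hy, hx = psf.shape[0] // 2, psf.shape[1] // 2
                psf += mu * grad[cy - hy:cy - hy + psf.shape[0],
                                 cx - hx:cx - hx + psf.shape[1]]
                psf = np.clip(psf, 0.0, None)                     # keep the PSF non-negative
                s = psf.sum()
                if s > 0:
                    psf /= s                                      # and normalized
            return psf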

  2. Optimized Bayes variational regularization prior for 3D PET images.

    PubMed

    Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

    2014-09-01

    A new prior for variational Maximum a Posteriori regularization is proposed to be used in a 3D One-Step-Late (OSL) reconstruction algorithm accounting also for the Point Spread Function (PSF) of the PET system. The new regularization prior strongly smoothes background regions, while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering and 3D-OSL with a Gauss-Total Variation (GTV) prior. The proposed regularization allows controlling noise, while maintaining good signal recovery; compared to the other algorithms it demonstrates a very good compromise between an improved quantitation and good image quality. PMID:24958594

  3. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  4. Correction for collimator-detector response in SPECT using point spread function template.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A; Dewaraja, Yuni K

    2013-02-01

    Compensating for the collimator-detector response (CDR) in SPECT is important for accurate quantification. The CDR consists of both a geometric response and a septal penetration and collimator scatter response. The geometric response can be modeled analytically and is often used for modeling the whole CDR if the geometric response dominates. However, for radionuclides that emit medium or high-energy photons such as I-131, the septal penetration and collimator scatter response is significant and its modeling in the CDR correction is important for accurate quantification. There are two main methods for modeling the depth-dependent CDR so as to include both the geometric response and the septal penetration and collimator scatter response. One is to fit a Gaussian plus exponential function that is rotationally invariant to the measured point source response at several source-detector distances. However, a rotationally-invariant exponential function cannot represent the star-shaped septal penetration tails in detail. Another is to perform Monte-Carlo (MC) simulations to generate the depth-dependent point spread functions (PSFs) for all necessary distances. However, MC simulations, which require careful modeling of the SPECT detector components, can be challenging and accurate results may not be available for all of the different SPECT scanners in clinics. In this paper, we propose an alternative approach to CDR modeling. We use a Gaussian function plus a 2-D B-spline PSF template and fit the model to measurements of an I-131 point source at several distances. The proposed PSF-template-based approach is nearly non-parametric, captures the characteristics of the septal penetration tails, and minimizes the difference between the fitted and measured CDR at the distances of interest. The new model is applied to I-131 SPECT reconstructions of experimental phantom measurements, a patient study, and a MC patient simulation study employing the XCAT phantom. The proposed model
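
    For reference, the rotationally invariant Gaussian-plus-exponential model that the proposed template-based approach is contrasted with can be fitted to a measured radial profile as in the sketch below; the profile arrays, initial guesses and function names are placeholders (a synthetic profile stands in for a real point-source measurement), not the paper's fit.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss_plus_exp(r, a, sigma, b, tau):
            """Rotationally invariant CDR model: Gaussian core plus exponential tail."""
            return a * np.exp(-r**2 / (2.0 * sigma**2)) + b * np.exp(-r / tau)

        # Radial profile of a point-source measurement at one source-detector distance
        # (synthetic stand-in data; real data would come from an I-131 point source scan)
        r_mm = np.linspace(0.0, 100.0, 200)
        profile = gauss_plus_exp(r_mm, 1.0, 8.0, 0.05, 30.0)

        p0 = (1.0, 10.0, 0.1, 20.0)                     # assumed initial guesses
        params, _ = curve_fit(gauss_plus_exp, r_mm, profile, p0=p0)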

  5. The Effect of Point-spread Function Interaction with Radiance from Heterogeneous Scenes on Multitemporal Signature Analysis. [soybean stress]

    NASA Technical Reports Server (NTRS)

    Duggin, M. J.; Schoch, L. B.

    1984-01-01

    The point-spread function is an important factor in determining the nature of feature types on the basis of multispectral recorded radiance, particularly from heterogeneous scenes and particularly from scenes which are imaged repetitively in order to provide thematic characterization by means of multitemporal signatures. To demonstrate the effect of the interaction of scene heterogeneity with the point spread function (PSF), a template was constructed from the line spread function (LSF) data for the Thematic Mapper photoflight model. The template was moved in 0.25 (nominal) pixel increments in the scan-line direction across three scenes of different heterogeneity. The sensor output was calculated by considering the calculated scene radiance from each scene element occurring between the contours of the PSF template, which was plotted on a movable mylar sheet located at a given position.

  6. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.

  7. X-ray optical systems: from metrology to Point Spread Function

    NASA Astrophysics Data System (ADS)

    Spiga, Daniele; Raimondi, Lorenzo

    2014-09-01

    One of the problems often encountered in X-ray mirror manufacturing is setting proper manufacturing tolerances to guarantee an angular resolution - often expressed in terms of Point Spread Function (PSF) - as needed by the specific science goal. To do this, we need an accurate metrological apparatus, covering a very broad range of spatial frequencies, and an affordable method to compute the PSF from the metrology dataset. In past years, a wealth of methods, based on either geometrical optics or the perturbation theory in the smooth-surface limit, have been proposed to respectively treat long-period profile errors or high-frequency surface roughness. However, the separation between these spectral ranges is difficult to define exactly, and it is also unclear how to affordably combine the PSFs, computed with different methods in different spectral ranges, into a PSF expectation at a given X-ray energy. For this reason, we have proposed a method entirely based on the Huygens-Fresnel principle to compute the diffracted field of real Wolter-I optics, including measured defects over a wide range of spatial frequencies. Owing to the shallow angles at play, the computation can be simplified by limiting it to the longitudinal profiles, neglecting completely the effect of roundness errors. Other authors had already proposed similar approaches in the past, but only in the far-field approximation; therefore, they could not be applied to the case of Wolter-I optics, in which two reflections occur in sequence within a short range. The method we suggest is versatile, as it can be applied to multiple reflection systems, at any X-ray energy, and regardless of the nominal shape of the mirrors in the optical system. The method has been implemented in the WISE code, successfully used to explain the measured PSFs of multilayer-coated optics for astronomical use, and of a K-B optical system in use at the FERMI free electron laser.

  8. Characterization of LBNL SNAP CCD's: Quantum efficiency, reflectivity, and point-spread function

    NASA Astrophysics Data System (ADS)

    Groom, Donald E.; Bebek, C. J.; Fabricius, M.; Fairfield, J. A.; Karcher, A.; Kolbe, W. F.; Roe, N. A.; Steckert, J.

    2006-12-01

    A Quantum Efficiency Machine has been developed at Lawrence Berkeley Lab to measure the quantum efficiency (QE) of the novel thick CCD's planned for use in the Supernova/Acceleration Probe (SNAP) mission. It is conventional, but with significant innovations. The most important of these is that the reference photodiode (PD) is coplanar with the cold CCD inside the dewar. The PD is on a separate heat sink regulated to the PD calibration temperature. The effects of geometry and reflections from the dewar window are eliminated, and since the PD and the CCD are observed simultaneously, light intensity regulation is not an issue. A ``dark box'' provides space between the exit port of the integrating sphere and the CCD dewar, ensuring nearly uniform illumination. It also provides a home for a reflectometer and spot projector, both of which are fed by the alternate beam of the monochromator. The measurement of reflectivity (R) is essential for corroborating the QE measurements, since QE < 1-R everywhere, and QE = 1-R over much of the spectral region. In our reflectometer the light monitor and the CCD carriage are both moved so that no extra mirrors are introduced. The intrinsic point-spread function (PSF) of a CCD is limited by transverse diffusion of the charge carriers as they drift to the potential wells, driven by the electric field produced by the substrate bias potential---hence a bias voltage that is normally several times that needed for total depletion. A precision spot projector is installed in the dark box for the measurements. A PSF rms width of 3.7 ± 0.2 μm is obtained for the 200 μm thick SNAP CCD's biased at 115 V, thus meeting the SNAP design goals. The result agrees with simple theory once the electric field dependence of carrier mobility is taken into account.

  9. Study of the point spread function (PSF) for 123I SPECT imaging using Monte Carlo simulation.

    PubMed

    Cot, A; Sempau, J; Pareto, D; Bullich, S; Pavía, J; Calviño, F; Ros, D

    2004-07-21

    The iterative reconstruction algorithms employed in brain single-photon emission computed tomography (SPECT) allow some quantitative parameters of the image to be improved. These algorithms require accurate modelling of the so-called point spread function (PSF). Nowadays, most in vivo neurotransmitter SPECT studies employ pharmaceuticals radiolabelled with 123I. In addition to an intense line at 159 keV, the decay scheme of this radioisotope includes some higher energy gammas which may have a non-negligible contribution to the PSF. The aim of this work is to study this contribution for two low-energy high-resolution collimator configurations, namely, the parallel and the fan beam. The transport of radiation through the material system is simulated with the Monte Carlo code PENELOPE. We have developed a main program that deals with the intricacies associated with tracking photon trajectories through the geometry of the collimator and detection systems. The simulated PSFs are partly validated with a set of experimental measurements that use the 511 keV annihilation photons emitted by a 18F source. Sensitivity and spatial resolution have been studied, showing that a significant fraction of the detection events in the energy window centred at 159 keV (up to approximately 49% for the parallel collimator) are originated by higher energy gamma rays, which contribute to the spatial profile of the PSF mostly outside the 'geometrical' region dominated by the low-energy photons. Therefore, these high-energy counts are to be considered as noise, a fact that should be taken into account when modelling PSFs for reconstruction algorithms. We also show that the fan beam collimator gives higher signal-to-noise ratios than the parallel collimator for all the source positions analysed.

  10. Study of the point spread function (PSF) for 123I SPECT imaging using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Cot, A.; Sempau, J.; Pareto, D.; Bullich, S.; Pavía, J.; Calviño, F.; Ros, D.

    2004-07-01

    The iterative reconstruction algorithms employed in brain single-photon emission computed tomography (SPECT) allow some quantitative parameters of the image to be improved. These algorithms require accurate modelling of the so-called point spread function (PSF). Nowadays, most in vivo neurotransmitter SPECT studies employ pharmaceuticals radiolabelled with 123I. In addition to an intense line at 159 keV, the decay scheme of this radioisotope includes some higher energy gammas which may have a non-negligible contribution to the PSF. The aim of this work is to study this contribution for two low-energy high-resolution collimator configurations, namely, the parallel and the fan beam. The transport of radiation through the material system is simulated with the Monte Carlo code PENELOPE. We have developed a main program that deals with the intricacies associated with tracking photon trajectories through the geometry of the collimator and detection systems. The simulated PSFs are partly validated with a set of experimental measurements that use the 511 keV annihilation photons emitted by a 18F source. Sensitivity and spatial resolution have been studied, showing that a significant fraction of the detection events in the energy window centred at 159 keV (up to approximately 49% for the parallel collimator) are originated by higher energy gamma rays, which contribute to the spatial profile of the PSF mostly outside the 'geometrical' region dominated by the low-energy photons. Therefore, these high-energy counts are to be considered as noise, a fact that should be taken into account when modelling PSFs for reconstruction algorithms. We also show that the fan beam collimator gives higher signal-to-noise ratios than the parallel collimator for all the source positions analysed.

  11. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  12. 3-D laser radar simulation for autonomous spacecraft landing

    NASA Technical Reports Server (NTRS)

    Reiley, Michael F.; Carmer, Dwayne C.; Pont, W. F.

    1991-01-01

    A sophisticated 3D laser radar sensor simulation, developed and applied to the task of autonomous hazard detection and avoidance, is presented. This simulation includes a backward ray trace to sensor subpixels, incoherent subpixel integration, range dependent noise, sensor point spread function effects, digitization noise, and AM-CW modulation. Specific sensor parameters, spacecraft lander trajectory, and terrain type have been selected to generate simulated sensor data.

  13. Impact of device size and thickness of Al2O3 film on the Cu pillar and resistive switching characteristics for 3D cross-point memory application.

    PubMed

    Panja, Rajeswar; Roy, Sourav; Jana, Debanjan; Maikap, Siddheswar

    2014-12-01

    The impact of device size and Al2O3 film thickness on the Cu pillars and resistive switching memory characteristics of Al/Cu/Al2O3/TiN structures has been investigated for the first time. The memory device size and an Al2O3 thickness of 18 nm are observed in transmission electron microscope images. The 20-nm-thick Al2O3 films have been used for Cu pillar formation (i.e., stronger Cu filaments) in the Al/Cu/Al2O3/TiN structures, which can be used for a three-dimensional (3D) cross-point architecture, as reported previously in Nanoscale Res. Lett. 9:366, 2014. Fifty randomly picked devices with sizes ranging from 8 × 8 to 0.4 × 0.4 μm^2 have been measured. The 8-μm devices show a 100% yield of Cu pillars, whereas only 74% success is observed for the 0.4-μm devices, because smaller devices suffer a stronger Joule heating effect; the larger devices show a long read endurance of 10^5 cycles at a high read voltage of -1.5 V. On the other hand, the resistive switching memory characteristics of the 0.4-μm devices with a 2-nm-thick Al2O3 film are superior to those of both the larger device sizes and the thicker (10 nm) Al2O3 film, owing to the higher Cu diffusion rate for the larger size and thicker Al2O3 film. Consequently, a higher device-to-device uniformity of 88% and a lower average RESET current of approximately 328 μA are observed for the 0.4-μm devices with a 2-nm-thick Al2O3 film. The data retention capability of our memory device of >48 h makes it promising for future nanoscale nonvolatile applications. This conductive bridging resistive random access memory (CBRAM) device is forming-free at a current compliance (CC) of 30 μA (even at a lowest CC of 0.1 μA) and an operation voltage of ±3 V, with a high resistance ratio of >10^4. PMID:26088986

  14. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  15. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. An effective method to verify line and point spread functions measured in computed tomography

    SciTech Connect

    Ohkubo, Masaki; Wada, Sinichi; Matsumoto, Toru; Nishizawa, Kanae

    2006-08-15

    This study describes an effective method for verifying line spread function (LSF) and point spread function (PSF) measured in computed tomography (CT). The CT image of an assumed object function is known to be calculable using LSF or PSF based on a model for the spatial resolution in a linear imaging system. Therefore, the validities of LSF and PSF would be confirmed by comparing the computed images with the images obtained by scanning phantoms corresponding to the object function. Differences between computed and measured images will depend on the accuracy of the LSF and PSF used in the calculations. First, we measured LSF in our scanner, and derived the two-dimensional PSF in the scan plane from the LSF. Second, we scanned the phantom including uniform cylindrical objects parallel to the long axis of a patient's body (z direction). Measured images of such a phantom were characterized according to the spatial resolution in the scan plane, and did not depend on the spatial resolution in the z direction. Third, images were calculated by two-dimensionally convolving the true object as a function of space with the PSF. As a result of comparing computed images with measured ones, good agreement was found and was demonstrated by image subtraction. As a criterion for evaluating quantitatively the overall differences of images, we defined the normalized standard deviation (SD) in the differences between computed and measured images. These normalized SDs were less than 5.0% (ranging from 1.3% to 4.8%) for three types of image reconstruction kernels and for various diameters of cylindrical objects, indicating the high accuracy of PSF and LSF that resulted in successful measurements. Further, we also obtained another LSF utilizing an inappropriate manner, and calculated the images as above. This time, the computed images did not agree with the measured ones. The normalized SDs were 6.0% or more (ranging from 6.0% to 13.8%), indicating the inaccuracy of the PSF and LSF. We
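
    A minimal sketch of the verification idea, assuming the in-plane PSF and the object function are available as 2-D arrays: convolve, subtract the measured image, and report a normalized SD of the difference. The normalization used below (relative to the object's contrast range) is an assumption, since the paper's exact definition is not restated here.

        import numpy as np
        from scipy.signal import fftconvolve

        def simulate_and_compare(object_fn, psf, measured):
            """Convolve an assumed object function with the in-plane PSF and
            compare against a measured CT image of the corresponding phantom."""
            computed = fftconvolve(object_fn, psf, mode='same')
            diff = computed - measured
            # Normalization is an assumption: SD of the difference relative to the
            # object's own contrast range, expressed in percent
            norm_sd = 100.0 * diff.std() / (object_fn.max() - object_fn.min())
            return computed, norm_sd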

  17. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  18. Invariant joint distribution of a stationary random field and its derivatives: Euler characteristic and critical point counts in 2 and 3D

    SciTech Connect

    Pogosyan, Dmitry; Gay, Christophe; Pichon, Christophe

    2009-10-15

    The full moments expansion of the joint probability distribution of an isotropic random field, its gradient, and invariants of the Hessian are presented in 2 and 3D. It allows for explicit expression for the Euler characteristic in ND and computation of extrema counts as functions of the excursion set threshold and the spectral parameter, as illustrated on model examples.

  19. Novel 3D light microscopic analysis of IUGR placentas points to a morphological correlate of compensated ischemic placental disease in humans

    PubMed Central

    Haeussner, Eva; Schmitz, Christoph; Frank, Hans-Georg; Edler von Koch, Franz

    2016-01-01

    The villous tree of the human placenta is a complex three-dimensional (3D) structure with branches and nodes at the feto-maternal border in the key area of gas and nutrient exchange. Recently we introduced a novel, computer-assisted 3D light microscopic method that enables 3D topological analysis of branching patterns of the human placental villous tree. In the present study we applied this novel method to the 3D architecture of peripheral villous trees of placentas from patients with intrauterine growth retardation (IUGR placentas), a severe obstetric syndrome. We found that the mean branching angle of branches in terminal positions of the villous trees was significantly different statistically between IUGR placentas and clinically normal placentas. Furthermore, the mean tortuosity of branches of villous trees in directly preterminal positions was significantly different statistically between IUGR placentas and clinically normal placentas. We show that these differences can be interpreted as consequences of morphological adaptation of villous trees between IUGR placentas and clinically normal placentas, and may have important consequences for the understanding of the morphological correlates of the efficiency of the placental villous tree and their influence on fetal development. PMID:27045698

  20. Point-spread function associated with underwater imaging through a wavy air-water interface: theory and laboratory tank experiment.

    PubMed

    Brown, W C; Majumdar, A K

    1992-12-20

    The point-spread function needed for imaging underwater objects is theoretically derived and compared with experimental results. The theoretical development is based on the emergent-ray model, in which the Gram-Charlier series for the non-Gaussian probability-density function for emergent angles through a wavy water surface was assumed. To arrive at the point-spread model, we used a finite-element methodology with emergent-ray angular probability distributions as fundamental building functions. The model is in good agreement with the experiment for downwind conditions. A slight deviation between theory and experiment was observed for the crosswind case; this deviation may be caused by the possible interaction of standing waves with the original air-ruffled capillary waves that were not taken into account in the model.

  1. The three-dimensional point spread function of aberration-corrected scanning transmission electron microscopy.

    PubMed

    Lupini, Andrew R; de Jonge, Niels

    2011-10-01

    Aberration correction reduces the depth of field in scanning transmission electron microscopy (STEM) and thus allows three-dimensional (3D) imaging by depth sectioning. This imaging mode offers the potential for sub-Ångstrom lateral resolution and nanometer-scale depth sensitivity. For biological samples, which may be many microns across and where high lateral resolution may not always be needed, optimizing the depth resolution even at the expense of lateral resolution may be desired, aiming to image through thick specimens. Although there has been extensive work examining and optimizing the probe formation in two dimensions, there is less known about the probe shape along the optical axis. Here the probe shape is examined in three dimensions in an attempt to better understand the depth resolution in this mode. Examples are presented of how aberrations change the probe shape in three dimensions, and it is found that off-axial aberrations may need to be considered for focal series of large areas. It is shown that oversized or annular apertures theoretically improve the vertical resolution for 3D imaging of nanoparticles. When imaging nanoparticles of several nanometer size, regular STEM can thereby be optimized such that the vertical full-width at half-maximum approaches that of the aberration-corrected STEM with a standard aperture.

  2. The spreading of a granular column from a Bingham point of view

    NASA Astrophysics Data System (ADS)

    Josserand, C.; Lagrée, P.-Y.; Lhuillier, D.; Popinet, S.; Ray, P.; Staron, L.

    2009-06-01

    The collapse and spreading of granular columns has been the subject of sustained interest in recent years from both the mechanical and geophysical communities. Yet, in spite of this intensive research, the rheology that would allow a reliable continuum modeling of the dynamics of granular column collapse is still open to discussion. Essentially, continuum models rely on the shallow-water approximation, in which dissipation and sedimentation processes are taken into account through the introduction of ad hoc laws. However, the rheological origin of the experimental scaling laws exhibited by granular columns as they spread remains unclear. On these grounds, we adopt an alternative approach consisting of studying the collapse of columns of material obeying a Bingham rheology. We therefore carried out a series of numerical simulations using the Gerris Flow Solver, solving the time-dependent incompressible Navier-Stokes equations in two dimensions for the specified rheology. We first check that the mass exhibits scaling laws similar to those shown by granular columns. We then investigate to what extent the rheological parameters are reflected in these scaling laws. A comparative analysis of Bingham and granular flow characteristics ensues.

  3. Point-spread function of the ocean color bands of the Moderate Resolution Imaging Spectroradiometer on Aqua.

    PubMed

    Meister, Gerhard; McClain, Charles R

    2010-11-10

    The Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua platform has nine spectral bands with center wavelengths from 412 to 870 nm that are used to produce the standard ocean color data products. Ocean scenes usually contain high contrast due to the presence of bright clouds over dark water. About half of the MODIS Aqua ocean pixels are flagged as spatial stray light contaminated. The MODIS has been characterized for stray light effects prelaunch. In this paper, we derive point-spread functions for the MODIS Aqua ocean bands based on prelaunch line-spread function measurements. The stray light contamination of ocean scenes is evaluated based on artificial test scenes and on-orbit data.

  4. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  5. Monte Carlo simulation of light scattering in the atmosphere and effect of atmospheric aerosols on the point spread function.

    PubMed

    Colombi, Joshua; Louedec, Karim

    2013-11-01

    We present a Monte Carlo simulation for the scattering of light in the case of an isotropic light source. The scattering phase functions are studied particularly in detail to understand how they can affect the multiple light scattering in the atmosphere. We show that, although aerosols are usually in lower density than molecules in the atmosphere, they can have a non-negligible effect on the atmospheric point spread function. This effect is especially expected for ground-based detectors when large aerosols are present in the atmosphere.
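
    One building block of such a simulation is drawing scattering angles from a phase function. The sketch below samples the Henyey-Greenstein phase function, a common one-parameter model for aerosol scattering; the choice of Henyey-Greenstein and of the asymmetry parameter g are assumptions here, not necessarily the phase functions studied in the paper.

        import numpy as np

        def sample_hg_cos_theta(g, rng, n=1):
            """Draw cos(theta) samples from the Henyey-Greenstein phase function."""
            u = rng.random(n)
            if abs(g) < 1e-6:
                return 2.0 * u - 1.0                        # isotropic limit
            frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)  # standard inverse-CDF sampling
            return (1.0 + g * g - frac * frac) / (2.0 * g)

        rng = np.random.default_rng(0)
        cos_theta = sample_hg_cos_theta(g=0.7, rng=rng, n=100_000)   # forward-peaked aerosol case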

  6. Extending depth of field for hybrid imaging systems via the use of both dark and dot point spread functions.

    PubMed

    Nhu, L V; Fan, Zhigang; Chen, Shouqian; Dang, Fanyang

    2016-09-10

    In this paper, we propose one method based on the use of both dark and dot point spread functions (PSFs) to extend depth of field in hybrid imaging systems. Two different phase modulations of two phase masks are used to generate both dark and dot PSFs. The quartic phase mask (QPM) is used to generate the dot PSF. A combined phase mask between the QPM and the angle for generating the dark PSF is investigated. The simulation images show that the proposed method can produce superior imaging performance of hybrid imaging systems in extending the depth of field. PMID:27661372
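
    As a generic illustration of how a phase mask shapes a PSF (not the paper's specific masks or the combined mask for the dark PSF), the Fourier-optics sketch below forms a PSF as the squared magnitude of the Fourier transform of a circular pupil carrying a quartic, QPM-like phase; the mask strength alpha and the sampling are assumptions.

        import numpy as np

        N = 256
        x = np.linspace(-1.0, 1.0, N)
        X, Y = np.meshgrid(x, x)
        pupil = (X**2 + Y**2 <= 1.0).astype(float)       # circular aperture

        alpha = 30.0                                     # assumed mask strength (radians)
        phase = alpha * (X**4 + Y**4)                    # quartic, QPM-like phase profile

        field = pupil * np.exp(1j * phase)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
        psf /= psf.sum()                                 # normalized incoherent PSF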

  7. Extending depth of field for hybrid imaging systems via the use of both dark and dot point spread functions.

    PubMed

    Nhu, L V; Fan, Zhigang; Chen, Shouqian; Dang, Fanyang

    2016-09-10

    In this paper, we propose one method based on the use of both dark and dot point spread functions (PSFs) to extend depth of field in hybrid imaging systems. Two different phase modulations of two phase masks are used to generate both dark and dot PSFs. The quartic phase mask (QPM) is used to generate the dot PSF. A combined phase mask between the QPM and the angle for generating the dark PSF is investigated. The simulation images show that the proposed method can produce superior imaging performance of hybrid imaging systems in extending the depth of field.

  8. 3D Modeling By Consolidation Of Independent Geometries Extracted From Point Clouds - The Case Of The Modeling Of The Turckheim's Chapel (Alsace, France)

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Fabre, Ph.; Schlussel, B.

    2014-06-01

    Turckheim is a small town located in Alsace, in north-east France. In the heart of the Alsatian vineyards, this town has many historical monuments, including its old church. To understand the relevance of the project described in this paper, it is important to look at the history of this church, since many historical events explain its renovation and even its partial reconstruction. The first mention of a Christian sanctuary in Turckheim dates back to 898. It was replaced in the 12th century by a Romanesque church (chapel), which subsists today as the bell tower. Struck by lightning in 1661, the tower was subsequently enhanced. In 1736, it was repaired following damage sustained in a tornado. In 1791, the town installed an organ in the church. As a final milestone, the church was destroyed by fire in 1978. The organ, like the heart of the church, then had to be restored again (1983) with a simplified architecture. Of this rich and eventful past, unfortunately and as is often the case, only very few documents and little information remain available, apart from facts stated in some sporadic writings. As regards the geometry, positioning and physical characteristics of the initial building, there is very little indication. Some hypotheses about positions and footprints have indeed been put forward by various historians and archaeologists. The acquisition and 3D modeling project must therefore record the current state of the edifice to serve as the basis for new investigations and for the generation of new hypotheses on the locations and historical shapes of this church and its original chapel (Fig. 1).

  9. Laser guide star adaptive optics point spread function reconstruction project at W. M. Keck Observatory: preliminary on-sky results

    NASA Astrophysics Data System (ADS)

    Jolissaint, Laurent; Ragland, Sam; Wizinowich, Peter; Bouxin, Audrey

    2014-07-01

    In this paper we present an analysis of our preliminary results for point spread function reconstruction in laser guide star (LGS) mode for the Keck-II adaptive optics system. Our approach is based on updating the natural guide star algorithm with the LGS terms. The first reconstruction is based on a set of 13 LGS runs (telemetry data and sky PSFs), for which we already demonstrate a significant correlation between the reconstructed and sky PSF metrics. At this point of the project, though, our reconstructed PSF does not reproduce the sky PSF features (and this is expected): we discuss why, describe the different issues we have to solve, and outline the experiments we will carry out in order to achieve a good reconstruction.

  10. Imaging performance of annular apertures. IV - Apodization and point spread functions. V - Total and partial energy integral functions

    NASA Technical Reports Server (NTRS)

    Tschunko, H. F. A.

    1983-01-01

    Reference is made to a study by Tschunko (1979) which discussed how apodization modifies the modulation transfer function for various central obstruction ratios. It is shown here how apodization, together with the central obstruction ratio, modifies the point spread function, which is the basic element for the comparison of imaging performance and for the derivation of energy integrals and other functions. At high apodization levels and low central obstructions (less than 0.1), new extended radial zones are formed in the outer part of the central ring groups. These transmutations of the image functions are of more than theoretical interest, especially if the irradiance levels in the outer ring zones are to be compared to the background irradiance levels. Attention is then given to the energy distribution in point images generated by annular apertures apodized by various transmission functions. The total energy functions are derived, partial energy integrals are determined, and background irradiance functions are discussed.
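
    For reference, the unapodized annular-aperture point spread function and its partial (encircled) energy integral can be computed directly from the standard Airy-type expression. The sketch below is a minimal illustration with an assumed obstruction ratio; apodization would enter as an additional radial weighting of the pupil and is not included here.

```python
# Hedged sketch: on-axis PSF and partial (encircled) energy of an unapodized
# annular aperture with central obstruction ratio eps.
import numpy as np
from scipy.special import j1

eps = 0.3                                   # central obstruction ratio, illustrative
v = np.linspace(1e-6, 40.0, 4000)           # dimensionless radial coordinate

def airy(u):
    return 2.0 * j1(u) / u

amp = (airy(v) - eps**2 * airy(eps * v)) / (1.0 - eps**2)
psf = amp**2                                # normalised so psf -> 1 on axis

# partial energy integral: fraction of energy inside radius v (finite-range approximation)
integrand = psf * v
cum = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v))))
encircled = cum / cum[-1]
```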

  11. Accurate 3D point cloud comparison and volumetric change analysis of Terrestrial Laser Scan data in a hard rock coastal cliff environment

    NASA Astrophysics Data System (ADS)

    Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.

    2013-12-01

    Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have only recently begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets on a complex three-dimensional surface, such as occlusion due to surface roughness and the positioning of the data capture point in a constantly changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10-20 cm. Meshing techniques are often used for point cloud analysis of simple surfaces, but for surfaces such as rocky cliff faces this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe is missed by such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares point clouds using surface normals that are consistent with the surface roughness, and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion, located on the UK's south-west peninsula at Porthleven in south-west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in
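
    The core of the M3C2 measurement can be sketched as follows: estimate a normal at each core point at a user-defined scale, then compare the mean positions of the two epochs' points inside a cylinder oriented along that normal. The code below is an illustrative simplification with assumed parameter names and radii, not the implementation used in the study.

```python
# Hedged sketch of the core M3C2 idea (Lague et al., 2013).
import numpy as np
from scipy.spatial import cKDTree

def m3c2_distance(core, cloud1, cloud2, normal_scale=0.5, cyl_radius=0.25, cyl_half_len=2.0):
    tree1, tree2 = cKDTree(cloud1), cKDTree(cloud2)
    dists = np.full(len(core), np.nan)
    for i, p in enumerate(core):
        # normal from local PCA of epoch-1 neighbours at the normal scale
        nb = cloud1[tree1.query_ball_point(p, normal_scale)]
        if len(nb) < 10:
            continue
        _, _, vt = np.linalg.svd(nb - nb.mean(axis=0))
        n = vt[-1]                                    # smallest-variance direction
        means = []
        for cloud, tree in ((cloud1, tree1), (cloud2, tree2)):
            cand = cloud[tree.query_ball_point(p, cyl_half_len + cyl_radius)]
            rel = cand - p
            along = rel @ n
            radial = np.linalg.norm(rel - np.outer(along, n), axis=1)
            inside = along[(np.abs(along) <= cyl_half_len) & (radial <= cyl_radius)]
            means.append(inside.mean() if len(inside) else np.nan)
        dists[i] = means[1] - means[0]                # signed change along the normal
    return dists
```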

  12. B4 2 After, 3D Deformation Field From Matching Pre- To Post-Event Aerial LiDAR Point Clouds, The 2010 El Mayor-Cucapah M7.2 Earthquake Case

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Limon-Tirado, J. F.; Arrowsmith, R.; Krishnan, A.; Saripalli, S.; Oskin, M. E.; Glennie, C. L.; Arregui, S. M.; Fletcher, J. M.; Teran, O. J.

    2013-05-01

    Aerial LiDAR surveys reconstruct the sinuosity of terrain relief with remarkable fidelity. In this research we explore the 3D surface deformation field produced by a large (M7.2) earthquake by comparing pre- and post-event aerial LiDAR point clouds. The April 4, 2010 earthquake produced a NW-SE surface rupture ~110 km long, with right-lateral normal slip up to 3 m in magnitude, over a very favorable target: the scarcely vegetated and unaltered desert mountain ranges of the sierras El Mayor and Cucapah in northern Baja California, close to the US-México border, in the plate boundary region between the Pacific and North American plates. The pre-event LiDAR, with its lower point density (0.013-0.033 pts m-2), required filtering and post-processing before comparison with the denser (9-18 pts m-2), more accurate post-event dataset. The 3D surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid body transformation (comprising translations and rotations) that best aligns the pre-event points to the post-event points. When the pre- and post-event point clouds were perturbed independently with synthetic right-lateral, reverse displacements of known magnitude along a proposed fault, ICP recovered the synthetically introduced translations. Windows with dimensions of 100-200 m gave the best results for datasets with these densities. The simplified surface rupture, photo-interpreted and mapped in the field, delineates very well the vertical displacement patterns unveiled by ICP. The method revealed block rotations along the simplified surface rupture, some clockwise and others counterclockwise. As ground truth, the displacements from ICP have values similar to those measured in the field along the main rupture by Fletcher and collaborators. The vertical component was better estimated than the
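
    A minimal version of the per-window alignment can be written with a nearest-neighbour search and a Kabsch rigid-body fit. The sketch below is illustrative only: it assumes the pre- and post-event points are already cropped to one window and ignores the robustness features (outlier rejection, correspondence weighting) of the PCL implementation.

```python
# Hedged sketch of windowed ICP: iterate nearest-neighbour matching and a rigid
# (rotation + translation) Kabsch alignment of pre-event onto post-event points.
import numpy as np
from scipy.spatial import cKDTree

def icp_window(pre, post, n_iter=30):
    tree = cKDTree(post)
    R, t = np.eye(3), np.zeros(3)
    cur = pre.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)                  # closest post-event point to each pre point
        src, dst = cur, post[idx]
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:                 # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = dst.mean(0) - Ri @ src.mean(0)
        cur = (Ri @ cur.T).T + ti
        R, t = Ri @ R, Ri @ t + ti                # accumulate the window transform
    return R, t                                   # t approximates the local displacement
```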

  13. Experimental and numerical investigation of feed-point parameters in a 3-D hyperthermia applicator using different FDTD models of feed networks.

    PubMed

    Nadobny, Jacek; Fähling, Horst; Hagmann, Mark J; Turner, Paul F; Wlodarczyk, Waldemar; Gellermann, Johanna M; Deuflhard, Peter; Wust, Peter

    2002-11-01

    Experimental and numerical methods were used to determine the coupling of energy in a multichannel three-dimensional hyperthermia applicator (SIGMA-Eye), consisting of 12 short dipole antenna pairs with stubs for impedance matching. The relationship between the amplitudes and phases of the forward waves from the amplifiers and the resulting amplitudes and phases at the antenna feed-points was determined in terms of interaction matrices. Three measuring methods were used: 1) a differential probe soldered directly at the antenna feed-points; 2) an E-field sensor placed near the feed-points; and 3) measurements at the amplifier outputs. The measured data were compared with finite-difference time-domain (FDTD) calculations made with three different models. The first model assumes that single antennas are fed independently. The second model simulates antenna pairs connected to the transmission lines. The measured data correlate best with the latter FDTD model, resulting in an improvement of more than 20% and 20 degrees (average difference in amplitudes and phases) when compared with the two simpler FDTD models.

  14. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time, even though 3D standards are currently being defined. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video or not, and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies statistics from the distribution of the positional differences between corresponding features to detect the existence of a 3D format and to identify the format. Second, we present how to detect the frame-sequential 3D format, in which the feature points oscillate from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features are oscillating. Experiments show the effectiveness of our method.
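
    A side-by-side check of the kind described above can be sketched with off-the-shelf feature matching: split the frame into halves, match features between them, and test whether the positional differences look like stereo disparities (mostly horizontal offsets with small vertical scatter). The function name and the thresholds below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a side-by-side 3D format check using ORB feature matching.
import cv2
import numpy as np

def looks_side_by_side(frame_gray, min_matches=50):
    h, w = frame_gray.shape
    left, right = frame_gray[:, : w // 2], frame_gray[:, w // 2 :]
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)
    if des1 is None or des2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < min_matches:
        return False
    dx = np.array([kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in matches])
    dy = np.array([kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1] for m in matches])
    # stereo half-images: offsets are mostly horizontal, with little vertical scatter
    return bool(np.std(dy) < 2.0 and np.std(dx) > np.std(dy))
```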

  15. Automatic determination of trunk diameter, crown base and height of Scots pine (Pinus sylvestris L.) based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish Title: Automatyczne okreslanie srednicy pnia, podstawy korony oraz wysokosci sosny zwyczajnej (Pinus silvestris L.) na podstawie analiz chmur punktow 3D pochodzacych z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    The rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, number of trunk shapes) and of tree and lumber size (tree volume), is slowly becoming common practice. In addition to measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds towards the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed algorithms (GNOM), the locations of tree trunks within the circular research plot were determined, and measurements were performed covering the DBH (at 1.3 m), further trunk diameters at different heights, the base of the tree crown, the volume of the tree trunk (by the sectional measurement method) and the tree crown. Research work was performed in the Niepolomice Forest in a pure pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m, within which there were 16 pine trees (14 of them subsequently cut down). The stand had a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically using the GNOM algorithm with an accuracy of +2.1% compared to the reference measurement with a DBH measurement device. The mean absolute measurement error in the point cloud - using the semi-automatic "PIXEL" (point-to-point) and "PIPE" (cylinder-fitting) methods in FARO Scene 5.x
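
    The geometric core of an automatic DBH measurement is simple to illustrate: take a thin horizontal slice of trunk points at breast height and fit a circle to it. The least-squares (Kasa) fit below only illustrates that idea, with assumed parameter names; it is not the GNOM implementation.

```python
# Hedged sketch: DBH from a circle fit to a slice of trunk points at 1.3 m.
import numpy as np

def dbh_from_slice(points, ground_z, slice_half_width=0.05):
    """points: (N, 3) trunk points; ground_z: terrain height at the trunk base."""
    z = points[:, 2] - ground_z
    sl = points[np.abs(z - 1.3) <= slice_half_width]       # points near breast height
    x, y = sl[:, 0], sl[:, 1]
    # Kasa circle fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return 2.0 * radius                                     # diameter at breast height
```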

  16. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA, including the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis, as well as visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  17. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters, the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at a 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA, including the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection, this software supports feature matching to obtain similarity matrices for performance analysis, as well as visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
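
    The PCA feature-projection step described above can be sketched as follows: stack the normalized face data (for example the Z coordinates of each aligned face) into vectors, learn the principal axes from a training set, and compare two faces by the distance between their projected feature vectors. This is a minimal illustration only; the ICP normalization, surface deformation and FLDA stages are omitted.

```python
# Hedged sketch of PCA feature projection for face comparison.
import numpy as np

def fit_pca(train_faces, n_components=50):
    """train_faces: (n_faces, n_features) matrix of vectorized, normalized faces."""
    mean = train_faces.mean(axis=0)
    _, _, vt = np.linalg.svd(train_faces - mean, full_matrices=False)
    return mean, vt[:n_components]           # mean face and principal axes

def project(face, mean, axes):
    return axes @ (face - mean)               # feature vector in PCA space

def similarity(face_a, face_b, mean, axes):
    fa, fb = project(face_a, mean, axes), project(face_b, mean, axes)
    return -np.linalg.norm(fa - fb)            # higher = more similar
```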

  18. Estimation of spreading fire geometrical characteristics using near infrared stereovision

    NASA Astrophysics Data System (ADS)

    Rossi, L.; Toulouse, T.; Akhloufi, M.; Pieri, A.; Tison, Y.

    2013-03-01

    In fire research and forest firefighting, there is a need for robust metrological systems able to estimate the geometrical characteristics of outdoor spreading fires. In recent years, there has been increasing interest within wildfire research in developing non-destructive techniques based on computer vision. This paper presents a new approach for the estimation of fire geometrical characteristics using near-infrared stereovision. Spreading fire information such as position, rate of spread, height and surface is estimated from the computed 3D fire points. The proposed system can track fire spreading over a ground area of 5 m x 10 m. Keywords: near infrared, stereovision, spreading fire, geometrical characteristics

  19. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivity have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement to develop the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential as a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  20. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  1. Photon efficient double-helix PSF microscopy with application to 3D photo-activation localization imaging

    PubMed Central

    Grover, Ginni; Quirin, Sean; Fiedler, Callie; Piestun, Rafael

    2011-01-01

    We present a double-helix point spread function (DH-PSF) based three-dimensional (3D) microscope with efficient photon collection using a phase mask fabricated by gray-level lithography. The system using the phase mask more than doubles the efficiency of current liquid crystal spatial light modulator implementations. We demonstrate the phase mask DH-PSF microscope for 3D photo-activation localization microscopy (PM-DH-PALM) over an extended axial range. PMID:22076263

  2. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high-contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separations over the algorithm previously used.
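
    Within one subsection, the optimal linear combination described above reduces to an ordinary least-squares problem. The sketch below illustrates only that step, with assumed array shapes; frame selection, subsection geometry and the ADI bookkeeping are omitted.

```python
# Hedged sketch: optimal reference PSF for one image subsection via least squares.
import numpy as np

def optimal_reference(target_sec, ref_secs):
    """target_sec: (npix,) target pixels in one subsection.
    ref_secs: (nref, npix) the same subsection extracted from each reference frame."""
    A = ref_secs.T                              # npix x nref design matrix
    coeffs, *_ = np.linalg.lstsq(A, target_sec, rcond=None)
    reference = A @ coeffs                      # optimal linear combination of references
    residual = target_sec - reference           # speckle-subtracted subsection
    return coeffs, residual
```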

  3. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.

  4. Measuring a charge-coupled device point spread function. Euclid visible instrument CCD273-84 PSF performance

    NASA Astrophysics Data System (ADS)

    Niemi, Sami-Matias; Cropper, Mark; Szafraniec, Magdalena; Kitching, Thomas

    2015-06-01

    In this paper we present the testing of a back-illuminated development Euclid Visible Instrument (VIS) Charge-Coupled Device (CCD) to measure the intrinsic CCD Point Spread Function (PSF) characteristics using a novel modelling technique. We model the optical spot projection system and the CCD273-84 PSF jointly, fitting the model by sampling the Bayesian posterior probability density function over all available data simultaneously. The generative model fitting is shown, using simulated data, to allow good parameter estimation even when the data are not well sampled. Using the available spot data we characterise the CCD273-84 PSF as a function of wavelength and intensity. The CCD PSF kernel size was found to increase with increasing intensity and decreasing wavelength.

  5. Extracting full-field dynamic strain on a wind turbine rotor subjected to arbitrary excitations using 3D point tracking and a modal expansion technique

    NASA Astrophysics Data System (ADS)

    Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter

    2015-09-01

    Health monitoring of rotating structures such as wind turbines and helicopter rotors is generally performed using conventional sensors that provide a limited set of data at discrete locations near or on the hub. These sensors usually provide no data on the blades or inside them where failures might occur. Within this paper, an approach was used to extract the full-field dynamic strain on a wind turbine assembly subject to arbitrary loading conditions. A three-bladed wind turbine having 2.3-m long blades was placed in a semi-built-in boundary condition using a hub, a machining chuck, and a steel block. For three different test cases, the turbine was excited using (1) pluck testing, (2) random impacts on blades with three impact hammers, and (3) random excitation by a mechanical shaker. The response of the structure to the excitations was measured using three-dimensional point tracking. A pair of high-speed cameras was used to measure displacement of optical targets on the structure when the blades were vibrating. The measured displacements at discrete locations were expanded and applied to the finite element model of the structure to extract the full-field dynamic strain. The results of the paper show an excellent correlation between the strain predicted using the proposed approach and the strain measured with strain-gages for each of the three loading conditions. The approach used in this paper to predict the strain showed higher accuracy than the digital image correlation technique. The new expansion approach is able to extract dynamic strain all over the entire structure, even inside the structure beyond the line of sight of the measurement system. Because the method is based on a non-contacting measurement approach, it can be readily applied to a variety of structures having different boundary and operating conditions, including rotating blades.
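
    The expansion step described above amounts to a least-squares fit of modal coordinates from the measured target displacements, followed by re-expansion onto the full finite-element model. The sketch below shows that step with assumed array shapes; the optical target tracking, the mode-shape extraction and the strain recovery from the expanded displacement field are not included.

```python
# Hedged sketch of modal expansion from sparse measurements to the full FE model.
import numpy as np

def expand_to_full_field(x_measured, phi_measured, phi_full):
    """x_measured: (m,) displacements at the measured DOFs.
    phi_measured: (m, n_modes) mode shapes at the measured DOFs.
    phi_full: (N, n_modes) mode shapes at all FE DOFs."""
    q, *_ = np.linalg.lstsq(phi_measured, x_measured, rcond=None)  # modal coordinates
    return phi_full @ q                                            # full-field displacement
```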

  6. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  7. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  8. Intratumoral spread of wild-type adenovirus is limited after local injection of human xenograft tumors: virus persists and spreads systemically at late time points.

    PubMed

    Sauthoff, Harald; Hu, Jing; Maca, Cielo; Goldman, Michael; Heitner, Sheila; Yee, Herman; Pipiya, Teona; Rom, William N; Hay, John G

    2003-03-20

    Oncolytic replicating adenoviruses are a promising new modality for the treatment of cancer. Despite the assumed biologic advantage of continued viral replication and spread from infected to uninfected cancer cells, early clinical trials demonstrate that the efficacy of current vectors is limited. In xenograft tumor models using immune-incompetent mice, wild-type adenovirus is also rarely able to eradicate established tumors. This suggests that innate immune mechanisms may clear the virus or that barriers within the tumor prevent viral spread. The aim of this study was to evaluate the kinetics of viral distribution and spread after intratumoral injection of virus in a human tumor xenograft model. After intratumoral injection of wild-type virus, high levels of titratable virus persisted within the xenograft tumors for at least 8 weeks. Virus distribution within the tumors as determined by immunohistochemistry was patchy, and virus-infected cells appeared to be flanked by tumor necrosis and connective tissue. The close proximity of virus-infected cells to the tumor-supporting structure, which is of murine origin, was clearly demonstrated using a DNA probe that specifically hybridizes to the B1 murine DNA repeat. Importantly, although virus was cleared from the circulation 6 hr after intratumoral injection, after 4 weeks systemic spread of virus was detected. In addition, vessels of infected tumors were surrounded by necrosis and an advancing rim of virus-infected tumor cells, suggesting reinfection of the xenograft tumor through the vasculature. These data suggest that human adenoviral spread within tumor xenografts is impaired by murine tumor-supporting structures. In addition, there is evidence for continued viral replication within the tumor, with subsequent systemic dissemination and reinfection of tumors via the tumor vasculature. Despite the limitations of immune-incompetent models, an understanding of the interactions between the virus and the tumor

  9. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range-measurement imaging using the stereo technique in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, on e.g. a PC in real time. In order to obtain high-resolution, quantitative data from the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig, or, in the case of a moving camera, the scene itself can be used to calibrate most of the parameters. After calibration an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper presents results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
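
    The basic range equations behind such stereo processing relate depth, and the depth resolution per pixel of disparity error, to the focal length, baseline and measured disparity of a calibrated, rectified pair. The sketch below is a minimal illustration with made-up numbers; it is not the authors' processing chain.

```python
# Hedged sketch: depth and depth resolution from disparity in a rectified stereo pair.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (m) of a point seen with positive disparity (px): z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(z_m, focal_px, baseline_m):
    """Depth change (m) per pixel of disparity error: dz = z^2 / (f * B)."""
    return z_m**2 / (focal_px * baseline_m)

print(depth_from_disparity(10.0, 5000.0, 2.0))   # 1000.0 m range
print(depth_resolution(1000.0, 5000.0, 2.0))     # 100.0 m per pixel of disparity error
```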

  10. Phantom image results of an optimized full 3D USCT

    NASA Astrophysics Data System (ADS)

    Ruiter, Nicole V.; Zapf, Michael; Hopp, Torsten; Dapp, Robin; Gemmeke, Hartmut

    2012-03-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. A 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT with a nearly isotropic 3D point spread function (PSF), realizing for the first time the full benefits of a 3D system. In this paper, results of the 3D PSF measured with a dedicated phantom and images acquired with a clinical breast phantom are presented. The PSF was shown to be nearly isotropic in 3D, to have very low spatial variability, and to fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 Tesla MRI volume of the breast phantom. Image quality and resolution are isotropic in all three dimensions, confirming the successful optimization experimentally.

  11. Dosimetric Analysis of 3D Image-Guided HDR Brachytherapy Planning for the Treatment of Cervical Cancer: Is Point A-Based Dose Prescription Still Valid in Image-Guided Brachytherapy?

    SciTech Connect

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M. Saiful

    2011-07-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and to compare the dose coverage of the high-risk clinical target volume (HRCTV) with the traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). The brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total of 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for the HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summed and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for the HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for the HRCTV was 83.2 ± 4.3 Gy. This is significantly higher (p < 0.0001) than the mean dose to Point A (78.6 ± 4.4 Gy). The dose levels of the OARs were within acceptable limits for most patients. The mean dose to 2 mL of bladder was 78.0 ± 6.2 Gy, whereas the mean doses to rectum and sigmoid were 57.2 ± 4.4 Gy and 66.9 ± 6.1 Gy, respectively. Image-based 3D brachytherapy provides adequate dose coverage of the HRCTV, with acceptable dose to OARs in most patients. The dose to Point A was found to be significantly lower than the D90 for the HRCTV calculated using the image-based technique. The paradigm shift from 2D point-dose dosimetry to IGBT in HDR cervical cancer treatment calls for a more advanced concept of dosimetric evaluation, together with clinical outcome data on whether this approach improves local control and/or decreases toxicities.
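
    The EQD2 normalization used above follows the standard linear-quadratic conversion, in which each fraction of dose d is weighted by (d + alpha/beta)/(2 + alpha/beta). The sketch below only illustrates the arithmetic, using an assumed fraction schedule and the conventional alpha/beta of 10 Gy for tumor; it is not the paper's actual dose accounting.

```python
# Hedged sketch: EQD2 summation for combined external-beam and HDR brachytherapy doses.

def eqd2(dose_per_fraction, n_fractions, alpha_beta):
    """Equivalent dose in 2 Gy fractions: n * d * (d + a/b) / (2 + a/b)."""
    d = dose_per_fraction
    return n_fractions * d * (d + alpha_beta) / (2.0 + alpha_beta)

# e.g. an assumed 6 Gy x 5 HDR fractions to the HRCTV plus 45 Gy EBRT in 1.8 Gy fractions
total_hrctv = eqd2(6.0, 5, 10.0) + eqd2(1.8, 25, 10.0)
print(total_hrctv)   # ~84 Gy EQD2, within the 80-85 Gy planning aim quoted above
```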

  12. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    NASA Astrophysics Data System (ADS)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth, which for both techniques results in a frame rate of 18 Hz. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256 compared to Explososcan. In terms of FWHM performance, Explososcan and synthetic aperture imaging were found to perform similarly; at 90 mm depth, Explososcan's FWHM performance is 7% better than that of synthetic aperture imaging. Synthetic aperture imaging improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point, and reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four and still, in general, improve the imaging quality.

  13. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,ℂ), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,ℂ) CS theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions in each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  14. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  15. SU-E-T-157: Evaluation and Comparison of Doses to Pelvic Lymph Nodes and to Point B with 3D Image Guided Treatment Planning for High Dose Brachytherapy for Treatment of Cervical Cancer

    SciTech Connect

    Bhandare, N.

    2014-06-01

    Purpose: To estimate and compare the doses received by the obturator, external, and internal iliac lymph nodes and by point B. Methods: CT-MR fused image sets of 15 patients, obtained for each of 5 fractions of HDR brachytherapy using a tandem and ring applicator, were used to generate treatment plans optimized to deliver the prescription dose to the HRCTV D90 and to minimize the doses to organs at risk (OARs). For each image set, the target volumes (GTV, HRCTV), the OARs (bladder, rectum, sigmoid), and both the left and right pelvic lymph nodes (obturator, external and internal iliac lymph nodes) were delineated. Dose-volume histograms (DVH) were generated for the pelvic nodal groups (left and right obturator group, internal and external iliac chains). Per-fraction DVH parameters used for dose comparison included the dose to 100% of the volume (D100) and the doses received by 2 cc (D2cc), 1 cc (D1cc), and 0.1 cc (D0.1cc) of nodal volume. The dose to point B was compared with each DVH parameter using a two-sided t-test, and Pearson correlations were determined to examine the relationship of the point B dose with the nodal DVH parameters. Results: FIGO clinical stage varied from IB1 to IIIB. The median pretreatment tumor diameter measured on MRI was 4.5 cm (2.7-6.4 cm). The median dose to bilateral point B was 1.20 Gy ± 0.12, or 20% of the prescription dose. The correlation coefficients were all <0.60 for all nodal DVH parameters, indicating a low degree of correlation. Only the D2cc of the obturator nodes was not significantly different from the point B dose on the t-test. Conclusion: The dose to point B does not adequately represent the dose to any specific pelvic nodal group. When using image-guided, 3D dose-volume-optimized treatment, nodal groups should be individually identified and delineated to obtain the doses received by the pelvic nodes.

  16. Determining the resolution limits of electron-beam lithography: direct measurement of the point-spread function.

    PubMed

    Manfrinato, Vitor R; Wen, Jianguo; Zhang, Lihua; Yang, Yujia; Hobbs, Richard G; Baker, Bowen; Su, Dong; Zakharov, Dmitri; Zaluzec, Nestor J; Miller, Dean J; Stach, Eric A; Berggren, Karl K

    2014-08-13

    A challenge that has existed since the invention of electron-beam lithography (EBL) is understanding the exposure mechanisms that limit its resolution. To overcome this challenge, we need to understand the spatial distribution of the energy density deposited in the resist, that is, the point-spread function (PSF). During EBL exposure, the processes of electron scattering and of phonon, photon, plasmon, and electron emission in the resist are combined, which complicates the analysis of the EBL PSF. Here, we show the measurement of delocalized energy transfer in EBL exposure by using chromatic aberration-corrected energy-filtered transmission electron microscopy (EFTEM) at the sub-10 nm scale. We have defined the roles of spot size, electron scattering, secondary electrons, and volume plasmons in the lithographic PSF by performing EFTEM, momentum-resolved electron energy loss spectroscopy (EELS), sub-10 nm EBL, and Monte Carlo simulations. We expect that these results will enable alternative ways to improve the resolution limit of EBL. Furthermore, our approach to studying the resolution limits of EBL may be applied to other lithographic techniques where electrons also play a key role in resist exposure, such as ion-beam, X-ray, and extreme-ultraviolet lithography. PMID:24960635

  17. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    SciTech Connect

    Schaefferkoetter, Joshua; Ouyang, Jinsong; Rakvongthai, Yothin; El Fakhri, Georges; Nappi, Carmela

    2014-06-15

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods, including ordinary Poisson reconstruction alone and with PSF, TOF, and PSF+TOF. The channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved the overall defect detection SNR by 8.6% compared to its non-TOF counterpart across all defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed a larger improvement for faint defects. The combination of TOF and PSF improved the mean detection SNR compared to the non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For a typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For large numbers of iterations, TOF+PSF yields the best observer performance.
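
    The channelized Hotelling observer figure of merit used above can be computed from the channel outputs of the defect-present and defect-absent image populations. The sketch below assumes the channel matrix and image arrays are already available; it only illustrates the SNR computation, not the study's full observer model.

```python
# Hedged sketch: channelized Hotelling observer SNR from two image populations.
import numpy as np

def cho_snr(present_imgs, absent_imgs, channels):
    """present_imgs / absent_imgs: (n_images, npix); channels: (npix, n_channels)."""
    vp = present_imgs @ channels                 # channel outputs, defect present
    va = absent_imgs @ channels                  # channel outputs, defect absent
    dv = vp.mean(axis=0) - va.mean(axis=0)       # mean channel-output difference
    S = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    w = np.linalg.solve(S, dv)                   # Hotelling template in channel space
    return float(np.sqrt(dv @ w))                # observer SNR
```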

  18. Equatorial spread F initiation and growth from satellite traces as revealed from conjugate point observations in Brazil

    NASA Astrophysics Data System (ADS)

    Abdu, M. A.; Kherani, E. A.; Batista, I. S.; Reinisch, B. W.; Sobral, J. H. A.

    2014-01-01

    A better understanding of the precursor conditions for the instability growth is very important for identifying the causes of the day-to-day variability in equatorial spread F (ESF)/plasma bubble irregularity development. We investigate here the satellite trace (S-trace) in ionograms, a precursor to postsunset ESF occurrence, as observed by Digisondes operated at an equatorial site and two magnetic conjugate sites in Brazil during a 66-day observational campaign (Conjugate Point Equatorial Experiment 2002). The satellite traces first occur at the equatorial site and then, after a variable delay of approximately 20 to 50 min, they are observed nearly simultaneously over the two conjugate sites. The evening prereversal enhancement in the zonal electric field/vertical drift is found to control their development. Using a three-dimensional simulation code based on the collisional interchange instability mechanism, it is shown that the observed S-trace occurrence sequence is fully consistent with instability initiation over the equator, with the vertical growth of the field-aligned plasma depletion marked by the latitudinal expansion of its extremities to the conjugate locations. The delay in the S-trace occurrence at the conjugate sites (a measure of the nonlinear growth of the instability for plasma depletion) is also controlled by the field-line-parallel (meridional) neutral wind. The relationship between the S-trace and the large-scale wave structure in the F layer, another widely known characterization of the precursor conditions for ESF development, is also clarified.

  19. Impact of the point spread function on maximum standardized uptake value measurements in patients with pulmonary cancer.

    PubMed

    Gellee, S; Page, J; Sanghera, B; Payoux, P; Wagner, Thomas

    2014-05-01

    The maximum standardized uptake value (SUVmax) from fluorodeoxyglucose (FDG) positron emission tomography (PET) scans is a semiquantitative measure that is increasingly used in clinical practice for diagnostic and therapeutic response assessment. Technological advances such as the implementation of the point spread function (PSF) in the reconstruction algorithm have led to a higher signal-to-noise ratio and increased spatial resolution. The impact on SUVmax measurements has not previously been studied in a clinical setting. We studied the impact of PSF on SUVmax in 30 consecutive lung cancer patients. SUVmax values were measured on PET-computed tomography (CT) scans reconstructed iteratively with and without PSF (high-definition [HD] and non-HD, respectively). HD SUVmax values were significantly higher than non-HD SUVmax values, with excellent correlation between HD and non-HD values. Details of the reconstruction, and of the PSF implementation in particular, have important consequences for SUV values. Nuclear medicine physicians and radiologists should be aware of the reconstruction parameters of PET-CT scans when they report or rely on SUV measurements. PMID:25191128

  20. Depth profiling of gold nanoparticles and characterization of point spread functions in reconstructed and human skin using multiphoton microscopy.

    PubMed

    Labouta, Hagar I; Hampel, Martina; Thude, Sibylle; Reutlinger, Katharina; Kostka, Karl-Heinz; Schneider, Marc

    2012-01-01

    Multiphoton microscopy has become popular for studying dermal nanoparticle penetration. This necessitates studying the imaging parameters of multiphoton microscopy with skin as the imaging medium, in terms of achievable detection depths and the resolution limit, so as to simulate real-case scenarios rather than depending on theoretical values determined under ideal conditions. This study focused on depth profiling of sub-resolution gold nanoparticles (AuNP) in reconstructed (fixed and unfixed) and human skin using multiphoton microscopy. Point spread functions (PSF) were determined for the water-immersion objective used (63×/NA = 1.2). Factors such as skin-tissue compactness and the presence of wrinkles were found to deteriorate the accuracy of depth profiling. A broad range of AuNP detection depths (20-100 μm) in reconstructed skin was observed, whereas AuNP could only be detected up to ∼14 μm depth in human skin. Lateral (0.5 ± 0.1 μm) and axial (1.0 ± 0.3 μm) PSF widths in reconstructed and human specimens were determined. Skin cells and intercellular components did not degrade the PSF with depth. In summary, the imaging parameters of multiphoton microscopy in skin and the practical limitations encountered in tracking nanoparticle penetration with this approach were investigated.

  1. PET image reconstruction with a system matrix containing point spread function derived from single photon incidence response

    NASA Astrophysics Data System (ADS)

    Fan, Xin; Wang, Hai-Peng; Yun, Ming-Kai; Sun, Xiao-Li; Cao, Xue-Xiang; Liu, Shuang-Quan; Chai, Pei; Li, Dao-Wu; Liu, Bao-Dong; Wang, Lu; Wei, Long

    2015-01-01

    A point spread function (PSF) for the blurring component in positron emission tomography (PET) is studied. The PSF matrix is derived from the single photon incidence response function, and a statistical iterative reconstruction (IR) method based on the system matrix containing this PSF is developed. More specifically, the gamma photon incidence upon a crystal array is simulated by Monte Carlo (MC) simulation, and the single photon incidence response functions are calculated. These response functions are then used to compute the coincidence blurring factors according to the physical process of PET coincidence detection. By weighting the ordinary system matrix response with the coincidence blurring factors, the IR system matrix containing the PSF is finally established. Using this system matrix, the image is reconstructed by an ordered subset expectation maximization (OSEM) algorithm. The experimental results show that the proposed system matrix can substantially improve the image radial resolution, contrast, and noise properties. Furthermore, the simulated single gamma-ray incidence response function depends only on the crystal configuration, so the method could be extended to any PET scanner with the same detector crystal configuration. Project supported by the National Natural Science Foundation of China (Grant Nos. Y4811H805C and 81101175).
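
    Once the PSF-weighted system matrix is available, the OSEM image update takes its usual multiplicative form. The sketch below shows that update for a dense toy system matrix with assumed shapes; building the actual PSF-weighted matrix from the single-photon incidence response is not reproduced here.

```python
# Hedged sketch: OSEM reconstruction with a (PSF-weighted) system matrix.
import numpy as np

def osem(subsets, x0, n_iter=3, eps=1e-12):
    """subsets: list of (A_s, y_s) pairs, one per OSEM subset, where A_s is the
    system sub-matrix (rows = LORs in the subset, columns = image voxels) and
    y_s the corresponding measured counts; x0: initial image estimate."""
    x = x0.copy()
    for _ in range(n_iter):
        for A_s, y_s in subsets:
            proj = A_s @ x + eps                                   # forward projection
            x *= (A_s.T @ (y_s / proj)) / (A_s.T @ np.ones(len(y_s)) + eps)
    return x
```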

  2. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current tool set includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  3. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data is acquired in motion, thus it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject and environmental sensor data or other factors influencing a confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  4. Evaluation of the monocular depth cue in 3D displays.

    PubMed

    Kim, Sung-Kyu; Kim, Dong-Wook; Kwon, Yong Moo; Son, Jung-Young

    2008-12-22

    Binocular disparity and monocular depth information are the principal functions of ideal 3D displays. 3D display systems such as stereoscopic or multi-view, super multi-view (SMV), and multi-focus (MF) displays were considered for testing the degree to which monocular accommodation is satisfied at three different depths of 3D object points. The numerical simulations and experimental results show that the MF 3D display provides a monocular depth cue. In addition, the experimental results for the monocular MF 3D display show clear monocular focus at four different depths. Therefore, the MF 3D display can be applied to monocular 3D displays.

  5. Principle and characteristics of 3D display based on random source constructive interference.

    PubMed

    Li, Zhiyang

    2014-07-14

    The paper discusses the principle and characteristics of a 3D display based on random source constructive interference (RSCI). The voxels of discrete 3D images are formed in the air via constructive interference of spherical light waves emitted by point light sources (PLSs) that are arranged at random positions to suppress high-order diffraction. The PLSs might be created by two liquid crystal panels sandwiched between two micro-lens arrays. The point spread function of the system shows that it is able to reconstruct voxels with diffraction-limited resolution over a large field width and depth. The high resolution was confirmed by experiments. Theoretical analysis also shows that the system could provide 3D image contrast and gray levels no lower than those of the liquid crystal panels. Compared with 2D display, it needs only additional depth information, which adds only about 30% more data.

  6. Optical performance of the JWST/MIRI flight model: characterization of the point spread function at high resolution

    NASA Astrophysics Data System (ADS)

    Guillard, P.; Rodet, T.; Ronayette, S.; Amiaux, J.; Abergel, A.; Moreau, V.; Augueres, J. L.; Bensalem, A.; Orduna, T.; Nehmé, C.; Belu, A. R.; Pantin, E.; Lagage, P.-O.; Longval, Y.; Glasse, A. C. H.; Bouchet, P.; Cavarroc, C.; Dubreuil, D.; Kendrew, S.

    2010-07-01

    The Mid Infra Red Instrument (MIRI) is one of the four instruments onboard the James Webb Space Telescope (JWST), providing imaging, coronagraphy and spectroscopy over the 5-28 μm band. To verify the optical performance of the instrument, extensive tests were performed at CEA on the flight model (FM) of the Mid-InfraRed IMager (MIRIM) at cryogenic temperatures and in the infrared. This paper reports on the point spread function (PSF) measurements at 5.6 μm, the shortest operating wavelength for imaging. At 5.6 μm, the PSF is not Nyquist-sampled, so we use an original technique that combines a microscanning measurement strategy with a deconvolution algorithm to obtain an over-resolved MIRIM PSF. The microscanning consists of a sub-pixel scan of a point source on the focal plane. A data inversion method is used to reconstruct PSF images that are over-resolved by a factor of 7 compared to the native resolution of MIRI. We show that the FWHMs of the high-resolution PSFs were 5-10% wider than those obtained with Zemax simulations. The main cause was identified as an out-of-specification tilt of the M4 mirror. After correction, two additional test campaigns were carried out, and we show that the shape of the PSF conforms to expectations. The FWHMs of the PSFs are 0.18-0.20 arcsec, in agreement with simulations. Between 56.1% and 59.2% of the total encircled energy (normalized to a 5 arcsec radius) is contained within the first dark Airy ring over the whole field of view; at longer wavelengths (7.7-25.5 μm), this percentage is 57-68%. MIRIM is thus compliant with the optical quality requirements. This characterization of the MIRIM PSF, as well as the deconvolution method presented here, is of particular importance not only for the verification of the optical quality and the MIRI calibration, but also for scientific applications.

  7. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  8. Validation of image processing tools for 3-D fluorescence microscopy.

    PubMed

    Dieterlen, Alain; Xu, Chengqi; Gramain, Marie-Pierre; Haeberlé, Olivier; Colicchio, Bruno; Cudel, Christophe; Jacquey, Serge; Ginglinger, Emanuelle; Jung, Georges; Jeandidier, Eric

    2002-04-01

    3-D optical fluorescence microscopy has become an efficient tool for the volumetric investigation of living biological samples. Using the optical sectioning technique, a stack of 2-D images is obtained. However, due to the nature of the system's optical transfer function and to non-optimal experimental conditions, the acquired raw data usually suffer from distortions. In order to carry out biological analysis, the raw data have to be restored by deconvolution. Identifying the system through its point-spread function is useful for obtaining knowledge of the actual system and experimental parameters, which is necessary to restore the raw data; it is furthermore helpful for refining the experimental protocol. In order to facilitate the use of image processing techniques, a multi-platform-compatible software package called VIEW3D has been developed. It integrates a set of tools for the analysis of fluorescence images from 3-D wide-field or confocal microscopy. A number of regularisation parameters for data restoration are determined automatically. Common geometrical measurements and morphological descriptors of fluorescent sites are also implemented to facilitate the characterisation of biological samples. An example of this method concerning cytogenetics is presented.
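
    As a concrete example of the kind of restoration such a toolbox supports, the sketch below applies Richardson-Lucy deconvolution to a 3-D stack using a measured PSF. This is a generic illustration only; VIEW3D's own regularised algorithms and its automatic parameter selection are not reproduced here.

```python
# Hedged sketch: Richardson-Lucy deconvolution of a 3-D stack with a measured PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(stack, psf, n_iter=20, eps=1e-12):
    psf = psf / psf.sum()                             # normalise the measured PSF
    psf_mirror = psf[::-1, ::-1, ::-1]                # flipped PSF for the correlation step
    estimate = np.full_like(stack, stack.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        estimate *= fftconvolve(stack / blurred, psf_mirror, mode="same")
    return estimate
```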

  9. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  10. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  11. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  12. 3D facial expression modeling for recognition

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.; Dass, Sarat C.

    2005-03-01

    Current two-dimensional image-based face recognition systems encounter difficulties with large variations in facial appearance due to pose, illumination and expression changes. Utilizing 3D information of human faces is promising for handling the pose and lighting variations. While the 3D shape of a face does not change due to head pose (rigid) and lighting changes, it is not invariant to non-rigid facial movement and evolution, such as expressions and aging effects. We propose a facial surface matching framework to match multiview facial scans to a 3D face model, where the (non-rigid) expression deformation is explicitly modeled for each subject, resulting in a person-specific deformation model. The thin plate spline (TPS) is applied to model the deformation based on the facial landmarks. The deformation is applied to the 3D neutral expression face model to synthesize the corresponding expression. Both the neutral and the synthesized 3D surface models are used to match a test scan. The surface registration and matching between a test scan and a 3D model are achieved by a modified Iterative Closest Point (ICP) algorithm. Preliminary experimental results demonstrate that the proposed expression modeling and recognition-by-synthesis schemes improve the 3D matching accuracy.
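    The paper uses a modified ICP for registration; as background only, the sketch below shows the standard rigid ICP loop (nearest neighbours plus a Kabsch/SVD step), with `scan` and `model` assumed to be (N, 3) and (M, 3) point arrays.

    ```python
    # Standard rigid ICP sketch, not the modified ICP of the paper.
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(scan, model, n_iters=20):
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(model)
        src = scan.copy()
        for _ in range(n_iters):
            _, idx = tree.query(src)               # closest model point per scan point
            dst = model[idx]
            mu_s, mu_d = src.mean(0), dst.mean(0)
            H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T                # optimal rotation (Kabsch)
            t_step = mu_d - R_step @ mu_s
            src = src @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t                                # maps the scan into the model frame
    ```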

  13. Probabilistic point source inversion of strong-motion data in 3-D media using pattern recognition: A case study for the 2008 Mw 5.4 Chino Hills earthquake

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; Trampert, Jeannot

    2016-08-01

    Despite the ever increasing availability of computational power, real-time source inversions based on physical modeling of wave propagation in realistic media remain challenging. We investigate how a nonlinear Bayesian approach based on pattern recognition and synthetic 3-D Green's functions can be used to rapidly invert strong-motion data for point source parameters by means of a case study for a fault system in the Los Angeles Basin. The probabilistic inverse mapping is represented in compact form by a neural network which yields probability distributions over source parameters. It can therefore be evaluated rapidly and with very moderate CPU and memory requirements. We present a simulated real-time inversion of data for the 2008 Mw 5.4 Chino Hills event. Initial estimates of epicentral location and magnitude are available ~14 s after origin time. The estimate can be refined as more data arrive: by ~40 s, fault strike and source depth can also be determined with relatively high certainty.

  14. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments automatically, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
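    As context for the plane-detection step, the sketch below is a brute-force 3-D Hough transform in which every point votes for plane parameters (theta, phi, rho), with the plane written as n(theta, phi) . p = rho. Bin counts and step sizes are arbitrary choices, and this is not the paper's optimized implementation.

    ```python
    # Brute-force 3-D Hough transform for the dominant plane (illustration only).
    import numpy as np

    def hough_dominant_plane(points, n_theta=45, n_phi=90, rho_step=0.05):
        theta = np.linspace(0.0, np.pi / 2, n_theta)         # polar angle of the normal
        phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
        T, P = np.meshgrid(theta, phi, indexing="ij")
        normals = np.stack([np.cos(P) * np.sin(T),
                            np.sin(P) * np.sin(T),
                            np.cos(T)], axis=-1).reshape(-1, 3)

        rho = points @ normals.T                             # signed distances, (N, cells)
        rho_max = np.abs(rho).max()
        n_rho = int(np.ceil(2 * rho_max / rho_step)) + 1
        rho_idx = np.round((rho + rho_max) / rho_step).astype(int)

        acc = np.zeros((normals.shape[0], n_rho), dtype=int)
        for cell in range(normals.shape[0]):
            np.add.at(acc[cell], rho_idx[:, cell], 1)        # accumulate votes

        cell, rbin = np.unravel_index(acc.argmax(), acc.shape)
        return normals[cell], rbin * rho_step - rho_max      # plane: n . p = rho
    ```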

  15. Near field 3D scene simulation for passive microwave imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wu, Ji

    2006-10-01

    Scene simulation is a necessary task in near-field passive microwave remote sensing. A 3-D scene simulation model of microwave radiometric imaging based on the ray tracing method is presented in this paper. The model accounts for the essential influencing factors and general requirements, such as rough surface radiation, the sky radiation which acts as the dominant illuminator in outdoor conditions, the polarization rotation of the temperature rays caused by multiple reflections, and the antenna point spread function which determines the resolution of the model's final outputs. Using this model we simulated a virtual scene and analyzed the resulting microwave radiometric phenomenology; finally, two real scenes, a building and an airstrip, were simulated to validate the model. The comparison between the simulations and field measurements indicates that the model is feasible in practice. Furthermore, we analyzed the signatures of the model outputs and identified some underlying phenomenology of microwave radiation which differs from that in the optical and infrared bands.

  16. Determination of the wave-front aberration function from measured values of the point-spread function: a two-dimensional phase retrieval problem.

    NASA Astrophysics Data System (ADS)

    Barakat, R.; Sandler, B. H.

    1992-10-01

    The authors outline a method for the determination of the unknown wave-front aberration function of an optical system from noisy measurements of the corresponding point-spread function. The problem is cast as a nonlinear unconstrained minimization problem, and trust region techniques are employed for its solution in conjunction with analytic evaluations of the Jacobian and Hessian matrices governing slope and curvature information. Some illustrative numerical results are presented and discussed.
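    The sketch below illustrates the same idea in simplified form: parameterize the pupil phase with a few aberration coefficients, predict the PSF with a Fourier model, and fit the coefficients to the measured PSF with scipy's trust-region least-squares solver. The three-term basis and grid size are assumptions, and a real fit can stall in local minima.

    ```python
    # Toy phase-retrieval sketch: recover aberration coefficients from a PSF.
    import numpy as np
    from scipy.optimize import least_squares

    N = 64
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    r2 = x**2 + y**2
    pupil = r2 <= 1.0
    basis = np.stack([r2, x * (3 * r2 - 2), y * (3 * r2 - 2)])   # defocus- and coma-like terms

    def model_psf(coeffs):
        phase = np.tensordot(coeffs, basis, axes=1)
        field = pupil * np.exp(1j * phase)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
        return psf / psf.sum()

    measured = model_psf(np.array([0.8, -0.3, 0.5]))             # stand-in for measured data

    fit = least_squares(lambda c: (model_psf(c) - measured).ravel(),
                        x0=np.zeros(3))                          # 'trf' trust-region by default
    print("recovered coefficients:", fit.x)
    ```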

  17. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  18. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching trace by trace the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
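    The code described above is a 3-D elastic staggered-grid solver; the sketch below only illustrates the stated discretization (2nd order in time, 4th order in space) on a 1-D acoustic analogue, with grid spacing, time step, and source parameters chosen arbitrarily.

    ```python
    # 1-D acoustic analogue of an explicit (2,4) finite-difference time-stepping scheme.
    import numpy as np

    nx, dx, dt, nt = 400, 10.0, 1e-3, 1000
    c = np.full(nx, 3000.0)                        # velocity model (m/s); CFL number ~0.3 here
    p_prev, p = np.zeros(nx), np.zeros(nx)
    src_ix = nx // 2

    for it in range(nt):
        lap = np.zeros(nx)
        lap[2:-2] = (-p[:-4] + 16 * p[1:-3] - 30 * p[2:-2]
                     + 16 * p[3:-1] - p[4:]) / (12 * dx**2)       # 4th-order spatial derivative
        p_next = 2 * p - p_prev + (c * dt)**2 * lap               # 2nd-order time update
        p_next[src_ix] += np.exp(-(((it * dt) - 0.1) / 0.02)**2)  # Gaussian source wavelet
        p_prev, p = p, p_next
    ```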

  19. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  20. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  1. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smart phones, there has been significant growth in the mobile TV market. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, there is an important consideration for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into account before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system when viewers are exposed to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect them against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  2. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  3. Design of 3d Topological Data Structure for 3d Cadastre Objects

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. A.; Rahman, A. Abdul; Hassan, M. I.

    2016-09-01

    This paper describes the design of a 3D modelling and topological data structure for cadastre objects based on Land Administration Domain Model (LADM) specifications. The Tetrahedral Network (TEN) is selected as the 3D topological data structure for this project. Data modelling is based on the LADM standard and uses five classes (i.e. point, boundary face string, boundary face, tetrahedron and spatial unit). This research aims to enhance the current cadastral system by incorporating a 3D topology model based on the LADM standard.
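    As a reading aid only, the sketch below writes out the five classes named above as plain data containers; the attribute names are assumptions for illustration and do not reproduce the LADM/TEN schema.

    ```python
    # Illustrative containers for the five classes: point, boundary face string,
    # boundary face, tetrahedron, spatial unit (attribute names are assumed).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Point:
        id: int
        x: float
        y: float
        z: float

    @dataclass
    class BoundaryFaceString:
        id: int
        point_ids: List[int]                 # ordered ring of point ids

    @dataclass
    class BoundaryFace:
        id: int
        face_string_ids: List[int]           # outer ring and optional holes

    @dataclass
    class Tetrahedron:
        id: int
        point_ids: List[int]                 # exactly four point ids

    @dataclass
    class SpatialUnit:
        id: int
        tetrahedron_ids: List[int] = field(default_factory=list)   # volume as a TEN
    ```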

  4. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    In this article, a system used to reconstruct locomotive wheels is described, helping workers assess the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. We use the 2D camera to capture the line-laser light reflected by the object, a wheel, and then compute the final coordinates of the structured light. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and render a 3D view of the wheel. The article also proposes the system structure, processing steps and methods, and sets up an experimental platform to verify the design proposal. We verify the feasibility of the whole process and analyze the results against standard data. The test results show that the system works well and reconstructs the wheel with high accuracy. Because no such application is yet in use in the railway industry, the system has practical value for railway inspection.

  5. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  6. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  7. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  8. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  9. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a further development of real-time two-dimensional (2D) sonography for noninvasive diagnostic imaging. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings in the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for 3D imaging of processed data. Considerable differences were found between the three techniques in the estimated volumes of the liver findings. 3D ultrasound is a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible.

  10. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  11. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  12. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-20

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices.

  13. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky) and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  14. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  15. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
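    The sketch below covers only the keypoint detection/matching step and a vertical-disparity check using OpenCV's ORB features; the file names are placeholders, and the paper's frame rejection and roll/pitch/yaw/scale estimation are not reproduced.

    ```python
    # Keypoint matching between a stereo pair and median vertical disparity (illustration).
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)[:500]

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])
    print("median vertical disparity (px):", np.median(pts_l[:, 1] - pts_r[:, 1]))
    ```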

  16. Superplot3d: an open source GUI tool for 3d trajectory visualisation and elementary processing.

    PubMed

    Whitehorn, Luke J; Hawkes, Frances M; Dublon, Ian An

    2013-09-30

    When acquiring simple three-dimensional (3d) trajectory data it is common to accumulate large coordinate data sets. In order to examine integrity and consistency of object tracking, it is often necessary to rapidly visualise these data. Ordinarily, to achieve this the user must either execute 3d plotting functions in a numerical computing environment or manually inspect data in two dimensions, plotting each individual axis. Superplot3d is an open source MATLAB script which takes tab-delimited Cartesian data points in the form x, y, z and time and generates an instant visualization of the object's trajectory in free-rotational three dimensions. Whole trajectories may be instantly presented, allowing for rapid inspection. Executable from the MATLAB command line (or deployable as a compiled standalone application) superplot3d also provides simple GUI controls to obtain rudimentary trajectory information, allow specific visualization of trajectory sections and perform elementary processing. Superplot3d thus provides a framework for non-programmers and programmers alike to recreate recently acquired 3d object trajectories in rotatable 3d space. It is intended, via the use of a preference-driven menu, to be flexible and work with output from multiple tracking software systems. Source code and accompanying GUIDE .fig files are provided for deployment and further development.
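    Superplot3d itself is a MATLAB tool; purely as an illustration of the same idea, the sketch below loads tab-delimited x, y, z, t columns (file name assumed) and shows the trajectory in a rotatable matplotlib 3-D view.

    ```python
    # Minimal rotatable 3-D trajectory view (not the superplot3d tool).
    import numpy as np
    import matplotlib.pyplot as plt

    x, y, z, t = np.loadtxt("trajectory.tsv", delimiter="\t", unpack=True)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")          # free-rotation 3-D axes
    ax.plot(x, y, z, lw=0.8)
    ax.scatter(x[0], y[0], z[0], c="g", label="start")
    ax.scatter(x[-1], y[-1], z[-1], c="r", label="end")
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    ax.legend()
    plt.show()
    ```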

  17. Superplot3d: an open source GUI tool for 3d trajectory visualisation and elementary processing.

    PubMed

    Whitehorn, Luke J; Hawkes, Frances M; Dublon, Ian An

    2013-01-01

    When acquiring simple three-dimensional (3d) trajectory data it is common to accumulate large coordinate data sets. In order to examine integrity and consistency of object tracking, it is often necessary to rapidly visualise these data. Ordinarily, to achieve this the user must either execute 3d plotting functions in a numerical computing environment or manually inspect data in two dimensions, plotting each individual axis. Superplot3d is an open source MATLAB script which takes tab-delimited Cartesian data points in the form x, y, z and time and generates an instant visualization of the object's trajectory in free-rotational three dimensions. Whole trajectories may be instantly presented, allowing for rapid inspection. Executable from the MATLAB command line (or deployable as a compiled standalone application) superplot3d also provides simple GUI controls to obtain rudimentary trajectory information, allow specific visualization of trajectory sections and perform elementary processing. Superplot3d thus provides a framework for non-programmers and programmers alike to recreate recently acquired 3d object trajectories in rotatable 3d space. It is intended, via the use of a preference-driven menu, to be flexible and work with output from multiple tracking software systems. Source code and accompanying GUIDE .fig files are provided for deployment and further development. PMID:24079529

  18. 3D-GNOME: an integrated web service for structural modeling of the 3D genome.

    PubMed

    Szalaj, Przemyslaw; Michalski, Paul J; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz

    2016-07-01

    Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/.

  19. 3D-GNOME: an integrated web service for structural modeling of the 3D genome

    PubMed Central

    Szalaj, Przemyslaw; Michalski, Paul J.; Wróblewski, Przemysław; Tang, Zhonghui; Kadlof, Michal; Mazzocco, Giovanni; Ruan, Yijun; Plewczynski, Dariusz

    2016-01-01

    Recent advances in high-throughput chromosome conformation capture (3C) technology, such as Hi-C and ChIA-PET, have demonstrated the importance of 3D genome organization in development, cell differentiation and transcriptional regulation. There is now a widespread need for computational tools to generate and analyze 3D structural models from 3C data. Here we introduce our 3D GeNOme Modeling Engine (3D-GNOME), a web service which generates 3D structures from 3C data and provides tools to visually inspect and annotate the resulting structures, in addition to a variety of statistical plots and heatmaps which characterize the selected genomic region. Users submit a bedpe (paired-end BED format) file containing the locations and strengths of long range contact points, and 3D-GNOME simulates the structure and provides a convenient user interface for further analysis. Alternatively, a user may generate structures using published ChIA-PET data for the GM12878 cell line by simply specifying a genomic region of interest. 3D-GNOME is freely available at http://3dgnome.cent.uw.edu.pl/. PMID:27185892

  20. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  2. 3D-Measuring for Head Shape Covering Hair

    NASA Astrophysics Data System (ADS)

    Kato, Tsukasa; Hattori, Koosuke; Nomura, Takuya; Taguchi, Ryo; Hoguro, Masahiro; Umezaki, Taizo

    3D measurement is attracting attention because 3D displays are spreading rapidly. In particular, the face and head need to be measured for content production. However, measuring hair remains difficult. The purpose of this research is therefore to measure the face and hair with the phase shift method. By using sine-pattern images adapted for hair measurement, the problems of hair measurement, namely dark color and reflection, are resolved.
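    The paper's hair-adapted sine patterns are not reproduced here; the sketch below only shows the standard four-step phase-shift computation that underlies the method, where four fringe images shifted by 90 degrees yield the wrapped phase at each pixel.

    ```python
    # Standard 4-step phase-shift formula (illustration of the generic method).
    import numpy as np

    def wrapped_phase(i0, i90, i180, i270):
        """i0..i270: images captured under sine patterns shifted by 0/90/180/270 degrees."""
        return np.arctan2(i270 - i90, i0 - i180)   # wrapped to (-pi, pi]

    # The wrapped phase is then unwrapped and converted to depth using the
    # calibrated projector-camera geometry (not shown).
    ```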

  3. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction.

    PubMed

    Min, Junhong; Holden, Seamus J; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-11-01

    Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging since PSFs are far more similar along the axial dimension than the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum.

  4. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  6. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective pixel identification and implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
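    For background, the sketch below computes the seven classic NVESD 3-D noise components of a (time, row, column) cube by directional averaging; it does not include the finite sampling correction or the spatially resolved sub-cube evaluation described in the paper.

    ```python
    # Classic 3-D noise decomposition via directional averaging (no sampling correction).
    import numpy as np

    def noise_3d(cube):
        """cube: ndarray of shape (T, V, H) from a uniform-irradiance scene."""
        u = cube - cube.mean()
        D = lambda a, axis: a.mean(axis=axis, keepdims=True)   # directional average
        I = lambda a, axis: a - D(a, axis)                     # residual (1 - D)
        comps = {
            "sigma_tvh": I(I(I(u, 0), 1), 2),   # random spatio-temporal noise
            "sigma_vh":  I(I(D(u, 0), 1), 2),   # fixed-pattern spatial noise
            "sigma_tv":  I(I(D(u, 2), 0), 1),   # temporally varying row noise
            "sigma_th":  I(I(D(u, 1), 0), 2),   # temporally varying column noise
            "sigma_v":   I(D(D(u, 0), 2), 1),   # fixed row noise
            "sigma_h":   I(D(D(u, 0), 1), 2),   # fixed column noise
            "sigma_t":   I(D(D(u, 1), 2), 0),   # frame-to-frame noise
        }
        return {name: float(np.std(v)) for name, v in comps.items()}
    ```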

  7. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  8. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  9. Adaptive optics point spread function reconstruction: lessons learned from on-sky experiment on Altair/Gemini and pathway for future systems

    NASA Astrophysics Data System (ADS)

    Jolissaint, Laurent; Christou, Julian; Wizinowich, Peter; Tolstoy, Eline

    2010-07-01

    We present the results of an on-sky point spread function reconstruction (PSF-R) experiment for the Gemini North telescope adaptive optics system, Altair, in its simplest mode: a bright on-axis natural guide star. We demonstrate that our PSF-R method does work for system performance diagnostics but suffers from hidden telescope and system aberrations that are not accounted for in the model, making the reconstruction unsuccessful for Altair, for now. We discuss the probable origin of the discrepancy. In the last section, we propose alternative PSF-R methods for future multiple natural and laser guide star systems.

  10. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  11. A 2D driven 3D vessel segmentation algorithm for 3D digital subtraction angiography data

    NASA Astrophysics Data System (ADS)

    Spiegel, M.; Redel, T.; Struffert, T.; Hornegger, J.; Doerfler, A.

    2011-10-01

    Cerebrovascular disease is among the leading causes of death in western industrial nations. 3D rotational angiography delivers indispensable information on vessel morphology and pathology. Physicians make use of this to analyze vessel geometry in detail, i.e. vessel diameters, location and size of aneurysms, to come up with a clinical decision. 3D segmentation is a crucial step in this pipeline. Although a lot of different methods are available nowadays, all of them lack a method to validate the results for the individual patient. Therefore, we propose a novel 2D digital subtraction angiography (DSA)-driven 3D vessel segmentation and validation framework. 2D DSA projections are clinically considered as gold standard when it comes to measurements of vessel diameter or the neck size of aneurysms. An ellipsoid vessel model is applied to deliver the initial 3D segmentation. To assess the accuracy of the 3D vessel segmentation, its forward projections are iteratively overlaid with the corresponding 2D DSA projections. Local vessel discrepancies are modeled by a global 2D/3D optimization function to adjust the 3D vessel segmentation toward the 2D vessel contours. Our framework has been evaluated on phantom data as well as on ten patient datasets. Three 2D DSA projections from varying viewing angles have been used for each dataset. The novel 2D driven 3D vessel segmentation approach shows superior results against state-of-the-art segmentations like region growing, i.e. an improvement of 7.2% points in precision and 5.8% points for the Dice coefficient. This method opens up future clinical applications requiring the greatest vessel accuracy, e.g. computational fluid dynamic modeling.

  12. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with a high-performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline, allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications on the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high-performance 3D graphics.

  13. 3D measurement using circular gratings

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2013-09-01

    3D measurement using methods of structured light is well known in the industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to achieve 3D information. Originally suggested by John Caulfield around 1990, the method had some interesting potential, particularly if combined with alternate means of measurement beyond traditional triangulation, including depth-from-focus methods. A central reference point in the projected pattern may offer capabilities not as easily attained with a linear grating pattern. This paper will explore the pros and cons of the method and present some examples of possible applications.

  14. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory or parabolized stability equations based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.

  17. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that allows a solid object to be obtained from a 3D model created with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is built by superposing one layer on the others, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A commonly used printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  18. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  1. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    With the success of digitally revived stereoscopic cinema, events beyond 3D movies, e.g. interactive 3D games, are becoming attractive for movie theater operators. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  2. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  3. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  4. Computational optical-sectioning microscopy for 3D quantization of cell motion: results and challenges

    NASA Astrophysics Data System (ADS)

    McNally, James G.

    1994-09-01

    How cells move and navigate within a 3D tissue mass is of central importance in such diverse problems as embryonic development, wound healing and metastasis. This locomotion can now be visualized and quantified by using computational optical-sectioning microscopy. In this approach, a series of 2D images at different depths in a specimen are stacked to construct a 3D image, and then with a knowledge of the microscope's point-spread function, the actual distribution of fluorescent intensity in the specimen is estimated via computation. When coupled with wide-field optics and a cooled CCD camera, this approach permits non-destructive 3D imaging of living specimens over long time periods. With these techniques, we have observed a complex diversity of motile behaviors in a model embryonic system, the cellular slime mold Dictyostelium. To understand the mechanisms which control these various behaviors, we are examining motion in various Dictyostelium mutants with known defects in proteins thought to be essential for signal reception, cell-cell adhesion or locomotion. This application of computational techniques to analyze 3D cell locomotion raises several technical challenges. Image restoration techniques must be fast enough to process numerous 1 Gbyte time-lapse data sets (16 Mbytes per 3D image × 60 time points). Because some cells are weakly labeled and background intensity is often high due to unincorporated dye, the SNR in some of these images is poor. Currently, the images are processed by a regularized linear least-squares restoration method, and occasionally by a maximum-likelihood method. Also required for these studies are accurate automated tracking procedures to generate both 3D trajectories for individual cells and 3D flows for a group of cells. Tracking is currently done independently for each cell, using a cell's image as a template to search for a similar image at the next time point. Finally, sophisticated visualization techniques are needed to view the
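
    A minimal sketch of a regularized linear least-squares (Tikhonov) restoration with a known, shift-invariant PSF, of the general kind mentioned above (the synthetic 3D stack and Gaussian-shaped PSF are assumptions; the actual processing uses measured microscope PSFs and, occasionally, maximum-likelihood restoration):

    ```python
    # Tikhonov-regularized least-squares deconvolution in the Fourier domain (illustrative only).
    import numpy as np

    def tikhonov_deconvolve(image, psf, lam=1e-2):
        """Solve argmin_f ||psf * f - image||^2 + lam ||f||^2, with * denoting convolution."""
        H = np.fft.rfftn(np.fft.ifftshift(psf), s=image.shape)
        G = np.fft.rfftn(image)
        F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
        return np.fft.irfftn(F, s=image.shape)

    # Example on a synthetic 3D stack blurred by a Gaussian-like PSF plus noise.
    rng = np.random.default_rng(0)
    obj = np.zeros((32, 64, 64)); obj[16, 32, 32] = 1.0; obj[16, 40, 20] = 0.7
    z, y, x = np.indices(obj.shape)
    c = np.array(obj.shape) / 2
    psf = np.exp(-(((z - c[0]) / 2.0) ** 2 + ((y - c[1]) / 1.2) ** 2 + ((x - c[2]) / 1.2) ** 2))
    psf /= psf.sum()
    blurred = np.fft.irfftn(np.fft.rfftn(obj) * np.fft.rfftn(np.fft.ifftshift(psf), s=obj.shape),
                            s=obj.shape) + 0.01 * rng.standard_normal(obj.shape)
    restored = tikhonov_deconvolve(blurred, psf, lam=1e-2)
    print(restored.shape, float(restored.max()))
    ```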

  5. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-03-01

    Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro to living-cell experiments. Observing both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution. Since protein dynamics inside a cell involve all three dimensions, we developed an automated routine for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy, exploiting the properties of the point spread function of out-of-focus quantum dots bound to the protein of interest.
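
    A purely illustrative sketch of the general idea of reading the axial (z) position of an emitter from the size of its defocused spot via a calibration curve (the calibration, spot model and all numbers are hypothetical; the published tool fits the full out-of-focus quantum-dot pattern and runs automatically on live-cell movies):

    ```python
    import numpy as np

    # Hypothetical calibration: RMS spot radius (pixels) versus defocus z (nm), z >= 0 branch.
    z_cal = np.linspace(0.0, 1000.0, 21)
    radius_cal = 2.0 + 0.004 * z_cal

    def spot_radius(img):
        """Centroid and second-moment (RMS) radius of a background-subtracted spot image."""
        y, x = np.indices(img.shape)
        cy, cx = (img * y).sum() / img.sum(), (img * x).sum() / img.sum()
        r = np.sqrt((img * ((y - cy) ** 2 + (x - cx) ** 2)).sum() / img.sum())
        return r, (cy, cx)

    # Synthetic defocused spot whose RMS radius corresponds to z = +400 nm in the calibration.
    true_r = 2.0 + 0.004 * 400.0
    y, x = np.indices((31, 31)) - 15
    spot = np.exp(-(x ** 2 + y ** 2) / true_r ** 2)       # 2D Gaussian with RMS radius ~ true_r

    r, (cy, cx) = spot_radius(spot)
    z_est = np.interp(r, radius_cal, z_cal)               # invert the monotonic calibration
    print(f"lateral centre: ({cy:.2f}, {cx:.2f}) px, estimated z: {z_est:.0f} nm")
    ```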

  6. Random-profiles-based 3D face recognition system.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation.

  7. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  8. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify a device's capability of capturing a real scene. For this reason, several national and international organizations have been developing verification protocols over the last ten years. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also covers laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.
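
    An illustrative sketch (not a standardized protocol) of one of the derived checks mentioned above: a least-squares sphere fit to a measured point cloud followed by analysis of the residual deviations against a certified reference radius; the synthetic scan, noise level and reference size below are assumptions:

    ```python
    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit; returns (centre, radius)."""
        A = np.c_[2.0 * points, np.ones(len(points))]
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre = sol[:3]
        radius = np.sqrt(sol[3] + centre @ centre)
        return centre, radius

    # Synthetic "scan" of a certified 25 mm sphere with 0.05 mm measurement noise.
    rng = np.random.default_rng(1)
    u, v = rng.uniform(0, np.pi, 2000), rng.uniform(0, 2 * np.pi, 2000)
    pts = 12.5 * np.c_[np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)]
    pts += np.array([10.0, -3.0, 40.0]) + 0.05 * rng.standard_normal(pts.shape)

    centre, radius = fit_sphere(pts)
    residuals = np.linalg.norm(pts - centre, axis=1) - radius
    print(f"radius error: {radius - 12.5:+.4f} mm, RMS form deviation: {residuals.std():.4f} mm")
    ```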

  9. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  10. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only where necessary to ensure good performance.

  11. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia.

  12. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

    Micro (μ-) axial tomography is a challenging technique in microscopy which improves quantitative imaging especially in cytogenetic applications by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement of the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition depending on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the
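
    The least-squares transformation step can be illustrated with the standard Kabsch/Procrustes solution for a rigid transform between matched centres of gravity (a sketch that assumes correspondences are already given; the paper establishes them with a weighted bipartite graph, and its exact transformation model may differ):

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Return rotation R and translation t minimizing ||R @ src.T + t - dst.T||^2."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Centres of gravity of fluorescent beads in the untilted and tilted views (synthetic).
    rng = np.random.default_rng(2)
    beads = rng.uniform(-5, 5, size=(30, 3))
    angle = np.deg2rad(20.0)
    R_true = np.array([[1, 0, 0],
                       [0, np.cos(angle), -np.sin(angle)],
                       [0, np.sin(angle),  np.cos(angle)]])
    tilted = beads @ R_true.T + np.array([0.3, -0.1, 0.8]) + 0.01 * rng.standard_normal((30, 3))

    R, t = rigid_align(beads, tilted)
    aligned = beads @ R.T + t
    print(f"residual RMS after alignment: {np.linalg.norm(aligned - tilted, axis=1).mean():.4f}")
    ```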

  13. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations that enhance the end-users' interactions.

  14. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  17. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  18. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  19. Surface reconstruction for 3D remote sensing

    NASA Astrophysics Data System (ADS)

    Baran, Matthew S.; Tutwiler, Richard L.; Natale, Donald J.

    2012-05-01

    This paper examines the performance of the local level set method on the surface reconstruction problem for unorganized point clouds in three dimensions. Many laser-ranging, stereo, and structured light devices produce three-dimensional information in the form of unorganized point clouds. The point clouds are sampled from surfaces embedded in R^3 from the viewpoint of a camera focal plane or laser receiver. The reconstruction of these objects in the form of a triangulated geometric surface is an important step in computer vision and image processing. The local level set method uses a Hamilton-Jacobi partial differential equation to describe the motion of an implicit surface in three-space. An initial surface which encloses the data is allowed to move until it becomes a smooth fit of the unorganized point data. A 3D point cloud test suite was assembled from publicly available laser-scanned object databases. The test suite exhibits nonuniform sampling rates and various noise characteristics to challenge the surface reconstruction algorithm. Quantitative metrics are introduced to capture the accuracy and efficiency of surface reconstruction on the degraded data. The results characterize the robustness of the level set method for surface reconstruction as applied to 3D remote sensing.
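
    For readers unfamiliar with the formulation, a generic Hamilton-Jacobi level-set evolution of the kind referred to above can be written as follows (the exact speed function used in the paper is not reproduced here; F is typically built from the distance to the unorganized data points plus a curvature-based smoothing term, and the reconstructed surface is carried as the zero level set of phi):

    ```latex
    \[
      \frac{\partial \phi}{\partial t} + F(\mathbf{x}, \kappa)\,\lvert \nabla \phi \rvert = 0,
      \qquad
      \Gamma(t) = \{\, \mathbf{x} \in \mathbb{R}^{3} : \phi(\mathbf{x}, t) = 0 \,\},
      \qquad
      \kappa = \nabla \cdot \frac{\nabla \phi}{\lvert \nabla \phi \rvert}.
    \]
    ```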

  20. Development of a 3D-AFM for true 3D measurements of nanostructures

    NASA Astrophysics Data System (ADS)

    Dai, Gaoliang; Häßler-Grohne, Wolfgang; Hüser, Dorothee; Wolff, Helmut; Danzebrink, Hans-Ulrich; Koenders, Ludger; Bosse, Harald

    2011-09-01

    The development of advanced lithography requires highly accurate 3D metrology methods for small line structures of both wafers and photomasks. This paper introduces the development of a new 3D atomic force microscope (3D-AFM) with vertical and torsional oscillation modes. In its configuration, the AFM probe is oscillated using two piezo actuators driven at the vertical and torsional resonance frequencies of the cantilever. In such a way, the AFM tip can probe the surface with both a vertical and a lateral oscillation, offering high 3D probing sensitivity. In addition, a so-called vector approach probing (VAP) method has been applied. The sample is measured point-by-point using this method. At each probing point, the tip is approached towards the surface until the desired tip-sample interaction is detected and then immediately withdrawn from the surface. Compared to conventional AFMs, where the tip is kept continuously in interaction with the surface, the tip-sample interaction time using the VAP method is greatly reduced and consequently the tip wear is reduced. Preliminary experimental results show promising performance of the developed system. A measurement of a line structure of 800 nm height employing a super sharp AFM tip could be performed with a repeatability of its 3D profiles of better than 1 nm (p-v). A line structure of a Physikalisch-Technische Bundesanstalt photomask with a nominal width of 300 nm has been measured using a flared-tip AFM probe. The repeatability of the middle CD values reaches 0.28 nm (1σ). A long-term stability investigation shows that the 3D-AFM has a high stability of better than 1 nm over 197 measurements taken over 30 h, which also confirms the very low tip wear.
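
    A purely illustrative sketch of the approach-detect-withdraw idea behind vector approach probing (the surface model, interaction signal and all parameters are hypothetical; the actual instrument superimposes vertical and torsional cantilever oscillations and can probe along arbitrary approach directions):

    ```python
    import numpy as np

    def surface_height(x):
        """Hypothetical 800 nm line structure (heights in nm)."""
        return 800.0 if 200.0 <= x <= 500.0 else 0.0

    def interaction(z_tip, x):
        """Toy interaction signal: rises steeply as the tip nears the surface."""
        gap = max(z_tip - surface_height(x), 1e-3)
        return 1.0 / gap

    threshold, retract = 2.0, 100.0             # arbitrary units / nm
    profile = []
    for x in np.arange(0.0, 700.0, 10.0):       # lateral probing positions in nm
        z = 1200.0                              # start well above the surface
        while interaction(z, x) < threshold:    # approach until interaction is detected
            z -= 0.5
        profile.append((x, z))                  # record the contact height...
        z += retract                            # ...then immediately withdraw the tip
    print(profile[:3])
    ```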

  1. Complex light in 3D printing

    NASA Astrophysics Data System (ADS)

    Moser, Christophe; Delrot, Paul; Loterie, Damien; Morales Delgado, Edgar; Modestino, Miguel; Psaltis, Demetri

    2016-03-01

    3D printing as a tool to generate complicated shapes from CAD files, on demand, with different materials from plastics to metals, is shortening product development cycles, enabling new design possibilities and can provide a means to manufacture small volumes cost effectively. There are many technologies for 3D printing and the majority uses light in the process. In one process (Multi-jet modeling, polyjet, printoptical©), a printhead prints layers of ultra-violet curable liquid plastic. Here, each nozzle deposits the material, which is then flooded by a UV curing lamp to harden it. In another process (Stereolithography), a focused UV laser beam provides both the spatial localization and the photo-hardening of the resin. Similarly, laser sintering works with metal powders by locally melting the material point by point and layer by layer. When the laser delivers ultra-fast focused pulses, nonlinear effects polymerize the material with high spatial resolution. In these processes, light is either focused in one spot and the part is made by scanning it, or the light is expanded and covers a wide area for photopolymerization. Hence a fairly "simple" light field is used in both cases. Here, we give examples of how "complex light" brings an additional level of complexity to 3D printing.

  2. 3D microscopy for microfabrication quality control

    NASA Astrophysics Data System (ADS)

    Muller, Matthew S.; De Jean, Paul D.

    2015-03-01

    A novel stereo microscope adapter, the SweptVue, has been developed to rapidly perform quantitative 3D microscopy for cost-effective microfabrication quality control. The SweptVue adapter uses the left and right stereo channels of an Olympus SZX7 stereo microscope for sample illumination and detection, respectively. By adjusting the temporal synchronization between the illumination lines projected from a Texas Instruments DLP LightCrafter and the rolling shutter on a Point Grey Flea3 CMOS camera, micrometer-scale depth features can be easily and rapidly measured at up to 5 μm resolution on a variety of microfabricated samples. In this study, the build performance of an industrial-grade Stratasys Object 300 Connex 3D printer was examined. Ten identical parts were 3D printed with a lateral and depth resolution of 42 μm and 30 μm, respectively, using both a rigid and a flexible Stratasys PolyJet material. Surface elevation precision and accuracy were examined over multiple regions of interest on plateau and hemispherical surfaces. In general, the dimensions of the examined features were reproducible across the parts built using both materials. However, significant systematic lateral and height build errors were discovered, such as decreased heights when approaching the edges of plateaus, inaccurate height steps, and poor tolerances on channel width. For 3D printed parts to be used in functional applications requiring micro-scale tolerances, they need to conform to specification. Despite appearing identical, our 3D printed parts were found to have a variety of defects that the SweptVue adapter quickly revealed.

  3. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  4. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  5. 3D-printed microfluidic automation.

    PubMed

    Au, Anthony K; Bhattacharjee, Nirveek; Horowitz, Lisa F; Chang, Tim C; Folch, Albert

    2015-04-21

    Microfluidic automation - the automated routing, dispensing, mixing, and/or separation of fluids through microchannels - generally remains a slowly-spreading technology because device fabrication requires sophisticated facilities and the technology's use demands expert operators. Integrating microfluidic automation in devices has involved specialized multi-layering and bonding approaches. Stereolithography is an assembly-free, 3D-printing technique that is emerging as an efficient alternative for rapid prototyping of biomedical devices. Here we describe fluidic valves and pumps that can be stereolithographically printed in optically-clear, biocompatible plastic and integrated within microfluidic devices at low cost. User-friendly fluid automation devices can be printed and used by non-engineers as replacement for costly robotic pipettors or tedious manual pipetting. Engineers can manipulate the designs as digital modules into new devices of expanded functionality. Printing these devices only requires the digital file and electronic access to a printer.

  6. 3D-printed microfluidic automation.

    PubMed

    Au, Anthony K; Bhattacharjee, Nirveek; Horowitz, Lisa F; Chang, Tim C; Folch, Albert

    2015-04-21

    Microfluidic automation - the automated routing, dispensing, mixing, and/or separation of fluids through microchannels - generally remains a slowly-spreading technology because device fabrication requires sophisticated facilities and the technology's use demands expert operators. Integrating microfluidic automation in devices has involved specialized multi-layering and bonding approaches. Stereolithography is an assembly-free, 3D-printing technique that is emerging as an efficient alternative for rapid prototyping of biomedical devices. Here we describe fluidic valves and pumps that can be stereolithographically printed in optically-clear, biocompatible plastic and integrated within microfluidic devices at low cost. User-friendly fluid automation devices can be printed and used by non-engineers as replacement for costly robotic pipettors or tedious manual pipetting. Engineers can manipulate the designs as digital modules into new devices of expanded functionality. Printing these devices only requires the digital file and electronic access to a printer. PMID:25738695

  7. 3D-Printed Microfluidic Automation

    PubMed Central

    Au, Anthony K.; Bhattacharjee, Nirveek; Horowitz, Lisa F.; Chang, Tim C.; Folch, Albert

    2015-01-01

    Microfluidic automation – the automated routing, dispensing, mixing, and/or separation of fluids through microchannels – generally remains a slowly-spreading technology because device fabrication requires sophisticated facilities and the technology’s use demands expert operators. Integrating microfluidic automation in devices has involved specialized multi-layering and bonding approaches. Stereolithography is an assembly-free, 3D-printing technique that is emerging as an efficient alternative for rapid prototyping of biomedical devices. Here we describe fluidic valves and pumps that can be stereolithographically printed in optically-clear, biocompatible plastic and integrated within microfluidic devices at low cost. User-friendly fluid automation devices can be printed and used by non-engineers as replacement for costly robotic pipettors or tedious manual pipetting. Engineers can manipulate the designs as digital modules into new devices of expanded functionality. Printing these devices only requires the digital file and electronic access to a printer. PMID:25738695

  8. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  9. Unit cell geometry of 3-D braided structures

    NASA Technical Reports Server (NTRS)

    Du, Guang-Wu; Ko, Frank K.

    1993-01-01

    The traditional approach used in modeling of composites reinforced by three-dimensional (3-D) braids is to assume a simple unit cell geometry of a 3-D braided structure with known fiber volume fraction and orientation. In this article, we first examine 3-D braiding methods in the light of braid structures, followed by the development of geometric models for 3-D braids using a unit cell approach. The unit cell geometry of 3-D braids is identified and the relationship of structural parameters such as yarn orientation angle and fiber volume fraction with the key processing parameters established. The limiting geometry has been computed by establishing the point at which yarns jam against each other. Using this factor makes it possible to identify the complete range of allowable geometric arrangements for 3-D braided preforms. This identified unit cell geometry can be translated to mechanical models which relate the geometrical properties of fabric preforms to the mechanical responses of composite systems.

  10. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency and demonstrate that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
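    To make the orthonormal-basis idea above concrete, the sketch below builds a toy stack of depth-variant PSFs, computes principal components with an SVD, and reconstructs each PSF from a few components. The Gaussian PSFs, depth range, and component count are illustrative assumptions, not the authors' data or implementation.

```python
import numpy as np

def toy_psf(shape, sigma):
    """Illustrative 2D Gaussian stand-in for a depth-variant PSF slice."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

# Sample the depth-variant PSF at a handful of depths (aberration widens it).
shape = (33, 33)
depths = np.linspace(0.0, 10.0, 11)                     # micrometres, hypothetical
psfs = np.stack([toy_psf(shape, 1.5 + 0.2 * d) for d in depths])   # (n_depth, H, W)

# PCA via SVD of the mean-subtracted, flattened PSF stack.
flat = psfs.reshape(len(depths), -1)
mean_psf = flat.mean(axis=0)
u, s, vt = np.linalg.svd(flat - mean_psf, full_matrices=False)

n_comp = 3                                              # a small number of components
basis = vt[:n_comp]                                     # orthonormal basis images
coeffs = (flat - mean_psf) @ basis.T                    # depth-dependent coefficients

# Reconstruct the PSF at any sampled depth from the truncated basis.
approx = (coeffs @ basis + mean_psf).reshape(psfs.shape)
print(f"max reconstruction error with {n_comp} components: {np.abs(approx - psfs).max():.2e}")
```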

  11. 3D hyperpolarized He-3 MRI of ventilation using a multi-echo projection acquisition

    PubMed Central

    Holmes, James H.; O’Halloran, Rafael L.; Brodsky, Ethan K.; Jung, Youngkyoo; Block, Walter F.; Fain, Sean B.

    2010-01-01

    A method is presented for high resolution 3D imaging of the whole lung using inhaled hyperpolarized (HP) He-3 MR with multiple half-echo radial trajectories that can accelerate imaging through undersampling. A multiple half-echo radial trajectory can be used to reduce the level of artifact for undersampled 3D projection reconstruction (PR) imaging by increasing the amount of data acquired per unit time for HP He-3 lung imaging. The point spread functions (PSFs) for breath-held He-3 MRI using multiple half-echo trajectories were evaluated using simulations to predict the effects of T2* and gas diffusion on image quality. Results from PSF simulations were consistent with imaging results in volunteer studies showing improved image quality with increasing number of echoes using up to 8 half-echoes. The 8 half-echo acquisition is shown to accommodate lost breath-holds as short as 6 s using a retrospective reconstruction at reduced resolution as well as to allow reduced breath-hold time compared to an equivalent Cartesian trajectory. Furthermore, preliminary results from a 3D dynamic inhalation-exhalation maneuver are demonstrated using the 8 half-echo trajectory. Results demonstrate the first high resolution 3D PR imaging of ventilation and respiratory dynamics in humans using HP He-3 MR. PMID:18429034

  12. 3D Geomodeling of the Venezuelan Andes

    NASA Astrophysics Data System (ADS)

    Monod, B.; Dhont, D.; Hervouet, Y.; Backé, G.; Klarica, S.; Choy, J. E.

    2010-12-01

    The crustal structure of the Venezuelan Andes is investigated using a 3D geomodel. The method integrates surface structural data, remote sensing imagery, crustal-scale balanced cross-sections, earthquake locations and focal mechanism solutions to reconstruct fault surfaces at the scale of the mountain belt in a 3D environment. The model proves to be essential for understanding the basic processes of both the orogenic float and the tectonic escape involved in the Plio-Quaternary evolution of the orogen. The reconstruction of the Bocono and Valera faults reveals the 3D shape of the Trujillo block, whose geometry can be compared to a boat bow floating over a mid-crustal detachment horizon emerging at the Bocono-Valera triple junction. Motion of the Trujillo block is accompanied by generalized extension in the upper crust accommodated by normal faults with listric geometries such as the Motatan, Momboy and Tuñame faults. Extension may be related to the lateral spreading of the upper crust, suggesting that gravity forces play an important role in the escape process.

  13. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX double the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated in standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. PMID:25528570

  14. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
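    The refinement stage of the pipeline above is a standard rigid ICP. The sketch below is a minimal point-to-point ICP in NumPy/SciPy (closest points via a k-d tree, rigid update via the SVD/Kabsch solution); the 3D-SIFT/FPFH/SAC-IA stages are not reproduced, and the initial transform is assumed to come from that coarse feature-based alignment.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, init=np.eye(4), max_iter=50, tol=1e-6):
    """Refine a coarse rigid alignment of `source` onto `target` (N x 3 arrays)."""
    tree = cKDTree(target)
    T = init.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        moved = source @ T[:3, :3].T + T[:3, 3]
        dist, idx = tree.query(moved)        # closest-point correspondences
        R, t = best_rigid_transform(moved, target[idx])
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T, err
```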

  15. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.

  16. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human being's history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  17. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.

  18. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the aim of resembling the appearance and mobility of a real human hand as closely as possible while requiring no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand (without actuators) was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.

  19. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  20. Towards a noninvasive intracranial tumor irradiation using 3d optical imaging and multimodal data registration.

    PubMed

    Posada, R; Daul, Ch; Wolf, D; Aletti, P

    2007-01-01

    Conformal radiotherapy (CRT) results in high-precision tumor volume irradiation. In fractionated radiotherapy (FRT), lesions are irradiated in several sessions so that healthy neighbouring tissues are better preserved than when treatment is carried out in one fraction. In the case of intracranial tumors, classical methods of patient positioning in the irradiation machine coordinate system are invasive and only allow for CRT in one irradiation session. This contribution presents a noninvasive positioning method representing a first step towards the combination of CRT and FRT. The 3D data used for the positioning consist of point clouds spread over the patient's head (CT data usually acquired during treatment) and points distributed over the patient's face, which are acquired with a structured-light sensor fixed in the therapy room. The geometrical transformation linking the coordinate systems of the diagnosis device (CT modality) and the 3D sensor of the therapy room (visible-light modality) is obtained by registering the surfaces represented by the two 3D point sets. The geometrical relationship between the coordinate systems of the 3D sensor and the irradiation machine is given by a calibration of the sensor position in the therapy room. The global transformation, computed from the two previous transformations, is sufficient to predict the tumor position in the irradiation machine coordinate system from the corresponding position in the CT coordinate system. Results obtained for a phantom show that the mean positioning error of tumors at the treatment machine isocentre is 0.4 mm. Tests performed with human data proved that the registration algorithm is accurate (0.1 mm mean distance between homologous points) and robust even under facial expression changes.

  1. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly, and stereoscopic movies are nothing new to people. Stereoscopic systems date back to 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. Symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; it is therefore important for society to consider the safety of viewing virtual 3D content. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence, and we used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. The time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.

  2. [Real time 3D echocardiography]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to improve its user-friendliness. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  3. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to improve its user-friendliness. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  4. DYNA3D. Explicit 3-d Hydrodynamic FEM Program

    SciTech Connect

    Whirley, R.G.; Englemann, B.E.

    1993-11-30

    DYNA3D is an explicit, three-dimensional, finite element program for analyzing the large deformation dynamic response of inelastic solids and structures. DYNA3D contains 30 material models and 10 equations of state (EOS) to cover a wide range of material behavior. The material models implemented are: elastic, orthotropic elastic, kinematic/isotropic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, Blatz-Ko rubber, high explosive burn, hydrodynamic without deviatoric stresses, elastoplastic hydrodynamic, temperature-dependent elastoplastic, isotropic elastoplastic, isotropic elastoplastic with failure, soil and crushable foam with failure, Johnson/Cook plasticity model, pseudo TENSOR geological model, elastoplastic with fracture, power law isotropic plasticity, strain rate dependent plasticity, rigid, thermal orthotropic, composite damage model, thermal orthotropic with 12 curves, piecewise linear isotropic plasticity, inviscid two invariant geologic cap, orthotropic crushable model, Mooney-Rivlin rubber, resultant plasticity, closed form update shell plasticity, and Frazer-Nash rubber model. The hydrodynamic material models determine only the deviatoric stresses. Pressure is determined by one of 10 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, tabulated, and TENSOR pore collapse. DYNA3D generates three binary output databases. One contains information for complete states at infrequent intervals; 50 to 100 states is typical. The second contains information for a subset of nodes and elements at frequent intervals; 1,000 to 10,000 states is typical. The last contains interface data for contact surfaces.

  5. Sequential assembly of 3D perfusable microfluidic hydrogels.

    PubMed

    He, Jiankang; Zhu, Lin; Liu, Yaxiong; Li, Dichen; Jin, Zhongmin

    2014-11-01

    Bottom-up tissue engineering provides a promising way to recreate complex structural organizations of native organs in artificial constructs by assembling functional repeating modules. However, it is challenging for current bottom-up strategies to simultaneously produce a controllable and immediately perfusable microfluidic network in modularly assembled 3D constructs. Here we presented a bottom-up strategy to produce perfusable microchannels in 3D hydrogels by sequentially assembling microfluidic modules. The effects of agarose-collagen composition on microchannel replication and 3D assembly of hydrogel modules were investigated. The unique property of predefined microchannels in transporting fluids within 3D assemblies was evaluated. Endothelial cells were incorporated into the microfluidic network of 3D hydrogels for dynamic culture in a house-made bioreactor system. The results indicated that the sequential assembly method could produce interconnected 3D predefined microfluidic networks in optimized agarose-collagen hydrogels, which were fully perfusable and successfully functioned as fluid pathways to facilitate the spreading of endothelial cells. We envision that the presented method could be potentially used to engineer 3D vascularized parenchymal constructs by encapsulating primary cells in bulk hydrogels and incorporating endothelial cells in predefined microchannels. PMID:25027302

  6. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
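    The parameter-sweep half of the description can be illustrated with a few lines of NumPy/SciPy: denoise a synthetic noisy volume over a range of filter settings and keep the setting with the lowest mean squared error against the noiseless reference. A Gaussian filter stands in for the GPU bilateral/anisotropic-diffusion/non-local-means kernels, so everything here is an assumption about the workflow, not the software's actual code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Hypothetical data: a noiseless reference volume and a noisy acquisition.
rng = np.random.default_rng(0)
reference = gaussian_filter(rng.random((64, 64, 64)), sigma=2.0)
noisy = reference + rng.normal(scale=0.05, size=reference.shape)

# Sweep a denoising parameter and keep the setting that minimizes MSE
# against the reference (a Gaussian filter stands in for the GPU kernels).
best = min(
    ((sigma, mse(gaussian_filter(noisy, sigma), reference))
     for sigma in np.linspace(0.2, 3.0, 15)),
    key=lambda pair: pair[1],
)
print(f"best sigma = {best[0]:.2f}, MSE = {best[1]:.4g}")
```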

  7. Viewing 3D MRI data in perspective

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Chin, Chialei

    2000-10-01

    In medical imaging applications, a 3D morphological data set is often presented in 2D format without considering visual perspective. Without perspective, the resulting image can be counterintuitive to natural human visual perception, especially in the setting of an MR-guided neurosurgical procedure where depth perception is crucial. To address this problem we have developed a new projection scheme that incorporates linear perspective transformation in various image reconstructions, including MR angiographic projection. In the scheme, an imaginary picture plane (PP) can be placed within or immediately in front of a 3D object, and the stand point (SP) of an observer is fixed at a normal viewing distance of 25 cm in front of the picture plane. A clinical 3D angiography data set (TR/TF/Flip = 30/5.4/15) was obtained from a patient head on a 1.5T MR scanner in 4 min 10 sec (87.5% rectangular, 52% scan). The length, width and height of the image volume were 200 mm, 200 mm and 72.4 mm respectively, corresponding to an effective matrix size of 236x512x44 in transverse orientation (512x512x88 after interpolation). The maximum intensity projection (MaxIP) algorithm was applied along the viewing rays of the perspective projection rather than the parallel projection. Thirty-six consecutive views were obtained at 10-degree intervals azimuthally. When displayed in cine mode, the new MaxIP images appeared realistic, with improved depth perception.

  8. FELIX: a volumetric 3D laser display

    NASA Astrophysics Data System (ADS)

    Bahr, Detlef; Langhans, Knut; Gerken, Martin; Vogt, Carsten; Bezecny, Daniel; Homann, Dennis

    1996-03-01

    In this paper, an innovative approach to true 3D image presentation in a space-filling, volumetric laser display is described. The introduced prototype system is based on a moving target screen that sweeps the display volume. The net result is the optical equivalent of a 3D array of image points illuminated to form a model of the object, which occupies a physical space. Wireframe graphics are presented within the display volume, which a group of people can walk around and examine simultaneously from nearly any orientation and without any visual aids. Beyond the detailed vector-scanning mode, a raster-scanned system and a combination of both techniques are under development. The volumetric 3D laser display technology for true reproduction of spatial images can tremendously improve the viewer's ability to interpret data and to reliably determine distance, shape and orientation. Possible applications for this development range from air traffic control, where moving blips of light represent individual aircraft in a true-to-scale projected airspace of an airport, to various medical applications (e.g. electrocardiography, computed tomography), to entertainment and education visualization, as well as imaging in the field of engineering and computer-aided design.

  9. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-01

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices. PMID:27321137

  10. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
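    The similarity metric described above is built on point-to-line-segment distances. A minimal sketch of that primitive (the names and the 2D example are illustrative only):

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b (any dimension)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    if denom == 0.0:                                  # degenerate segment: a single point
        return float(np.linalg.norm(p - a))
    t = np.clip((p - a) @ ab / denom, 0.0, 1.0)       # projection clamped to the segment
    return float(np.linalg.norm(p - (a + t * ab)))

# Example: distance from an edge point to a projected model segment.
print(point_to_segment_distance([2.0, 1.0], [0.0, 0.0], [4.0, 0.0]))  # -> 1.0
```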

  11. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter. PMID:16238061

  12. Fallon FORGE 3D Geologic Model

    DOE Data Explorer

    Doug Blankenship

    2016-03-01

    An x,y,z scattered data file for the 3D geologic model of the Fallon FORGE site. Model created in Earthvision by Dynamic Graphic Inc. The model was constructed with a grid spacing of 100 m. Geologic surfaces were extrapolated from the input data using a minimum tension gridding algorithm. The data file is tabular data in a text file, with lithology data associated with X,Y,Z grid points. All the relevant information is in the file header (the spatial reference, the projection etc.) In addition all the fields in the data file are identified in the header.
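    A file of this kind can typically be loaded with a few lines of pandas; the sketch below is a hypothetical reader, since the actual column names and header length are specified in the file's own header rather than here.

```python
import pandas as pd

# Hypothetical reader for the x,y,z scattered-data file described above.
# The real column names and header length are given in the file's own header;
# the ones used here are placeholders.
def load_geologic_model(path, header_rows=10):
    df = pd.read_csv(
        path,
        sep=r"\s+",                  # whitespace-delimited tabular text
        skiprows=header_rows,        # skip the metadata header
        names=["x", "y", "z", "lithology"],
    )
    return df

# Example use: group grid points (100 m spacing) by lithologic unit.
# model = load_geologic_model("fallon_forge_model.txt")
# print(model.groupby("lithology").size())
```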

  13. 3D MHD Simulations of Tokamak Disruptions

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Stuber, James

    2014-10-01

    Two disruption scenarios are modeled numerically using the CORSICA 2D equilibrium and NIMROD 3D MHD codes. The work follows the simulations of pressure-driven modes in DIII-D and VDEs in ITER. The aim of the work is to provide starting points for simulation of tokamak disruption mitigation techniques currently in the CDR phase for ITER. Pressure-driven instability growth rates previously observed in simulations of DIII-D are verified; halo and Hiro currents produced during vertical displacements are observed in simulations of ITER with the implementation of resistive walls in NIMROD. We discuss plans to exercise new code capabilities and validation.

  14. A method for the calibration of 3D ultrasound transducers

    NASA Astrophysics Data System (ADS)

    Hastenteufel, Mark; Mottl-Link, Sibylle; Wolf, Ivo; de Simone, Raffaele; Meinzer, Hans-Peter

    2003-05-01

    Background: Three-dimensional (3D) ultrasound has great potential in medical diagnostics. However, there are also some limitations of 3D ultrasound; e.g., in some situations morphology cannot be imaged accurately due to acoustic shadows. Acquiring 3D datasets from multiple positions can overcome some of these limitations. Prior to that, a calibration of the ultrasound probe is necessary. Most calibration methods described rely on two-dimensional data. We describe a calibration method that uses 3D data. Methods: We have developed a 3D calibration method based on single-point cross-wire calibration, using registration techniques for automatic detection of the cross centers. For the calibration, a cross consisting of three orthogonal wires is imaged. A model-to-image registration method is used to determine the cross center. Results: Due to the use of 3D data, fewer acquisitions and no special protocols are necessary, and the influence of noise is reduced. By means of the registration method, the time-consuming steps of image plane alignment and manual cross-center determination become dispensable. Conclusion: A 3D calibration method for ultrasound transducers is described. The calibration method is the basis for extending state-of-the-art 3D ultrasound devices, i.e., to acquire multiple 3D datasets, either morphological or functional (Doppler).

  15. Complete 3D model reconstruction from multiple views

    NASA Astrophysics Data System (ADS)

    Lin, Huei-Yung; Subbarao, Murali; Park, Soon-Yong

    2002-02-01

    New algorithms are presented for automatically acquiring the complete 3D model of single and multiple objects using rotational stereo. The object is placed on a rotation stage. Stereo images for several viewing directions are taken by rotating the object by known angles. Partial 3D shapes and the corresponding texture maps are obtained using rotational stereo and shape from focus. First, for each view, shape from focus is used to obtain a rough 3D shape and the corresponding focused image. Then, the rough 3D shape and focused images are used in rotational stereo to obtain a more accurate measurement of 3D shape. The rotation axis is calibrated using three fixed points on a planar object and refined during surface integration. The complete 3D model is reconstructed by integrating partial 3D shapes and the corresponding texture maps of the object from multiple views. New algorithms for range image registration, surface integration and texture mapping are presented. Our method can generate 3D models very fast and preserve the texture of objects. A new prototype vision system named Stonybrook Vision System 2 (SVIS-2) has been built and used in the experiments. In the experiments, 4 viewing directions at 90-degree intervals are used. SVIS-2 can acquire the 3D model of objects within a 250 mm x 250 mm x 250 mm cubic workspace placed about 750 mm from the camera. Both computational algorithms and experimental results on several objects are presented.

  16. Light field display and 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or light field theory, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens-array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to adjust the lens-array plate to the flat display on which the light field data are displayed.
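    One standard real-domain decoding of lens-array light field data is shift-and-sum refocusing: each sub-aperture view is translated in proportion to its lens offset and the views are averaged. The sketch below illustrates that idea; the exact arithmetic used by the authors' display system is not specified in the abstract, so this is an assumption.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subaperture_images, uv_coords, alpha):
    """Shift-and-sum refocusing of a light field.

    subaperture_images : array (N, H, W), one view per lens-array position
    uv_coords          : array (N, 2), lens positions relative to the array centre
    alpha              : refocus parameter (relative focal depth); 1.0 keeps
                         the original focal plane
    """
    acc = np.zeros_like(subaperture_images[0], dtype=float)
    for img, (u, v) in zip(subaperture_images, uv_coords):
        # Each view is translated in proportion to its lens offset, then summed.
        offset = (v * (1.0 - 1.0 / alpha), u * (1.0 - 1.0 / alpha))
        acc += nd_shift(img.astype(float), offset, order=1, mode="nearest")
    return acc / len(subaperture_images)
```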

  17. 3D Wilson cycle: structural inheritance and subduction polarity reversals

    NASA Astrophysics Data System (ADS)

    Beaussier, Stephane; Gerya, Taras; Burg, Jean-Pierre

    2016-04-01

    Many orogens display along-strike variations in their orogenic wedge geometry. The Alps, for instance, are an example of lateral changes in subducting lithosphere polarity. High-resolution tomography has shown that the southeast-dipping European lithosphere is separated from the northeast-dipping Adriatic lithosphere by a narrow transition zone at about the "Judicarian" line (Kissling et al. 2006). The formation of such 3D variations remains conjectural. We investigate the conditions that can spontaneously induce such lithospheric structures and intend to identify the main parameters controlling their formation and geometry. Using the 3D thermo-mechanical code I3ELVIS (Gerya and Yuen 2007), we modelled a Wilson cycle starting from a continental lithosphere in an extensional setting, resulting in continental breakup and oceanic spreading. At a later stage, divergence is gradually reversed to convergence, which induces subduction of the oceanic lithosphere formed during spreading. In this model, all lateral and longitudinal structures of the lithospheres are generated self-consistently and are consequences of the initial continental structure, tectono-magmatic inheritance, and material rheology. Our numerical simulations point out the control of the rheological parameters defining the brittle/plastic yielding conditions of the lithosphere. Formation of several domains of opposing subduction polarity is facilitated by wide and weak oceanic lithospheres. Furthermore, contrasts in strength between the continental and oceanic lithosphere, as well as the angle between the plate suture and the shortening direction, have a second-order effect on the lateral geometry of the subduction zone. In our numerical experiments, systematic lateral changes in subducting lithosphere polarity during subduction initiation form spontaneously, suggesting an intrinsic physical origin for this phenomenon. Further studies are necessary to understand why this feature, observed

  18. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  19. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. In keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  20. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  1. High resolution 3D fluorescence tomography using ballistic photons

    NASA Astrophysics Data System (ADS)

    Zheng, Jie; Nouizi, Farouk; Cho, Jaedu; Kwong, Jessica; Gulsen, Gultekin

    2015-03-01

    We are developing a ballistic-photon-based approach for improving the spatial resolution of fluorescence tomography using time-domain measurements. This approach uses early-photon information contained in measured time-of-flight distributions originating from fluorescence emission. The time point spread functions (TPSF) from both the excitation light and the emission light are acquired with a gated single-photon avalanche detector (SPAD) and time-correlated single photon counting after a short laser pulse. To determine the ballistic photons for reconstruction, the lifetime of the fluorophore and the time gate from the excitation profiles will be used for calibration, and the time gate of the fluorescence profile can then be defined by a simple time convolution. By mimicking first-generation CT data acquisition, the source-detector pair will translate across and also rotate around the subject. The measurement from each source-detector position will be reshaped into a histogram that can be used by a simple back-projection algorithm in order to reconstruct high-resolution fluorescence images. Finally, from these 2D sectional slices, a 3D inclusion can be reconstructed accurately. To validate the approach, simulation of light transport is performed for biological tissue-like media with an embedded fluorescent inclusion by solving the diffusion equation with the Finite Element Method using COMSOL Multiphysics. The reconstruction results from simulation studies have confirmed that this approach drastically improves the spatial resolution of fluorescence tomography. Moreover, all the results have shown the feasibility of this technique for high-resolution small-animal imaging up to several centimeters.
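    The early-photon selection described above can be sketched as simple operations on the TPSF histograms: model the emission TPSF as the excitation TPSF convolved with a mono-exponential decay of the fluorophore lifetime, then sum the counts inside an early time gate. The gate placement and the convolution model are generic assumptions, not the authors' calibration procedure.

```python
import numpy as np

def emission_tpsf_model(excitation_tpsf, time_axis, lifetime):
    """Model the emission TPSF as the excitation TPSF convolved with a
    mono-exponential fluorescence decay of the given lifetime."""
    decay = np.exp(-(time_axis - time_axis[0]) / lifetime)
    decay /= decay.sum()
    return np.convolve(excitation_tpsf, decay)[: len(time_axis)]

def early_photon_counts(tpsf, time_axis, gate_start, gate_width):
    """Sum the counts falling inside an early ('ballistic') time gate."""
    mask = (time_axis >= gate_start) & (time_axis < gate_start + gate_width)
    return float(tpsf[mask].sum())

# Toy example with picosecond bins (all numbers are illustrative).
t = np.arange(0.0, 4000.0, 10.0)
excitation = np.exp(-0.5 * ((t - 500.0) / 60.0) ** 2)     # instrument response
emission = emission_tpsf_model(excitation, t, lifetime=1000.0)
early = early_photon_counts(emission, t, gate_start=450.0, gate_width=100.0)
```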

  2. Spatially varying regularization of deconvolution in 3D microscopy.

    PubMed

    Seo, J; Hwang, S; Lee, J-M; Park, H

    2014-08-01

    Confocal microscopy has become an essential tool to explore biospecimens in 3D. Confocal microscopy images are still degraded by out-of-focus blur and Poisson noise. Many deconvolution methods, including the Richardson-Lucy (RL) method, the Tikhonov method and the split-gradient (SG) method, have been well received. The RL deconvolution method results in enhanced image quality, especially for Poisson noise. The Tikhonov deconvolution method improves on the RL method by imposing a prior model of spatial regularization, which encourages adjacent voxels to appear similar. The SG method also contains spatial regularization and is capable of incorporating many edge-preserving priors, resulting in improved image quality. The strength of spatial regularization is fixed regardless of spatial location for the Tikhonov and SG methods. The Tikhonov and SG deconvolution methods are improved upon in this study by allowing the strength of spatial regularization to differ for different spatial locations in a given image. The novel method shows improved image quality. The method was tested on phantom data for which the ground truth and the point spread function are known. A Kullback-Leibler (KL) divergence value of 0.097 is obtained when applying spatially variable regularization to the SG method, whereas a KL value of 0.409 is obtained with the Tikhonov method. In tests on real data, for which the ground truth is unknown, the reconstructed data show improved noise characteristics while maintaining important image features such as edges.
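    The idea of letting the regularization strength vary with position can be sketched with a one-step-late Richardson-Lucy update in which the quadratic penalty weight is a per-voxel map. This is a generic illustration under that assumption, not the authors' split-gradient implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_spatial_reg(image, psf, lam_map, n_iter=30):
    """Richardson-Lucy deconvolution with a spatially varying quadratic penalty.

    image   : observed blurred, noisy volume (non-negative)
    psf     : point-spread function, same number of dimensions as image
    lam_map : per-voxel regularization strength (0 gives plain RL at that voxel)
    """
    psf = psf / psf.sum()
    psf_flip = np.flip(psf)                       # mirrored PSF for the adjoint
    est = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        correction = fftconvolve(ratio, psf_flip, mode="same")
        # One-step-late update: the quadratic (Tikhonov-like) prior enters the
        # denominator, with lam_map controlling its strength at each voxel.
        est = est * correction / (1.0 + 2.0 * lam_map * est)
        est = np.clip(est, 0.0, None)
    return est
```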

  3. Microseismic network design assessment based on 3D ray tracing

    NASA Astrophysics Data System (ADS)

    Näsholm, Sven Peter; Wuestefeld, Andreas; Lubrano-Lavadera, Paul; Lang, Dominik; Kaschwich, Tina; Oye, Volker

    2016-04-01

    There is increasing demand on the versatility of microseismic monitoring networks. In early projects, being able to locate any triggers was considered a success. These early successes led to a better understanding of how to extract value from microseismic results. Today operators, regulators, and service providers work closely together in order to find the optimum network design to meet various requirements. In the current study we demonstrate an integrated and streamlined network capability assessment approach, intended for use during the microseismic network design process prior to installation. The assessments are derived from 3D ray tracing between a grid of event points and the sensors. Three aspects are discussed: 1) magnitude of completeness, or detection limit; 2) event location accuracy; and 3) ground-motion hazard. The network capability parameters 1) and 2) are estimated at all hypothetical event locations and are presented in the form of maps for a given seismic sensor coordinate scenario. In addition, the ray-tracing traveltimes permit estimation of the point-spread functions (PSFs) at the event grid points. PSFs are useful in assessing the resolution and focusing capability of the network for stacking-based event location and imaging methods. We estimate the performance for a hypothetical network case with 11 sensors. We consider the well-documented region around the San Andreas Fault Observatory at Depth (SAFOD), located north of Parkfield, California. The ray tracing is done through a detailed velocity model which covers a 26.2 by 21.2 km wide area around the SAFOD drill site with a resolution of 200 m for both the P- and S-wave velocities. Systematic network capability assessment for different sensor site scenarios prior to installation facilitates finding a final design which meets the survey objectives.

  4. Navigation in Orthognathic Surgery: 3D Accuracy.

    PubMed

    Badiali, Giovanni; Roncari, Andrea; Bianchi, Alberto; Taddei, Fulvia; Marchetti, Claudio; Schileo, Enrico

    2015-10-01

    This article aims to determine the absolute accuracy of maxillary repositioning during orthognathic surgery according to simulation-guided navigation, that is, the combination of navigation and three-dimensional (3D) virtual surgery. We retrospectively studied 15 patients treated for asymmetric dentofacial deformities at the Oral and Maxillofacial Surgery Unit of the S.Orsola-Malpighi University Hospital in Bologna, Italy, from January 2010 to January 2012. Patients were scanned with a cone-beam computed tomography before and after surgery. The virtual surgical simulation was realized with a dedicated software and loaded on a navigation system to improve intraoperative reproducibility of the preoperative planning. We analyzed the outcome following two protocols: (1) planning versus postoperative 3D surface analysis; (2) planning versus postoperative point-based analysis. For 3D surface comparison, the mean Hausdorff distance was measured, and median among cases was 0.99 mm. Median reproducibility < 1 mm was 61.88% and median reproducibility < 2 mm was 85.46%. For the point-based analysis, with sign, the median distance was 0.75 mm in the frontal axis, -0.05 mm in the caudal-cranial axis, -0.35 mm in the lateral axis. In absolute value, the median distance was 1.19 mm in the frontal axis, 0.59 mm in the caudal-cranial axis, and 1.02 mm in the lateral axis. We suggest that simulation-guided navigation makes accurate postoperative outcomes possible for maxillary repositioning in orthognathic surgery, if compared with the surgical computer-designed project realized with a dedicated software, particularly for the vertical dimension, which is the most challenging to manage.

  5. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  6. Visualization of 2-D and 3-D Tensor Fields

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1997-01-01

    In previous work we have developed a novel approach to visualizing second order symmetric 2-D tensor fields based on degenerate point analysis. At degenerate points the eigenvalues are either zero or equal to each other, and the hyper-streamlines about these points give rise to tri-sector or wedge points. These singularities and their connecting hyper-streamlines determine the topology of the tensor field. In this study we are developing new methods for analyzing and displaying 3-D tensor fields. This problem is considerably more difficult than the 2-D one, as the richness of the data set is much larger. Here we report on our progress and a novel method to find, analyze and display 3-D degenerate points. First we discuss the theory, then an application involving a 3-D tensor field, the Boussinesq problem with two forces.

  7. Visualization of 2-D and 3-D Tensor Fields

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1995-01-01

    In previous work we have developed a novel approach to visualizing second order symmetric 2-D tensor fields based on degenerate point analysis. At degenerate points the eigenvalues are either zero or equal to each other, and the hyperstreamlines about these points give rise to trisector or wedge points. These singularities and their connecting hyperstreamlines determine the topology of the tensor field. In this study we are developing new methods for analyzing and displaying 3-D tensor fields. This problem is considerably more difficult than the 2-D one, as the richness of the data set is much larger. Here we report on our progress and a novel method to find, analyze and display 3-D degenerate points. First we discuss the theory, then an application involving a 3-D tensor field, the Boussinesq problem with two forces.

  8. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delay between the UAV and the ground microphones, which is also affected by atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.

  9. Gravitation in 3D Spacetime

    NASA Astrophysics Data System (ADS)

    Laubenstein, John; Cockream, Kandi

    2009-05-01

    3D spacetime was developed by the IWPD Scale Metrics (SM) team using a coordinate system that translates n dimensions to n-1. 4-vectors are expressed in 3D along with a scaling factor representing time. Time is not orthogonal to the three spatial dimensions, but rather in alignment with an object's axis-of-motion. We have defined this effect as the object's ``orientation'' (X). The SM orientation (X) is equivalent to the orientation of the 4-velocity vector positioned tangent to its worldline, where X-1=θ+1 and θ is the angle of the 4-vector relative to the axis-of -motion. Both 4-vectors and SM appear to represent valid conceptualizations of the relationship between space and time. Why entertain SM? Scale Metrics gravity is quantized and may suggest a path for the full unification of gravitation with quantum theory. SM has been tested against current observation and is in agreement with the age of the universe, suggests a physical relationship between dark energy and dark matter, is in agreement with the accelerating expansion rate of the universe, contributes to the understanding of the fine-structure constant and provides a physical explanation of relativistic effects.

  10. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  11. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  12. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  13. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  14. GalPak3D: A Bayesian Parametric Tool for Extracting Morphokinematics of Galaxies from 3D Data

    NASA Astrophysics Data System (ADS)

    Bouché, N.; Carfantan, H.; Schroetter, I.; Michel-Dansac, L.; Contini, T.

    2015-09-01

    We present a method to constrain galaxy parameters directly from three-dimensional data cubes. The algorithm compares directly the data with a parametric model mapped in x,y,λ coordinates. It uses the spectral line-spread function and the spatial point-spread function (PSF) to generate a three-dimensional kernel whose characteristics are instrument specific or user generated. The algorithm returns the intrinsic modeled properties along with both an “intrinsic” model data cube and the modeled galaxy convolved with the 3D kernel. The algorithm uses a Markov Chain Monte Carlo approach with a nontraditional proposal distribution in order to efficiently probe the parameter space. We demonstrate the robustness of the algorithm using 1728 mock galaxies and galaxies generated from hydrodynamical simulations in various seeing conditions from 0.″6 to 1.″2. We find that the algorithm can recover the morphological parameters (inclination, position angle) to within 10% and the kinematic parameters (maximum rotation velocity) to within 20%, irrespectively of the PSF in seeing (up to 1.″2) provided that the maximum signal-to-noise ratio (S/N) is greater than ∼3 pixel‑1 and that the ratio of galaxy half-light radius to seeing radius is greater than about 1.5. One can use such an algorithm to constrain simultaneously the kinematics and morphological parameters of (nonmerging) galaxies observed in nonoptimal seeing conditions. The algorithm can also be used on adaptive optics data or on high-quality, high-S/N data to look for nonaxisymmetric structures in the residuals.
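
    The forward-model step described above, convolving an intrinsic model cube with a 3D kernel built from the spatial PSF and the spectral line-spread function, can be sketched in a few lines. The snippet below is only an illustration under simplifying assumptions (separable Gaussian PSF/LSF, widths in pixels chosen arbitrarily); it is not GalPak3D's actual implementation or API.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def convolve_model_cube(model_cube, seeing_sigma_pix, lsf_sigma_pix):
        """Convolve an (x, y, lambda) model cube with a separable 3D kernel:
        a Gaussian spatial PSF and a Gaussian spectral LSF (illustrative only)."""
        return gaussian_filter(
            model_cube,
            sigma=(seeing_sigma_pix, seeing_sigma_pix, lsf_sigma_pix))

    # Example: a point-like emission-line source in a 64 x 64 x 40 cube.
    cube = np.zeros((64, 64, 40))
    cube[32, 32, 20] = 1.0
    observed = convolve_model_cube(cube, seeing_sigma_pix=2.5, lsf_sigma_pix=1.2)
    ```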

  15. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental, however at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp. located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved upon reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed data-set was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  16. Restructuring of RELAP5-3D

    SciTech Connect

    George Mesina; Joshua Hykes

    2005-09-01

    The RELAP5-3D source code is unstructured with many interwoven logic flow paths. By restructuring the code, it becomes easier to read and understand, which reduces the time and money required for code development, debugging, and maintenance. A structured program is comprised of blocks of code with one entry and exit point and downward logic flow. IF tests and DO loops inherently create structured code, while GOTO statements are the main cause of unstructured code. FOR_STRUCT is a commercial software package that converts unstructured FORTRAN into structured programming; it was used to restructure individual subroutines. Primarily it transforms GOTO statements, ARITHMETIC IF statements, and COMPUTED GOTO statements into IF-ELSEIF-ELSE tests and DO loops. The complexity of RELAP5-3D complicated the task. First, FOR_STRUCT cannot completely restructure all the complex coding contained in RELAP5-3D. An iterative approach of multiple FOR_STRUCT applications gave some additional improvements. Second, FOR_STRUCT cannot restructure FORTRAN 90 coding, and RELAP5-3D is partially written in FORTRAN 90. Unix scripts for pre-processing subroutines into coding that FOR_STRUCT could handle and post-processing it back into FORTRAN 90 were written. Finally, FOR_STRUCT does not have the ability to restructure the RELAP5-3D code which contains pre-compiler directives. Variations of a file were processed with different pre-compiler options switched on or off, ensuring that every block of code was restructured. Then the variations were recombined to create a completely restructured source file. Unix scripts were written to perform these tasks, as well as to make some minor formatting improvements. In total, 447 files comprising some 180,000 lines of FORTRAN code were restructured. These showed significant reduction in the number of logic jumps contained as measured by reduction in the number of GOTO statements and line labels. The average number of GOTO statements per subroutine

  17. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, the PSF broadens rapidly, however, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of the form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves. Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
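
    A rough numerical sketch of the zoned spiral pupil described above follows. The grid size, the normalized zone convention (zone m spanning √((m−1)/M) < r ≤ √(m/M) of the pupil radius) and the treatment of defocus as a quadratic pupil phase are illustrative assumptions, not the authors' exact prescription.

    ```python
    import numpy as np

    def rotating_psf(M=7, N=512, defocus_waves=0.0):
        """PSF of a circular pupil split into M Fresnel-like zones, where zone m
        carries an azimuthal (spiral) phase m*phi; defocus is a quadratic phase
        measured in waves at the pupil edge (illustrative sketch only)."""
        y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
        r = np.hypot(x, y)
        phi = np.arctan2(y, x)
        pupil = (r <= 1.0).astype(float)
        m = np.ceil(M * r**2).clip(1, M)      # zone index: outer radius ~ sqrt(m/M)
        spiral_phase = m * phi
        defocus_phase = 2 * np.pi * defocus_waves * r**2
        field = pupil * np.exp(1j * (spiral_phase + defocus_phase))
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
        return psf / psf.sum()

    # The main PSF lobes rotate as the defocus changes:
    psf_in_focus = rotating_psf(defocus_waves=0.0)
    psf_defocused = rotating_psf(defocus_waves=1.0)
    ```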

  18. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D printed graphene aerogel presents superelasticity and high electrical conduction.

  19. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm⁻³) 3D printed graphene aerogel presents superelasticity and high electrical conduction. PMID:26861680

  20. 3D-HST WFC3-selected Photometric Catalogs in the Five CANDELS/3D-HST Fields: Photometry, Photometric Redshifts, and Stellar Masses

    NASA Astrophysics Data System (ADS)

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.; Brammer, Gabriel B.; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; van der Wel, Arjen; Bezanson, Rachel; Da Cunha, Elisabete; Fumagalli, Mattia; Förster Schreiber, Natascha; Kriek, Mariska; Leja, Joel; Lundgren, Britt F.; Magee, Daniel; Marchesini, Danilo; Maseda, Michael V.; Nelson, Erica J.; Oesch, Pascal; Pacifici, Camilla; Patel, Shannon G.; Price, Sedona; Rix, Hans-Walter; Tal, Tomer; Wake, David A.; Wuyts, Stijn

    2014-10-01

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin2 in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).

  1. 3D-HST WFC3-SELECTED PHOTOMETRIC CATALOGS IN THE FIVE CANDELS/3D-HST FIELDS: PHOTOMETRY, PHOTOMETRIC REDSHIFTS, AND STELLAR MASSES

    SciTech Connect

    Skelton, Rosalind E.; Whitaker, Katherine E.; Momcheva, Ivelina G.; Van Dokkum, Pieter G.; Bezanson, Rachel; Leja, Joel; Nelson, Erica J.; Oesch, Pascal; Brammer, Gabriel B.; Labbé, Ivo; Franx, Marijn; Fumagalli, Mattia; Van der Wel, Arjen; Da Cunha, Elisabete; Maseda, Michael V.; Förster Schreiber, Natascha; Kriek, Mariska; Lundgren, Britt F.; Magee, Daniel; Marchesini, Danilo; and others

    2014-10-01

    The 3D-HST and CANDELS programs have provided WFC3 and ACS spectroscopy and photometry over ≈900 arcmin² in five fields: AEGIS, COSMOS, GOODS-North, GOODS-South, and the UKIDSS UDS field. All these fields have a wealth of publicly available imaging data sets in addition to the Hubble Space Telescope (HST) data, which makes it possible to construct the spectral energy distributions (SEDs) of objects over a wide wavelength range. In this paper we describe a photometric analysis of the CANDELS and 3D-HST HST imaging and the ancillary imaging data at wavelengths 0.3-8 μm. Objects were selected in the WFC3 near-IR bands, and their SEDs were determined by carefully taking the effects of the point-spread function in each observation into account. A total of 147 distinct imaging data sets were used in the analysis. The photometry is made available in the form of six catalogs: one for each field, as well as a master catalog containing all objects in the entire survey. We also provide derived data products: photometric redshifts, determined with the EAZY code, and stellar population parameters determined with the FAST code. We make all the imaging data that were used in the analysis available, including our reductions of the WFC3 imaging in all five fields. 3D-HST is a spectroscopic survey with the WFC3 and ACS grisms, and the photometric catalogs presented here constitute a necessary first step in the analysis of these grism data. All the data presented in this paper are available through the 3D-HST Web site (http://3dhst.research.yale.edu).

  2. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
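
    The "spectral filter" view described above amounts to weighting each wavelength plane of the hyperspectral stack by a filter transmission curve and summing along the spectral axis. A minimal sketch is given below; the array layout and filter curve are assumptions for illustration, not ShowMe3D's code.

    ```python
    import numpy as np

    def apply_spectral_filter(cube, filter_weights):
        """cube: (rows, cols, n_lambda) hyperspectral image stack.
        filter_weights: (n_lambda,) transmission curve of a synthetic filter.
        Returns the 2D image a filter-based confocal microscope would record."""
        return np.tensordot(cube, filter_weights, axes=([2], [0]))

    # Example: a boxcar filter selecting spectral channels 10-20.
    cube = np.random.rand(256, 256, 64)
    weights = np.zeros(64)
    weights[10:21] = 1.0
    filtered_image = apply_spectral_filter(cube, weights)
    ```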

  3. Conducting Polymer 3D Microelectrodes

    PubMed Central

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi; Castillo-León, Jaime; Emnéus, Jenny; Svendsen, Winnie E.

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry, and the presence of the conducting polymer film has been shown to increase the electrochemical activity compared with electrodes coated with metal only. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. Finally, PC12 cells were cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements. PMID:22163508

  4. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  5. Determination of the mutual coherence function and determination of the point-spread function in a transversely and longitudinally inhomogeneous aero-optic turbulence layer.

    PubMed

    Monteiro, A; Jarem, J

    1993-01-10

    The mutual coherence function (MCF) of strong-fluctuation theory as a result of optical energy passing through a transversely and longitudinally inhomogeneous aero-optic turbulent layer is studied. Solutions for the MCF equation are determined by decomposing the MCF solution into coherent and incoherent parts and by solving separately the equations that result from this decomposition. The MCF equations for an arbitrary three-dimensional inhomogeneous layer are presented. A simplified version of these equations for the case in which the turbulence inhomogeneity is longitudinally inhomogeneous and is transversely inhomogeneous in one dimension is also presented. A numerical method for solving the parabolic MCF equations by the Lax-Wendroff explicit finite-difference algorithm is given, and numerical examples of the MCF solution for three different inhomogeneous aero-optic layers are discussed. Equations to relate the point-spread function, the optical transfer function, and image formation to the MCF of an inhomogeneous aero-optic turbulence layer are derived. An approximate MCF Fourier integral solution is presented and compared with the exact finite-difference solution. A formula to estimate the validity of the approximate integral solution is given. PMID:20802679
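
    For reference, the explicit Lax-Wendroff update used in such finite-difference marching schemes takes, for the model one-dimensional advection equation \(u_t + a\,u_x = 0\), the standard form below; the paper applies the same type of scheme to its parabolic MCF equations, whose exact discretization is not reproduced here.

    \[
    u_j^{n+1} = u_j^{n}
      - \frac{a\,\Delta t}{2\,\Delta x}\left(u_{j+1}^{n} - u_{j-1}^{n}\right)
      + \frac{a^{2}\,\Delta t^{2}}{2\,\Delta x^{2}}\left(u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}\right).
    \]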

  6. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function results in the effect of noise on the deconvolution being significantly reduced. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available. PMID:15619842
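
    A generic sketch of the kind of regularized Fourier-domain deconvolution involved in recovering a detector response from a measured PSF is given below. It is a Wiener-style illustration under stated assumptions (same-sized arrays, transfer function supplied in the Fourier domain, scalar damping constant), not the paper's custom diffraction-aberration model or its noise averaging over white-noise test scenes.

    ```python
    import numpy as np

    def wiener_deconvolve(measured_psf, telescope_otf, k=1e-3):
        """Estimate the detector spatial response from a measured PSF.
        measured_psf: 2D array (ground-calibration PSF measurement).
        telescope_otf: 2D array, the modelled optical transfer function,
                       already in the Fourier domain and of the same shape.
        k: damping constant suppressing noise amplification at weak frequencies."""
        G = np.fft.fft2(measured_psf)
        H = telescope_otf
        estimate = np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k))
        return np.real(estimate)
    ```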

  7. A new method for point-spread function correction using the ellipticity of re-smeared artificial images in weak gravitational lensing shear analysis

    SciTech Connect

    Okura, Yuki; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2014-09-10

    Highly accurate weak lensing analysis is urgently required for planned cosmic shear observations. For this purpose we have eliminated various systematic noises in the measurement. The point-spread function (PSF) effect is one of them. A perturbative approach for correcting the PSF effect on the observed image ellipticities has been previously employed. Here we propose a new non-perturbative approach for PSF correction that avoids the systematic error associated with the perturbative approach. The new method uses an artificial image for measuring shear which has the same ellipticity as the lensed image. This is done by re-smearing the observed galaxy images and observed star images (PSF) with an additional smearing function to obtain the original lensed galaxy images. We tested the new method with simple simulated objects that have Gaussian or Sérsic profiles smeared by a Gaussian PSF with sufficiently large size to neglect pixelization. Under the condition of no pixel noise, it is confirmed that the new method has no systematic error even if the PSF is large and has a high ellipticity.
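
    For concreteness, one common convention defines the image ellipticity that such PSF-correction schemes act on from the weighted quadrupole moments \(Q_{ij}\) of the surface brightness (other conventions exist; this is not necessarily the exact definition used by the authors):

    \[
    \epsilon_1 = \frac{Q_{11} - Q_{22}}{Q_{11} + Q_{22}}, \qquad
    \epsilon_2 = \frac{2\,Q_{12}}{Q_{11} + Q_{22}}.
    \]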

  8. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

    Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two dimensional (2D) computer displays, leaving a large gap in our understanding about the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influence visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.

  9. An Automated 3d Indoor Topological Navigation Network Modelling

    NASA Astrophysics Data System (ADS)

    Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.

    2015-10-01

    Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, indoor environment has been a focus of wide research; that includes developing techniques for acquiring indoor data (e.g. Terrestrial laser scanning), 3D indoor modelling and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. In a normal scenario, 3D indoor navigation network derivation needs accurate 3D models with no errors (e.g. gap, intersect) and two cells (e.g. rooms, corridors) should touch each other to build their connections. The presented 3D modeling of indoor navigation network is based on surveying control points and it is less dependent on the 3D geometrical building model. For reducing time and cost of indoor building data acquisition process, Trimble LaserAce 1000 as surveying instrument is used. The modelling results were validated against an accurate geometry of indoor building environment which was acquired using Trimble M3 total station.

  10. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  11. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  12. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
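
    A compact sketch of the rank-3 factorization that seeds such a shape-from-motion pipeline is shown below, assuming an affine/orthographic camera; the metric upgrade, non-linear bundle adjustment and silhouette carving described above are omitted, and the matrix layout is an assumption for illustration.

    ```python
    import numpy as np

    def affine_factorization(W):
        """Tomasi-Kanade-style factorization.
        W: (2F x P) measurement matrix of P feature points tracked over F frames
           (x- and y-image coordinates stacked row-wise).
        Returns a motion factor (2F x 3) and a shape factor (3 x P),
        defined only up to an affine ambiguity."""
        W0 = W - W.mean(axis=1, keepdims=True)   # remove per-row centroids (translation)
        U, s, Vt = np.linalg.svd(W0, full_matrices=False)
        sqrt_s = np.sqrt(s[:3])
        motion = U[:, :3] * sqrt_s               # broadcast over the three columns
        shape = sqrt_s[:, None] * Vt[:3, :]
        return motion, shape
    ```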

  13. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
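
    A simple reference implementation of the directed 3D Hausdorff fraction, using a k-d tree for the nearest-neighbour queries, is sketched below; the paper's window-based linear-time approximation is not reproduced, and the threshold value is arbitrary.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff_fraction(model_pts, scene_pts, tau):
        """Fraction of model points whose nearest scene point lies within tau."""
        dists, _ = cKDTree(scene_pts).query(model_pts, k=1)
        return float(np.mean(dists <= tau))

    # Example with a noisy copy of a random 3D point set:
    rng = np.random.default_rng(0)
    model = rng.random((500, 3))
    scene = model + rng.normal(scale=0.01, size=model.shape)
    print(hausdorff_fraction(model, scene, tau=0.05))   # close to 1.0
    ```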

  14. Acquiring 3-D Spatial Data Of A Real Object

    NASA Astrophysics Data System (ADS)

    Wu, C. K.; Wang, D. Q.; Bajcsy, R. K.

    1983-10-01

    A method of acquiring spatial data of a real object via a stereometric system is presented. Three-dimensional (3-D) data of an object are acquired by: (1) camera calibration; (2) stereo matching; (3) multiple stereo views covering the whole object; (4) geometrical computations to determine the 3-D coordinates for each sample point of the object. The analysis and the experimental results indicate the method implemented is capable of measuring the spatial data of a real object with satisfactory accuracy.
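
    Step (4) above, the geometric computation of a 3-D point from a matched image pair, can be sketched as linear (DLT) triangulation with two known 3x4 projection matrices; calibration and stereo matching are assumed already done, and this is an illustration rather than the authors' exact formulation.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear triangulation of one sample point from two calibrated views.
        P1, P2: 3x4 camera projection matrices.
        x1, x2: (u, v) image coordinates of the matched point in each view."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]        # inhomogeneous 3-D coordinates
    ```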

  15. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
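
    The core idea of rigidly registering one vessel point set to a Gaussian-mixture representation of the other can be sketched as below. This is a heavily simplified illustration (isotropic, equal-weight components, Euler-angle parameterization, generic optimizer); the authors' orientation-augmented, bifurcation-weighted GMM and their actual optimization scheme are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def gmm_cost(params, moving, target, sigma=2.0):
        """Negative log-likelihood of rigidly transformed 'moving' points under an
        isotropic Gaussian mixture centred on the 'target' points.
        params: (rx, ry, rz, tx, ty, tz) in radians and millimetres."""
        R = Rotation.from_euler("xyz", params[:3]).as_matrix()
        moved = moving @ R.T + params[3:]
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        return -np.log(np.exp(-d2 / (2.0 * sigma**2)).sum(axis=1) + 1e-300).sum()

    def register_rigid(moving, target):
        """Coarse rigid alignment by minimizing the GMM cost from identity."""
        res = minimize(gmm_cost, np.zeros(6), args=(moving, target), method="Powell")
        return res.x
    ```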

  16. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  17. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  18. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D Human representation compared to animated computer avatars.

  19. Parallel algorithm for computing 3-D reachable workspaces

    NASA Astrophysics Data System (ADS)

    Alameldin, Tarek K.; Sobh, Tarek M.

    1992-03-01

    The problem of computing the 3-D workspace for redundant articulated chains has applications in a variety of fields such as robotics, computer aided design, and computer graphics. The computational complexity of the workspace problem is at least NP-hard. The recent advent of parallel computers has made practical solutions for the workspace problem possible. Parallel algorithms for computing the 3-D workspace for redundant articulated chains with joint limits are presented. The first phase of these algorithms computes workspace points in parallel. The second phase uses workspace points that are computed in the first phase and fits a 3-D surface around the volume that encompasses the workspace points. The second phase also maps the 3-D points into slices, uses region filling to detect the holes and voids in the workspace, extracts the workspace boundary points by testing the neighboring cells, and tiles the consecutive contours with triangles. The proposed algorithms are efficient for computing the 3-D reachable workspace for articulated linkages, not only those with redundant degrees of freedom but also those with joint limits.

  20. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  1. Conjugate Point Equatorial Experiment (COPEX) campaign in Brazil: Electrodynamics highlights on spread F development conditions and day-to-day variability

    NASA Astrophysics Data System (ADS)

    Abdu, M. A.; Batista, I. S.; Reinisch, B. W.; de Souza, J. R.; Sobral, J. H. A.; Pedersen, T. R.; Medeiros, A. F.; Schuch, N. J.; de Paula, E. R.; Groves, K. M.

    2009-04-01

    A Conjugate Point Equatorial Experiment (COPEX) campaign was conducted during the October-December 2002 period in Brazil, with the objective to investigate the equatorial spread F/plasma bubble irregularity (ESF) development conditions in terms of the electrodynamical state of the ionosphere along the magnetic flux tubes in which they occur. A network of instruments, including Digisondes, optical imagers, and GPS receivers, was deployed at magnetic conjugate and dip equatorial locations in a geometry that permitted field line mapping of the conjugate E layers to dip equatorial F layer bottomside. We analyze in this paper the extensive Digisonde data from the COPEX stations, complemented by limited all-sky imager conjugate point observations. The Sheffield University Plasmasphere-Ionosphere Model (SUPIM) is used to assess the transequatorial winds (TEW) as inferred from the observed difference of hmF2 at the conjugate sites. New results and evidence on the ESF development conditions and the related ambient electrodynamic processes from this study can be highlighted as follows: (1) large-scale bottomside wave structures/satellite traces at the equator followed by their simultaneous appearance at conjugate sites are shown to be indicative of the ESF instability initiation; (2) the evening prereversal electric field enhancement (PRE)/vertical drift presents systematic control on the time delay in SF onset at off-equatorial sites indicative of the vertical bubble growth, under weak transequatorial wind; (3) the PRE presents a large latitude/height gradient in the Brazilian sector; (4) conjugate point symmetry/asymmetry of large-scale plasma depletions versus smaller-scale structures is revealed; and (5) while transequatorial winds seem to suppress ESF development in a case study, the medium-term trend in the ESF seems to be controlled more by the variation in the PRE than in the TEW during the COPEX period. Competing influences of the evening vertical plasma drift in

  2. Point Spread Function and Transmittance Analyses for Conical and Hexapod Secondary Mirror Support Towers for the Next Generation Space Telescope (NGST)

    NASA Technical Reports Server (NTRS)

    Wilkerson, Gary W.; Pitalo, Stephen K.

    1999-01-01

    Different secondary mirror support towers were modeled on the CODE V optical design/analysis program for the NGST Optical Telescope Assembly (OTA) B. The vertices of the NGST OTA B primary and secondary mirrors were separated by close to 9.0 m. One type of tower consisted of a hollow cone 6.0 m long, 2.00 m in diameter at the base, and 0.704 m in diameter at its top. The base of the cone was considered attached to the primary's reaction structure through a hole in the primary. Extending up parallel to the optical axis from the top of this cylinder were eight blades (pyramidal struts) 3.0 m long. A cross section of each of these long blades was an isosceles triangle with a base of 0.010 m and a height of 0.100 m with the sharpest part of each triangle pointing inward. The eight struts occurred every 45 deg. The other type of tower was purely a hexapod arrangement and had no blades or cones. The hexapod consisted simply of six, very thin, circular struts, leaving in pairs at 12:00, 4:00, and 8:00 at the primary and traversing to the outer edge of the back of the secondary mount. At this mount, two struts arrived at each of 10:00, 2:00, and 6:00. The struts were attached to the primary mirror in a ring 3.5 m in diameter. They reached the back of the secondary mount, a circle 0.704 m in diameter. Transmittance analyses at two levels were performed on the secondary mirror support towers. Detailed transmittances were accomplished by the use of the CODE V optical design/analysis program and were compared to transmittance calculations that were almost back-of-the-envelope. Point spread function (PSF) calculations, including both diffraction and aberration effects, were performed on CODE V. As one goes out from the center of the blur (for a point source), the two types of support towers showed little difference between their PSF intensities until one reaches about the 3% level. Contours can be delineated on CODE V down to about 10⁻⁸ times the peak intensity, fine

  3. Generation and Comparison of Tls and SFM Based 3d Models of Solid Shapes in Hydromechanic Research

    NASA Astrophysics Data System (ADS)

    Zhang, R.; Schneider, D.; Strauß, B.

    2016-06-01

    The aim of a current study at the Institute of Hydraulic Engineering and Technical Hydromechanics at TU Dresden is to develop a new injection method for quick and economic sealing of dikes or dike bodies, based on a new synthetic material. To validate the technique, an artificial part of a sand dike was built in an experimental hall. The synthetic material was injected and afterwards spread inside the dike. After the material was fully solidified, the surrounding sand was removed with an excavator. In this paper, two methods, which applied terrestrial laser scanning (TLS) and structure from motion (SfM) respectively, for the acquisition of a 3D point cloud of the remaining shapes are described and compared. In combination with advanced software packages, a triangulated 3D model was generated and the volumes of vertical sections of the shape were subsequently calculated. As the calculation of the volume revealed differences between the TLS and the SfM 3D model, a thorough qualitative comparison of the two models is presented as well as a detailed accuracy assessment. The main influence on the accuracy comes from generalisation in the case of gaps in the 3D point cloud due to occlusions. Therefore, improvements for the data acquisition with TLS and SfM for such kinds of objects are suggested in the paper.
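
    For reference, the volume of a closed, consistently oriented triangulated model (as produced from either the TLS or the SfM point cloud) can be computed with the divergence theorem as a sum of signed tetrahedra; the sketch below is a generic illustration and omits the splitting into vertical sections used in the study.

    ```python
    import numpy as np

    def mesh_volume(vertices, faces):
        """vertices: (N, 3) float array; faces: (M, 3) integer vertex indices.
        Returns the enclosed volume of a closed, consistently oriented mesh."""
        v0 = vertices[faces[:, 0]]
        v1 = vertices[faces[:, 1]]
        v2 = vertices[faces[:, 2]]
        # Each face and the origin span a tetrahedron of signed volume det/6.
        signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
        return abs(signed.sum())
    ```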

  4. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  5. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
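
    For reference, the 2D OSEM update applied to each rebinned sinogram has the standard form below, with system-matrix elements \(a_{ij}\), measured counts \(y_i\) and ordered subsets \(S_b\); in the FORE+OSEM(DB) variant discussed above, the non-stationary detector blurring enters through a factorized system matrix.

    \[
    x_j^{(k,b+1)} \;=\; \frac{x_j^{(k,b)}}{\sum_{i \in S_b} a_{ij}}
    \sum_{i \in S_b} a_{ij}\,
    \frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k,b)}} .
    \]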

  6. 3D cell entrapment in crosslinked thiolated gelatin-poly(ethylene glycol) diacrylate hydrogels.

    PubMed

    Fu, Yao; Xu, Kedi; Zheng, Xiaoxiang; Giacomin, Alan J; Mix, Adam W; Kao, Weiyuan J

    2012-01-01

    The combined use of natural ECM components and synthetic materials offers an attractive alternative to fabricate hydrogel-based tissue engineering scaffolds to study cell-matrix interactions in three-dimensions (3D). A facile method was developed to modify gelatin with cysteine via a bifunctional PEG linker, thus introducing free thiol groups to gelatin chains. A covalently crosslinked gelatin hydrogel was fabricated using thiolated gelatin and poly(ethylene glycol) diacrylate (PEGdA) via thiol-ene reaction. Unmodified gelatin was physically incorporated in a PEGdA-only matrix for comparison. We sought to understand the effect of crosslinking modality on hydrogel physicochemical properties and the impact on 3D cell entrapment. Compared to physically incorporated gelatin hydrogels, covalently crosslinked gelatin hydrogels displayed higher maximum weight swelling ratio (Q(max)), higher water content, significantly lower cumulative gelatin dissolution up to 7 days, and lower gel stiffness. Furthermore, fibroblasts encapsulated within covalently crosslinked gelatin hydrogels showed extensive cytoplasmic spreading and the formation of cellular networks over 28 days. In contrast, fibroblasts encapsulated in the physically incorporated gelatin hydrogels remained spheroidal. Hence, crosslinking ECM protein with synthetic matrix creates a stable scaffold with tunable mechanical properties and with long-term cell anchorage points, thus supporting cell attachment and growth in the 3D environment.

  7. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
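
    The degeneracy test at a single sample of a 3-D symmetric tensor field reduces to checking for repeated eigenvalues; a minimal sketch follows (the tolerance, and the scanning of a whole field for such points, are assumptions and omissions of this illustration).

    ```python
    import numpy as np

    def degeneracy_type(T, tol=1e-9):
        """Classify a 3x3 symmetric tensor by its repeated eigenvalues."""
        w = np.sort(np.linalg.eigvalsh(T))
        low_pair_equal = (w[1] - w[0]) < tol
        high_pair_equal = (w[2] - w[1]) < tol
        if low_pair_equal and high_pair_equal:
            return "triple degenerate"
        if low_pair_equal or high_pair_equal:
            return "double degenerate"
        return "non-degenerate"

    print(degeneracy_type(np.eye(3)))                  # triple degenerate
    print(degeneracy_type(np.diag([1.0, 1.0, 2.0])))   # double degenerate
    ```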

  8. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.

  9. Dynamic phototuning of 3D hydrogel stiffness

    PubMed Central

    Stowers, Ryan S.; Allen, Shane C.; Suggs, Laura J.

    2015-01-01

    Hydrogels are widely used as in vitro culture models to mimic 3D cellular microenvironments. The stiffness of the extracellular matrix is known to influence cell phenotype, inspiring work toward unraveling the role of stiffness on cell behavior using hydrogels. However, in many biological processes such as embryonic development, wound healing, and tumorigenesis, the microenvironment is highly dynamic, leading to changes in matrix stiffness over a broad range of timescales. To recapitulate dynamic microenvironments, a hydrogel with temporally tunable stiffness is needed. Here, we present a system in which alginate gel stiffness can be temporally modulated by light-triggered release of calcium or a chelator from liposomes. Others have shown softening via photodegradation or stiffening via secondary cross-linking; however, our system is capable of both dynamic stiffening and softening. Dynamic modulation of stiffness can be induced at least 14 d after gelation and can be spatially controlled to produce gradients and patterns. We use this system to investigate the regulation of fibroblast morphology by stiffness in both nondegradable gels and gels with degradable elements. Interestingly, stiffening inhibits fibroblast spreading through either mesenchymal or amoeboid migration modes. We demonstrate this technology can be translated in vivo by using deeply penetrating near-infrared light for transdermal stiffness modulation, enabling external control of gel stiffness. Temporal modulation of hydrogel stiffness is a powerful tool that will enable investigation of the role that dynamic microenvironments play in biological processes both in vitro and in well-controlled in vivo experiments. PMID:25646417

  10. Evaluation of segmented 3D acquisition schemes for whole-brain high-resolution arterial spin labeling at 3 T.

    PubMed

    Vidorreta, Marta; Balteau, Evelyne; Wang, Ze; De Vita, Enrico; Pastor, María A; Thomas, David L; Detre, John A; Fernández-Seara, María A

    2014-11-01

    Recent technical developments have significantly increased the signal-to-noise ratio (SNR) of arterial spin labeled (ASL) perfusion MRI. Despite this, typical ASL acquisitions still employ large voxel sizes. The purpose of this work was to implement and evaluate two ASL sequences optimized for whole-brain high-resolution perfusion imaging, combining pseudo-continuous ASL (pCASL), background suppression (BS) and 3D segmented readouts, with different in-plane k-space trajectories. Identical labeling and BS pulses were implemented for both sequences. Two segmented 3D readout schemes with different in-plane trajectories were compared: Cartesian (3D GRASE) and spiral (3D RARE Stack-Of-Spirals). High-resolution perfusion images (2 × 2 × 4 mm³) were acquired in 15 young healthy volunteers with the two ASL sequences at 3 T. The quality of the perfusion maps was evaluated in terms of SNR and gray-to-white matter contrast. Point-spread-function simulations were carried out to assess the impact of readout differences on the effective resolution. The combination of pCASL, in-plane segmented 3D readouts and BS provided high-SNR high-resolution ASL perfusion images of the whole brain. Although both sequences produced excellent image quality, the 3D RARE Stack-Of-Spirals readout yielded higher temporal and spatial SNR than 3D GRASE (spatial SNR = 8.5 ± 2.8 and 3.7 ± 1.4; temporal SNR = 27.4 ± 12.5 and 15.6 ± 7.6, respectively) and decreased through-plane blurring due to its inherent oversampling of the central k-space region, its reduced effective TE and shorter total readout time, at the expense of a slight increase in the effective in-plane voxel size. PMID:25263944
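
    For reference, the sketch below shows one common way to compute SNR figures of the kind quoted above: temporal SNR as the voxelwise mean over repetitions divided by the voxelwise standard deviation (averaged within a tissue mask), and spatial SNR as mean tissue signal over the standard deviation in a background region. The exact definitions, masks, and random stand-in data here are assumptions, not the study's processing pipeline.

```python
import numpy as np

def temporal_snr(perfusion_series, mask):
    """Voxelwise mean / std over repetitions, averaged inside a tissue mask.

    perfusion_series: 4D array (x, y, z, repetitions) of perfusion-weighted volumes.
    mask:             3D boolean array selecting tissue (e.g., gray-matter) voxels.
    """
    mean_map = perfusion_series.mean(axis=-1)
    std_map = perfusion_series.std(axis=-1, ddof=1)
    return np.mean(mean_map[mask] / std_map[mask])

def spatial_snr(perfusion_map, tissue_mask, noise_mask):
    """Mean signal in tissue divided by the standard deviation in a noise region."""
    return perfusion_map[tissue_mask].mean() / perfusion_map[noise_mask].std(ddof=1)

# Illustrative call with random data standing in for reconstructed ASL volumes.
rng = np.random.default_rng(0)
series = 60 + 10 * rng.standard_normal((64, 64, 32, 40))
gm = np.zeros((64, 64, 32), dtype=bool); gm[16:48, 16:48, 8:24] = True
bg = np.zeros((64, 64, 32), dtype=bool); bg[:8, :8, :] = True
print(temporal_snr(series, gm), spatial_snr(series.mean(-1), gm, bg))
```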

  11. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 µm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 µm/pixel, without the use of oil-based lenses. A full textural analysis on track No.82 is presented here as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images, by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.
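
    The abstract does not specify the deconvolution algorithm; as a hedged illustration of the general approach, the sketch below applies Richardson-Lucy deconvolution (via scikit-image) to a synthetic confocal-like stack, with an axially elongated Gaussian standing in for a computed point spread function. The library choice, PSF model, and iteration count are assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(shape=(17, 17, 33), sigma=(1.5, 1.5, 4.0)):
    """Axially elongated Gaussian PSF, mimicking confocal axial distortion."""
    psf = np.zeros(shape)
    psf[tuple(s // 2 for s in shape)] = 1.0
    psf = gaussian_filter(psf, sigma)
    return psf / psf.sum()

# Illustrative stack standing in for a confocal image of a track (z is the last axis).
rng = np.random.default_rng(1)
truth = np.zeros((64, 64, 96))
truth[30:34, 30:34, 40:56] = 1.0                           # small elongated feature
blurred = gaussian_filter(truth, sigma=(1.5, 1.5, 4.0))    # simulate axial blur
blurred += 0.01 * rng.standard_normal(blurred.shape)

# Richardson-Lucy iterations sharpen the axial profile given the PSF model.
restored = richardson_lucy(np.clip(blurred, 0, None), gaussian_psf(), num_iter=20)
print(restored.shape)
```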

  12. 3D model of bow shocks

    NASA Astrophysics Data System (ADS)

    Gustafsson, M.; Ravkilde, T.; Kristensen, L. E.; Cabrit, S.; Field, D.; Pineau Des Forêts, G.

    2010-04-01

    Context. Shocks produced by outflows from young stars are often observed as bow-shaped structures in which the H2 line strength and morphology are characteristic of the physical and chemical environments and the velocity of the impact. Aims: We present a 3D model of interstellar bow shocks propagating in a homogeneous molecular medium with a uniform magnetic field. The model enables us to estimate the shock conditions in observed flows. As an example, we show how the model can reproduce rovibrational H2 observations of a bow shock in OMC1. Methods: The 3D model is constructed by associating a planar shock with every point on a 3D bow skeleton. The planar shocks are modelled with a highly sophisticated chemical reaction network that is essential for predicting accurate shock widths and line emissions. The shock conditions vary along the bow surface and determine the shock type, the local thickness, and brightness of the bow shell. The motion of the cooling gas parallel to the bow surface is also considered. The bow shock can move at an arbitrary inclination to the magnetic field and to the observer, and we model the projected morphology and radial velocity distribution in the plane-of-sky. Results: The morphology of a bow shock is highly dependent on the orientation of the magnetic field and the inclination of the flow. Bow shocks can appear in many different guises and do not necessarily show a characteristic bow shape. The ratio of the H2 v = 2-1 S(1) line to the v = 1-0 S(1) line is variable across the flow and the spatial offset between the peaks of the lines may be used to estimate the inclination of the flow. The radial velocity comes to a maximum behind the apparent apex of the bow shock when the flow is seen at an inclination different from face-on. Under certain circumstances the radial velocity of an expanding bow shock can show the same signatures as a rotating flow. In this case a velocity gradient perpendicular to the outflow direction is a projection
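
    To make the skeleton-plus-planar-shock construction concrete, the sketch below parametrizes a paraboloidal bow skeleton, assigns each surface element the normal component of the driving flow as a crude stand-in for an attached planar shock, and projects the resulting shell velocity onto an inclined line of sight to obtain a radial-velocity map. The paraboloid shape, flow speed, and inclination are illustrative assumptions, not the chemically detailed model of the paper.

```python
import numpy as np

def bow_skeleton(n_r=80, n_phi=120, r_max=3.0, R=1.0):
    """Paraboloid bow skeleton z = r**2 / (2 R), parametrized by radius and azimuth."""
    r, phi = np.meshgrid(np.linspace(1e-3, r_max, n_r),
                         np.linspace(0, 2 * np.pi, n_phi), indexing="ij")
    x, y, z = r * np.cos(phi), r * np.sin(phi), r**2 / (2 * R)
    # Unit normal of the paraboloid surface (gradient of z - r**2 / (2 R)).
    n = np.stack([-x / R, -y / R, np.ones_like(z)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return np.stack([x, y, z], axis=-1), n

def projected_radial_velocity(normals, v_flow=30.0, inclination_deg=60.0):
    """Radial velocity of the shocked shell for a bow driven along -z.

    Each surface element keeps only the normal component of the driving flow
    (a crude proxy for a planar shock attached to that skeleton point), which
    is then projected onto a line of sight inclined to the flow axis.
    """
    flow = np.array([0.0, 0.0, -v_flow])               # driving flow along -z, km/s
    v_shell = normals * (normals @ flow)[..., None]    # normal component only
    i = np.deg2rad(inclination_deg)
    los = np.array([np.sin(i), 0.0, np.cos(i)])        # observer direction
    return v_shell @ los                               # km/s toward/away from observer

pts, nrm = bow_skeleton()
v_rad = projected_radial_velocity(nrm)
print(v_rad.min(), v_rad.max())   # range of the crude radial-velocity map
```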

  13. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.
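
    The talk concerns 3D lattices, but the fractionalized Majorana description can be illustrated with the standard 2D honeycomb Kitaev dispersion in the flux-free sector, eps(k) = 2 |Jx e^{i k·n1} + Jy e^{i k·n2} + Jz|, which is gapless at Dirac points for isotropic couplings. The sketch below evaluates this textbook formula on a k-grid; the grid range and coupling values are illustrative choices, not results from the talk.

```python
import numpy as np

def majorana_dispersion(kx, ky, Jx=1.0, Jy=1.0, Jz=1.0):
    """Majorana band energy of the honeycomb Kitaev model (flux-free sector).

    f(k) = Jx * exp(i k.n1) + Jy * exp(i k.n2) + Jz, with n1, n2 the
    nearest-neighbor translation vectors; the dispersion is eps(k) = 2 |f(k)|.
    """
    n1 = np.array([0.5, np.sqrt(3) / 2.0])
    n2 = np.array([-0.5, np.sqrt(3) / 2.0])
    f = (Jx * np.exp(1j * (kx * n1[0] + ky * n1[1]))
         + Jy * np.exp(1j * (kx * n2[0] + ky * n2[1]))
         + Jz)
    return 2.0 * np.abs(f)

# Scan a patch of reciprocal space; at isotropic coupling the minimum approaches
# zero near the Dirac points where the gapless Majorana modes live.
k = np.linspace(-2 * np.pi, 2 * np.pi, 801)
KX, KY = np.meshgrid(k, k, indexing="ij")
print("minimum energy over the scanned zone:", majorana_dispersion(KX, KY).min())
```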

  14. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur