Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F
2016-01-01
Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312
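The pipeline summarized above (a fast deconvolution to generate initial guesses, then least-squares refinement) is not spelled out in the abstract; as a rough illustration under assumed details, a frequency-domain Wiener deconvolution can recover peak-location guesses from an image blurred by a known PSF. All function names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """Regularized frequency-domain (Wiener) deconvolution.

    Sharpens an image blurred by a known PSF; the brightest peaks of
    the result serve as initial emitter-position guesses that a
    least-squares PSF fit would then refine.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)  # regularized inverse filter
    return np.real(np.fft.ifft2(F))

# Toy example: one emitter at (row=20, col=10) blurred by a Gaussian PSF.
n = 32
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# PSF centered at the origin with wrap-around, matching circular convolution.
dy, dx = np.minimum(yy, n - yy), np.minimum(xx, n - xx)
psf = np.exp(-(dx ** 2 + dy ** 2) / 8.0)
scene = np.zeros((n, n))
scene[20, 10] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

guess = np.unravel_index(np.argmax(wiener_deconvolve(blurred, psf)), (n, n))
```

The paper's machine-learning-determined acceptance threshold for candidate peaks, and the subsequent least-squares fit, are omitted from this sketch.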
Isotropic 3D Super-resolution Imaging with a Self-bending Point Spread Function
Jia, Shu; Vaughan, Joshua C.; Zhuang, Xiaowei
2014-01-01
Airy beams maintain their intensity profiles over a large propagation distance without substantial diffraction and exhibit lateral bending during propagation. This unique property has been exploited for micromanipulation of particles, generation of plasma channels and guidance of plasmonic waves, but has not been explored for high-resolution optical microscopy. Here, we introduce a self-bending point spread function (SB-PSF) based on Airy beams for three-dimensional (3D) super-resolution fluorescence imaging. We designed a side-lobe-free SB-PSF and implemented a two-channel detection scheme to enable unambiguous 3D localization of fluorescent molecules. The lack of diffraction and the propagation-dependent lateral bending make the SB-PSF well suited for precise 3D localization of molecules over a large imaging depth. Using this method, we obtained super-resolution imaging with isotropic 3D localization precision of 10-15 nm over a 3 μm imaging depth from ∼2000 photons per localization. PMID:25383090
NASA Astrophysics Data System (ADS)
Makowski, Michal; Petelczyc, Krzysztof; Kolodziejczyk, Andrzej; Jaroszewicz, Zbigniew; Ducin, Izabela; Kakarenko, Karol; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Sypek, Maciej; Wojnowski, Dariusz
2010-12-01
We present an experimental demonstration of a blind deconvolution method on an imaging system with a Light Sword optical element (LSOE) used in place of a lens. Trial-and-error deconvolution of known point spread functions (PSFs) from an input image captured on a single CCD camera is performed. By establishing the PSF that yields the optimal contrast of the optotypes seen in a frame, one can determine the defocus parameter and hence the object distance. Therefore, with a single exposure on a standard CCD camera we gain information on the depth of a 3D scene. Exemplary results for a simple scene containing three optotypes at three distances from the imaging element are presented.
Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.
2014-01-01
Single-molecule-based super-resolution fluorescence microscopy has recently been developed to surpass the diffraction limit by roughly an order of magnitude. These methods depend on the ability to precisely and accurately measure the position of a single-molecule emitter, typically by fitting its emission pattern to a symmetric estimator (e.g. centroid or 2D Gaussian). However, single-molecule emission patterns are not isotropic, and depend strongly on the orientation of the molecule’s transition dipole moment, as well as its z-position. Failure to account for this fact can result in localization errors on the order of tens of nm for in-focus images, and ~50–200 nm for molecules at modest defocus. The latter range becomes especially important for three-dimensional (3D) single-molecule super-resolution techniques, which typically employ depths-of-field of up to ~2 μm. To address this issue we report the simultaneous measurement of precise and accurate 3D single-molecule position and 3D dipole orientation using the Double-Helix Point Spread Function (DH-PSF) microscope. We are thus able to significantly reduce dipole-induced position errors, reducing standard deviations in lateral localization from ~2x worse than photon-limited precision (48 nm vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation we are able to reduce the lateral standard deviation from 116 nm (~4x worse than the precision, 28 nm) to 34 nm (within 6 nm of the precision). PMID:24817798
NIF Ignition Target 3D Point Design
Jones, O; Marinak, M; Milovich, J; Callahan, D
2008-11-05
We have developed an input file for running 3D NIF hohlraum simulations, optimized so that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D runs and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate several problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.
Vector quantization of 3-D point clouds
NASA Astrophysics Data System (ADS)
Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk
2005-10-01
A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed to the local coordinate system, which is determined by the parent-child relationship. The coordinate transform makes child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides a faithful rendering quality even at low bitrates.
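The abstract's key idea, expressing child-sphere positions in a parent-relative local frame so they cluster tightly enough for a small VQ codebook, can be sketched as follows. The frame construction and the toy codebook are simplified assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parent sphere centers scattered over a large volume...
parents = rng.uniform(-100, 100, size=(50, 3))
# ...each with a child displaced only slightly from its parent.
children = parents + rng.normal(0, 1.0, size=(50, 3))

# Local coordinates: child position relative to its parent.
# These are far more compactly distributed than the absolute positions.
local = children - parents

def vq(vectors, codebook):
    """Map each vector to the index of its nearest codebook entry."""
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

# A tiny illustrative codebook of local offsets.
codebook = np.array([[0.0, 0, 0], [1, 0, 0], [-1, 0, 0],
                     [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
indices = vq(local, codebook)  # one codebook index per child sphere
```

Comparing `children.var()` with `local.var()` shows why the transform helps: the same codebook resolution covers a much smaller spread of values.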
3D scene reconstruction based on 3D laser point cloud combining UAV images
NASA Astrophysics Data System (ADS)
Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen
2016-03-01
Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, taking the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and using 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.
Point Cloud Visualization in an Open Source 3D Globe
NASA Astrophysics Data System (ADS)
De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.
2011-09-01
During the last years the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users have grown familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread, and the public information that can be used in GIS clients able to consume data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since such a 3D GIS can be very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
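As a minimal illustration of the distance side of this measure, the model-to-cloud distance can be computed by brute-force nearest-neighbour search; the surface-area weighting and the exact SimMC formula from the paper are not reproduced here, and the sample points below are illustrative.

```python
import numpy as np

def dist_model_to_cloud(model_samples, cloud):
    """Mean nearest-neighbour distance from points sampled on the model
    to the point cloud (an unweighted stand-in for the paper's DistMC)."""
    d = np.linalg.norm(model_samples[:, None, :] - cloud[None, :, :], axis=2)
    return d.min(axis=1).mean()

model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
aligned = model.copy()
shifted = model + np.array([0.5, 0.0, 0.0])

d_aligned = dist_model_to_cloud(model, aligned)  # 0: the clouds coincide
d_shifted = dist_model_to_cloud(model, shifted)  # grows with misalignment
```

A SimMC-style score would then divide a (weighted) model surface area by this distance, so similarity decreases as the distance grows.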
The Feasibility of 3D Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of directly geo-referenced, image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
3D Building Reconstruction Using Dense Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.
2016-06-01
Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the point clouds of roofs and walls into planar groups. By generating the related surfaces and using geometrical constraints plus symmetry considerations, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The reconstructed 3D model reaches LoD3 detail, with respect to the modelling of eaves, roof fractions and dormers.
Registration of 3D spectral OCT volumes using 3D SIFT feature point matching
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan
2009-02-01
The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework to 3D. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head (ONH)- and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as the average voxel distance error in N = 1572 matched locations. The registration method was applied to 12 3D OCT scans (200 × 200 × 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
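Matching points by comparing feature-vector distances is commonly implemented as nearest-neighbour search with Lowe's ratio test; the following generic sketch illustrates that step (it is not necessarily the exact matching criterion used in this work, and the toy descriptors are assumptions).

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches whose nearest distance is clearly smaller than
    the second-nearest distance (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptors: a[0] resembles b[0], a[1] resembles b[2].
a = np.array([[1.0, 0, 0], [0, 1.0, 0]])
b = np.array([[0.9, 0.1, 0], [0, 0, 1.0], [0.1, 0.9, 0]])
pairs = match_descriptors(a, b)
```

The accepted pairs would then feed a rigid-transform estimator (e.g. least-squares with outlier rejection) to register the two volumes.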
Iterative closest normal point for 3D face recognition.
Mohammadzade, Hoda; Hatzinakos, Dimitrios
2013-02-01
The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, correspond to seven and four times lower error rates, respectively, than the best existing methods on this database. PMID:22585097
Automated Identification of Fiducial Points on 3D Torso Images
Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A
2013-01-01
Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903
3-D Object Recognition from Point Cloud Data
NASA Astrophysics Data System (ADS)
Smith, W.; Walker, A. S.; Zhang, B.
2011-09-01
The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
Compression of point-texture 3D motion sequences
NASA Astrophysics Data System (ADS)
Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk
2005-10-01
In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts the motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to the motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, the motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performances.
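The octree occupancy coding at the heart of the first scheme can be sketched as follows; this toy version traverses depth-first rather than the top-down breadth-first order the paper uses for progressive transmission, and it omits the PPM entropy-coding stage.

```python
def encode(points, origin, size, depth, out):
    """Recursively emit one occupancy byte per internal octree node:
    bit i is set if child octant i contains at least one point."""
    if depth == 0:
        return  # leaf voxel: its presence is implied by the parent's mask
    half = size / 2.0
    buckets = [[] for _ in range(8)]
    mask = 0
    for x, y, z in points:
        i = (((x >= origin[0] + half) << 2) |
             ((y >= origin[1] + half) << 1) |
             ((z >= origin[2] + half)))
        buckets[i].append((x, y, z))
        mask |= 1 << i
    out.append(mask)
    for i, pts in enumerate(buckets):
        if pts:
            child = (origin[0] + half * ((i >> 2) & 1),
                     origin[1] + half * ((i >> 1) & 1),
                     origin[2] + half * (i & 1))
            encode(pts, child, half, depth - 1, out)

def decode(stream, origin, size, depth, leaves):
    """Rebuild occupied leaf-voxel centers from the occupancy bytes."""
    if depth == 0:
        c = size / 2.0
        leaves.append((origin[0] + c, origin[1] + c, origin[2] + c))
        return
    half = size / 2.0
    mask = next(stream)
    for i in range(8):
        if mask & (1 << i):
            child = (origin[0] + half * ((i >> 2) & 1),
                     origin[1] + half * ((i >> 1) & 1),
                     origin[2] + half * (i & 1))
            decode(stream, child, half, depth - 1, leaves)

# Round trip: two points in a unit cube, 3 octree levels (voxel size 1/8).
points = [(0.1, 0.2, 0.3), (0.9, 0.8, 0.7)]
codes = []
encode(points, (0.0, 0.0, 0.0), 1.0, 3, codes)
voxels = []
decode(iter(codes), (0.0, 0.0, 0.0), 1.0, 3, voxels)
```

Each occupied point chain costs one byte per level here; the PPM stage in the paper compresses these occupancy bytes further by exploiting their statistical regularity.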
Interpolating point spread function anisotropy
NASA Astrophysics Data System (ADS)
Gentile, M.; Courbin, F.; Meylan, G.
2013-01-01
Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). In all our methods we model the PSF using a single Moffat profile and we interpolate the fitted parameters at a set of required positions. This allowed us to win the Star-challenge of GREAT10, with the B-splines method. However, we also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics σ²sys better than the 1
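Of the interpolators compared, inverse distance weighting is the simplest to state; a minimal sketch of interpolating a fitted PSF parameter (e.g. a Moffat width) from star positions to an arbitrary field position follows. The function name, the power-2 weighting, and the test values are illustrative assumptions.

```python
import numpy as np

def idw(star_xy, star_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of a PSF parameter,
    measured at star positions, to an arbitrary field position."""
    d = np.linalg.norm(star_xy - query_xy, axis=1)
    w = 1.0 / (d ** power + eps)  # eps avoids division by zero at a star
    return float(np.sum(w * star_vals) / np.sum(w))

stars = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
widths = np.array([2.0, 2.4, 2.2, 2.6])  # fitted Moffat width at each star

at_star = idw(stars, widths, np.array([0.0, 0.0]))  # recovers the star value
center = idw(stars, widths, np.array([5.0, 5.0]))   # equal-weight average
```

IDW is a local method: nearby stars dominate the estimate, which is consistent with the paper's finding that local methods handle spatially turbulent PSF fields better than global polynomial fits.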
Underwater 3D Modeling: Image Enhancement and Point Cloud Filtering
NASA Astrophysics Data System (ADS)
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included (a) the definition of parameters for point cloud filtering and the creation of a reference model, (b) the radiometric editing of the images, followed by the creation of three improved models, and (c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on the two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
Performance testing of 3D point cloud software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-10-01
LiDAR systems have been used widely in recent years for many applications in engineering: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
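A performance test of the kind described, timing a processing step over repeated runs, can be sketched with a generic harness; the loader below is a stand-in for illustration, not any of the suites tested, and memory metrics such as working set and commit size would need OS-level tooling beyond this sketch.

```python
import time

def benchmark(fn, *args, repeats=3):
    """Run fn several times and report its result plus the best
    (least noisy) wall-clock time, as a simple loading-time metric."""
    best, result = float("inf"), None
    for _ in range(repeats):
        t0 = time.perf_counter()
        result = fn(*args)
        best = min(best, time.perf_counter() - t0)
    return result, best

def fake_load(n):
    """Stand-in for parsing a point cloud of n points."""
    return [(i * 0.1, i * 0.2, i * 0.3) for i in range(n)]

cloud, seconds = benchmark(fake_load, 100_000)
```

Reporting the best of several runs reduces the influence of transient system load, which matters when comparing suites on the same large datasets.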
Comparison of 3D interest point detectors and descriptors for point cloud fusion
NASA Astrophysics Data System (ADS)
Hänsch, R.; Weber, T.; Hellwich, O.
2014-08-01
The extraction and description of keypoints as salient image parts has a long tradition within processing and analysis of 2D images. Nowadays, 3D data gains more and more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. For this goal, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the specific method to extract and describe keypoints in 3D data has to be carefully chosen. In many cases the accuracy suffers from a too strong reduction of the available points to keypoints.
Secure 3D watermarking algorithm based on point set projection
NASA Astrophysics Data System (ADS)
Liu, Quan; Zhang, Xiaomei
2007-11-01
3D digital models greatly facilitate the distribution and storage of information, while their copyright protection attracts more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks, such as rotation, cropping, smoothing and added noise, the projection of the model's point set is chosen as the carrier of the watermark, which contains copyright information such as logos and text. The projections of the model's point set onto the x, y and z planes are calculated respectively. Before the embedding process, the original watermark is scrambled with a key. Each projection is decomposed by singular value decomposition (SVD), and the scrambled watermark is embedded into the SVD domain of the x, y and z planes respectively. The watermarked x, y and z planes are then used to recover the vertices of the model, yielding the watermarked model. Only a legal user can remove the watermark from watermarked models using the private key. Experiments presented in the paper show that the proposed algorithm performs well under various malicious attacks.
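The SVD embedding step can be illustrated on a single projection matrix. This is a simplified sketch: the key-based scrambling, the three separate projections and the vertex recovery are omitted, and additive singular-value embedding is only one common variant of SVD watermarking, not necessarily the paper's exact scheme.

```python
import numpy as np

ALPHA = 0.05  # embedding strength (assumed value)

def embed(projection, watermark):
    """Add a (scrambled) watermark vector to the singular values of
    one projection of the model's point set."""
    U, S, Vt = np.linalg.svd(projection, full_matrices=False)
    return U @ np.diag(S + ALPHA * watermark) @ Vt

def extract(marked, original):
    """Recover the watermark by comparing singular values (non-blind)."""
    s_marked = np.linalg.svd(marked, compute_uv=False)
    s_orig = np.linalg.svd(original, compute_uv=False)
    return (s_marked - s_orig) / ALPHA

projection = np.diag([4.0, 3.0, 2.0, 1.0])   # toy projection matrix
watermark = np.array([0.5, -0.2, 0.1, 0.3])  # scrambled payload, say

marked = embed(projection, watermark)
recovered = extract(marked, projection)
```

Extraction here is non-blind (it needs the original projection); the paper's scheme instead relies on a private key held by the legal user.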
NASA Astrophysics Data System (ADS)
Ridene, T.; Goulette, F.; Chendeb, S.
2013-08-01
The production of realistic 3D map databases is continuously growing. We studied an approach to producing 3D mapping databases based on the fusion of heterogeneous 3D data, for which a rigid registration process was performed. Before starting the modeling process, we need to validate the quality of the registration results, which is one of the most difficult and open research problems. In this paper, we suggest a new method for the evaluation of 3D point clouds based on feature extraction and comparison with a 2D reference model. The method is based on two metrics: binary and fuzzy.
Lee, Ho-Joon; Sen, Atanu; Bae, Sooneon; Lee, Jeoung Soo; Webb, Ken
2015-01-01
To serve as artificial matrices for therapeutic cell transplantation, synthetic hydrogels must incorporate mechanisms enabling localized, cell-mediated degradation that allows cell spreading and migration. Previously, we have shown that hybrid semi-interpenetrating polymer networks (semi-IPNs) composed of hydrolytically degradable PEG-diacrylates (PEGdA), acrylate-PEG-GRGDS, and native hyaluronic acid (HA) support increased cell spreading relative to fully synthetic networks that is dependent on cellular hyaluronidase activity. This study systematically investigated the effects of PEGdA/HA semi-IPN network composition on 3D spreading of encapsulated fibroblasts, the underlying changes in gel structure responsible for this activity, and the ability of optimized gel formulations to support long-term cell survival and migration. Fibroblast spreading exhibited a biphasic response to HA concentration, required a minimum HA molecular weight, decreased with increasing PEGdA concentration, and was independent of hydrolytic degradation at early time points. Increased gel turbidity was observed in semi-IPNs, but not in copolymerized hydrogels containing methacrylated HA that did not support cell spreading; suggesting an underlying mechanism of polymerization-induced phase separation resulting in HA-enriched defects within the network structure. PEGdA/HA semi-IPNs were also able to support cell spreading at relatively high levels of mechanical properties (~10 kPa elastic modulus) compared to alternative hybrid hydrogels. In order to support long-term cellular remodeling, the degradation rate of the PEGdA component was optimized by preparing blends of three different PEGdA macromers with varying susceptibility to hydrolytic degradation. Optimized semi-IPN formulations supported long-term survival of encapsulated fibroblasts and sustained migration in a gel-within-gel encapsulation model. These results demonstrate that PEGdA/HA semi-IPNs provide dynamic microenvironments that
Miron-Mendoza, Miguel; Lin, Xihui; Ma, Lisha; Ririe, Peter; Petroll, W. Matthew
2012-01-01
Extracellular matrix (ECM) supplies both physical and chemical signals to cells and provides a substrate through which fibroblasts migrate during wound repair. To directly assess how ECM composition regulates this process, we used a nested 3D matrix model in which cell-populated collagen buttons were embedded in cell-free collagen or fibrin matrices. Time-lapse microscopy was used to record the dynamic pattern of cell migration into the outer matrices, and 3-D confocal imaging was used to assess cell connectivity and cytoskeletal organization. Corneal fibroblasts stimulated with PDGF migrated more rapidly into collagen as compared to fibrin. In addition, the pattern of fibroblast migration into fibrin and collagen ECMs was strikingly different. Corneal fibroblasts migrating into collagen matrices developed dendritic processes and moved independently, whereas cells migrating into fibrin matrices had a more fusiform morphology and formed an interconnected meshwork. A similar pattern was observed when using dermal fibroblasts, suggesting that this response is not unique to corneal cells. We next cultured corneal fibroblasts within and on top of standard collagen and fibrin matrices to assess the impact of ECM composition on the cell spreading response. Similar differences in cell morphology and connectivity were observed – cells remained separated on collagen but coalesced into clusters on fibrin. Cadherin was localized to junctions between interconnected cells, whereas fibronectin was present both between cells and at the tips of extending cell processes. Cells on fibrin matrices also developed more prominent stress fibers than those on collagen matrices. Importantly, these spreading and migration patterns were consistently observed on both rigid and compliant substrates, thus differences in ECM mechanical stiffness were not the underlying cause. Overall, these results demonstrate for the first time that ECM protein composition alone (collagen vs. fibrin) can
Petroll, W. Matthew; Ma, Lisha; Kim, Areum; Ly, Linda; Vishwanath, Mridula
2009-01-01
The goal of this study was to determine the morphological and sub-cellular mechanical effects of Rac activation on fibroblasts within 3-D collagen matrices. Corneal fibroblasts were plated at low density inside 100 μm thick fibrillar collagen matrices and cultured for 1 to 2 days in serum-free media. Time-lapse imaging was then performed using Nomarski DIC. After an acclimation period, perfusion was switched to media containing PDGF. In some experiments, Y-27632 or blebbistatin were used to inhibit Rho-kinase (ROCK) or myosin II, respectively. PDGF activated Rac and induced cell spreading, which resulted in an increase in cell length, cell area, and the number of pseudopodial processes. Tractional forces were generated by extending pseudopodia, as indicated by centripetal displacement and realignment of collagen fibrils. Interestingly, the pattern of pseudopodial extension and local collagen fibril realignment was highly dependent upon the initial orientation of fibrils at the leading edge. Following ROCK or myosin II inhibition, significant ECM relaxation was observed, but small displacements of collagen fibrils continued to be detected at the tips of pseudopodia. Taken together, the data suggest that during Rac-induced cell spreading within 3-D matrices, there is a shift in the distribution of forces from the center to the periphery of corneal fibroblasts. ROCK mediates the generation of large myosin II-based tractional forces during cell spreading within 3-D collagen matrices; however, residual forces can be generated at the tips of extending pseudopodia that are both ROCK and myosin II independent. PMID:18452153
Sensitivity of power and RMS delay spread predictions of a 3D indoor ray tracing model.
Liu, Zhong-Yu; Guo, Li-Xin; Li, Chang-Long; Wang, Qiang; Zhao, Zhen-Wei
2016-06-13
This study investigates the sensitivity of a three-dimensional (3D) indoor ray tracing (RT) model for the use of the uniform theory of diffraction and geometrical optics in radio channel characterizations of indoor environments. Under complex indoor environments, RT-based predictions require detailed and accurate databases of indoor object layouts and the electrical characteristics of such environments. The aim of this study is to assist in selecting the appropriate level of accuracy required in indoor databases to achieve good trade-offs between database costs and prediction accuracy. This study focuses on the effects of errors in indoor environments on prediction results. In studying the effects of inaccuracies in geometry information (indoor object layout) on power coverage prediction, two types of artificial erroneous indoor maps are used. Moreover, a systematic analysis is performed by comparing the predictions with erroneous indoor maps and those with the original indoor map. Subsequently, the influence of random errors on RMS delay spread results is investigated. Given the effect of electrical parameters on the accuracy of the predicted results of the 3D RT model, the relative permittivity and conductivity of different fractions of an indoor environment are set with different values. Five types of computer simulations are considered, and for each type, the received power and RMS delay spread under the same circumstances are simulated with the RT model. PMID:27410335
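The RMS delay spread the study simulates is the standard second central moment of the power delay profile. A minimal sketch of that textbook computation (function name and units are illustrative, not taken from the paper):

```python
import numpy as np

def rms_delay_spread(delays_ns, powers):
    """RMS delay spread of a power delay profile.

    delays_ns : arrival delay of each multipath component (ns)
    powers    : received power of each component (linear units)
    """
    t = np.asarray(delays_ns, dtype=float)
    p = np.asarray(powers, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)        # power-weighted mean delay
    second_moment = np.sum(p * t**2) / np.sum(p)  # power-weighted second moment
    return np.sqrt(second_moment - mean_delay**2)

# Two equal-power paths 100 ns apart: the spread is half the separation.
print(rms_delay_spread([0.0, 100.0], [1.0, 1.0]))  # 50.0
```

Because each path's contribution is weighted by its power, errors in the predicted powers of strong paths dominate the delay-spread error, which is why the geometry and material databases matter.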
Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups
ERIC Educational Resources Information Center
Casas, Lluís; Estop, Eugènia
2015-01-01
Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to the study of point symmetry. The use of 3D printing to…
The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models
NASA Astrophysics Data System (ADS)
Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain
2014-05-01
The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated in the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the Thur valley. Its strategic position was one of the causes of its repeated destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower about 7 m in diameter and 4 m wide lying on its side, a feature unique in the regional castle landscape. Visible from the valley, it was named "the Eye of the Witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to enhance the vestiges. A key objective among the many planned works was to produce a 3D model of the site in its current state, in other words a virtual "as-built" model, exploitable from a cultural and tourist point of view as well as by scientists for archaeological research. The ICube/INSA lab team was responsible for producing this model, from data acquisition to delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate 2D archive data, stemming from series of former excavations, into this 3D model. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration into the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site. The results obtained allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail
NASA Astrophysics Data System (ADS)
Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen
2016-06-01
Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3 x 3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.
NASA Astrophysics Data System (ADS)
Lague, D.; Brodu, N.; Leroux, J.
2012-12-01
Ground-based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at an unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, ...). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The method most commonly used in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is hardly adapted to many 3D natural environments such as rivers (with horizontal beds and vertical banks), while gridding complex rough surfaces is a complex task. On the other hand, tools for performing 3D comparisons are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects and instrument-related position uncertainties. To unlock this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when mm-scale changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
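Step (2) of an M3C2-style comparison, the mean change along the local normal together with its confidence interval, can be sketched as follows. This is a simplified stand-in: it takes the normal-direction projections of the two neighbourhoods as given and uses the standard two-sample error formula, which captures how roughness (local variance), point density and registration error all enter the interval.

```python
import numpy as np

def m3c2_distance(proj1, proj2, reg_error=0.0):
    """Surface change along the local normal at one core point.

    proj1, proj2 : projections of the neighbouring points from cloud 1 and
                   cloud 2 onto the local normal (one cylinder of points each)
    reg_error    : co-registration error added to the confidence interval
    Returns (mean change, 95% confidence interval).
    """
    p1 = np.asarray(proj1, dtype=float)
    p2 = np.asarray(proj2, dtype=float)
    change = p2.mean() - p1.mean()
    # local roughness enters through the standard errors of both means
    ci = 1.96 * np.sqrt(p1.var(ddof=1) / p1.size +
                        p2.var(ddof=1) / p2.size) + reg_error
    return change, ci
```

A change is then flagged as statistically significant only where |change| exceeds the local interval, which is what makes the confidence estimate spatially variable.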
Filtering method for 3D laser scanning point cloud
NASA Astrophysics Data System (ADS)
Liu, Da; Wang, Li; Hao, Yuncai; Zhang, Jun
2015-10-01
In recent years, with the rapid development of hardware and software for three-dimensional model acquisition, three-dimensional laser scanning technology has been utilized in many areas, especially in space exploration. Filtering of the point cloud is very important before the data are used. In this paper, considering both processing quality and computing speed, an improved mean-shift point cloud filtering method is proposed. Firstly, by analyzing the similarity of the normal vectors between the point being processed and its nearby points, the iterative neighborhood of the mean shift is selected dynamically, and high-frequency noise is thereby constrained. Secondly, the normal vector of the point being processed is updated accordingly. Finally, an updated position is calculated for each point, and each point is moved along its normal vector to the updated position. The experimental results show that large features are retained and, at the same time, small sharp features are preserved for objects of different sizes and shapes, so the target feature information is protected precisely. The computational complexity of the proposed method is not high; it yields high-precision results at fast speed, so it is well suited to space applications. It can also be used in civil applications such as large-object measurement, industrial measurement, and car navigation. In future work, filtering with the help of point intensity will be further exploited.
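The core idea, restricting the neighbourhood to points with similar normals and then shifting the point only along its own normal, can be sketched for a single point as below. All names, the Gaussian weight and the dot-product threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def filter_point(p, n, neighbors, neighbor_normals, sigma=0.05, min_dot=0.9):
    """One normal-constrained mean-shift step for a single point.

    Neighbours whose normals disagree with n (dot product below min_dot)
    are dropped, which keeps sharp features; the point is then moved only
    along its own normal towards the weighted local surface.
    """
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    shift, wsum = 0.0, 0.0
    for q, nq in zip(neighbors, neighbor_normals):
        q = np.asarray(q, dtype=float)
        nq = np.asarray(nq, dtype=float)
        if np.dot(n, nq) < min_dot:        # exclude points across an edge
            continue
        d = q - p
        w = np.exp(-np.dot(d, d) / (2.0 * sigma**2))   # spatial Gaussian weight
        shift += w * np.dot(d, n)          # offset measured along the normal
        wsum += w
    return p if wsum == 0 else p + (shift / wsum) * n
```

Moving only along the normal smooths noise perpendicular to the surface without letting points slide tangentially, which is what preserves feature geometry.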
Numerical 3D models support two distinct hydrothermal circulation systems at fast spreading ridges
NASA Astrophysics Data System (ADS)
Hasenclever, Jörg; Theissen-Krah, Sonja; Rüpke, Lars
2013-04-01
We present 3D numerical calculations of hydrothermal fluid flow at fast spreading ridges. The setup of the 3D models is based on our previous 2D studies, in which we have coupled numerical models for crustal accretion and hydrothermal fluid flow. One result of these calculations is a crustal permeability field that leads to a thermal structure in the crust that matches seismic tomography data of the East Pacific Rise (EPR). The 1000°C isotherm obtained from the 2D results is now used as the lower boundary of the 3D model domain, while the upper boundary is a smoothed bathymetry of the EPR. The same permeability field as in the 2D models is used, with the highest permeability at the ridge axis and a decrease with both depth and distance to the ridge. Permeability is also reduced linearly between 600 and 1000°C. Using a newly developed parallel finite element code written in Matlab that solves for thermal evolution, fluid pressure and Darcy flow, we simulate the flow patterns of hydrothermal circulation in a segment of 5000m along-axis, 10000m across-axis and up to 5000m depth. We observe two distinct hydrothermal circulation systems: an on-axis system forming a series of vents with a spacing ranging from 100 to 500m that is recharged by nearby (100-200m) downflows on both sides of the ridge axis. Simultaneously a second system with much broader extent, both laterally and vertically, exists off-axis. It is recharged by fluids intruding between 1500m and 5000m off-axis and sampling both upper and lower crust. These fluids are channeled in the deepest and hottest regions with high permeability and migrate up-slope following the 600°C isotherm until reaching the edge of the melt lens. Depending on the width of the melt lens these off-axis fluids either merge with the on-axis hydrothermal system or form separate vents. We observe separate off-axis vent fields if the magma lens half-width exceeds 1000m and confluence of both systems for half-widths smaller than 500m. For
Point spread function engineering with multiphoton SPIFI
NASA Astrophysics Data System (ADS)
Wernsing, Keith A.; Field, Jeffrey J.; Domingue, Scott R.; Allende-Motz, Alyssa M.; DeLuca, Keith F.; Levi, Dean H.; DeLuca, Jennifer G.; Young, Michael D.; Squier, Jeff A.; Bartels, Randy A.
2016-03-01
MultiPhoton SPatIal Frequency modulated Imaging (MP-SPIFI) has recently demonstrated the ability to simultaneously obtain super-resolved images in both coherent and incoherent scattering processes -- namely, second harmonic generation and two-photon fluorescence, respectively.1 In our previous analysis, we considered image formation produced by the zero and first diffracted orders from the SPIFI modulator. However, the modulator is a binary amplitude mask, and therefore produces multiple diffracted orders. In this work, we extend our analysis to image formation in the presence of higher diffracted orders. We find that tuning the mask duty cycle offers a measure of control over the shape of super-resolved point spread functions in an MP-SPIFI microscope.
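The dependence of the diffracted-order strengths on the mask duty cycle follows from the Fourier series of a binary amplitude grating. A small sketch of that textbook relation (the function below is illustrative, not code from the MP-SPIFI work):

```python
import numpy as np

def order_amplitude(n, duty):
    """Amplitude of the n-th diffracted order of a binary amplitude grating.

    For a transmission that is 1 over a fraction `duty` of each period and
    0 elsewhere, the Fourier coefficients are c_0 = duty and
    |c_n| = |sin(pi * n * duty)| / (pi * n) for n != 0.
    """
    if n == 0:
        return duty
    return abs(np.sin(np.pi * n * duty)) / (np.pi * abs(n))
```

With a 50% duty cycle the even orders vanish (`order_amplitude(2, 0.5)` is zero up to floating point), so tuning the duty cycle redistributes energy among the orders and thereby reshapes the effective point spread function.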
Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera
NASA Astrophysics Data System (ADS)
Kim, H.; Yoon, W.; Kim, T.
2016-06-01
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by using the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was a SR4000 from MESA Imaging. This camera generates a depth map and intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision. The generated intensity map contains texture data with substantial noise. We used the intensity maps for extracting tiepoints, and the depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
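The 3D similarity transform of the second step has a closed-form least-squares solution (Umeyama's SVD method). A sketch under the assumption of already-matched 3D tiepoints; the function name is illustrative:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity transform (scale s, rotation R, shift t)
    mapping tiepoints src -> dst, via Umeyama's SVD solution."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                 # centred tiepoints
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))  # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                 # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A**2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t                                # apply as: s * R @ p + t
```

Composing the pairwise (s, R, t) estimates in sequence then yields the global transform of the third step, each cloud being chained back to the reference one.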
Deep Herschel PACS point spread functions
NASA Astrophysics Data System (ADS)
Bocchio, M.; Bianchi, S.; Abergel, A.
2016-06-01
The knowledge of the point spread function (PSF) of imaging instruments represents a fundamental requirement for astronomical observations. The Herschel PACS PSFs delivered by the instrument control centre are obtained from observations of the Vesta asteroid, which provides a characterisation of the central part and, therefore, excludes fainter features. In many cases, however, information on both the core and wings of the PSFs is needed. With this aim, we combine Vesta and Mars dedicated observations and obtain PACS PSFs with an unprecedented dynamic range (~10^6) at slow and fast scan speeds for the three photometric bands. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. FITS files of our PACS PSFs (Fig. 2) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A117
NASA Astrophysics Data System (ADS)
Lee, Jin su; Lee, Mu young; Kim, Jun oh; Kim, Cheol joong; Won, Yong Hyub
2015-03-01
Generally, volumetric 3D displays produce volume-filling three-dimensional images. This paper discusses a volumetric 3D display based on the construction of periodic point light sources (PLSs) using a multi-focal lens array (MFLA). The voxels of the discrete 3D images are formed in the air via the point light sources produced by the MFLA. The system consists of a parallel beam, a spatial light modulator (SLM), a lens array, and a polarizing filter. The multi-focal lens array is made by controlling UV adhesive polymer droplets with a dispensing machine. The MFLA consists of a 20 x 20 circular lens array, and each lens aperture of the MFLA measures 300 um on average. The polarizing filter is placed after the SLM and the MFLA to set the system mostly in phase mode. By the point spread function, the PLSs of the system are located at the focal length of each lens of the MFLA. The display can also provide motion parallax and relatively high resolution; however, the properties of the individual lenses limit the viewing angle and introduce crosstalk. In our experiment, we present the letters `C', `O', and `DE' and a ball's surface at different depth locations. These could be seen clearly as a CCD camera was moved along the transverse axis of the display system. From our results, we expect that varifocal lenses such as EWOD and LC lenses can be applied to real-time volumetric 3D display systems.
A Multiscale Constraints Method Localization of 3D Facial Feature Points
Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin
2015-01-01
It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244
Bacteria Experiment May Point Way to Slow Zika's Spread
Infecting mosquitoes led to lower, inactive levels ... bacteria may help curb the spread of the Zika virus. The researchers got the idea after a ...
Fast Probabilistic Fusion of 3d Point Clouds via Occupancy Grids for Scene Classification
NASA Astrophysics Data System (ADS)
Kuhn, Andreas; Huang, Hai; Drauschke, Martin; Mayer, Helmut
2016-06-01
High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.
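A flat-grid stand-in for the octree-based fusion conveys the idea: points sharing a voxel are merged, and the number of supporting measurements yields a per-point belief. The voxel size, the belief formula and all names below are illustrative assumptions, not the paper's actual model.

```python
from collections import defaultdict

def fuse(points, voxel=0.1, prior=3):
    """Fuse a dense point cloud on a flat occupancy grid.

    Points falling into the same voxel are merged into one output point;
    the number of supporting measurements gives a per-point belief, so
    sparsely supported voxels can be treated as probable outliers by a
    downstream classifier instead of being discarded outright.
    """
    grid = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        grid[key].append((x, y, z))
    fused = []
    for pts in grid.values():
        n = len(pts)
        centroid = tuple(sum(c) / n for c in zip(*pts))
        fused.append((centroid, n / (n + prior)))  # crude support-based belief
    return fused
```

Keeping low-belief points, rather than filtering them, is what lets the semantic classifier weigh measurement noise explicitly.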
Mobile viewer system for virtual 3D space using infrared LED point markers and camera
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-09-01
The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air, and observers can touch and interact with these floating images, much as children play with modelling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and the results of our viewer system.
Edge features extraction from 3D laser point cloud based on corresponding images
NASA Astrophysics Data System (ADS)
Li, Xin-feng; Zhao, Zi-ming; Xu, Guo-qing; Geng, Yan-long
2013-09-01
An extraction method for edge features from 3D laser point clouds based on corresponding images is proposed. After registration of the point cloud and the corresponding image, sub-pixel edges are extracted from the image using a gray-moment algorithm. The sub-pixel edges are then projected onto the point cloud by fitting scan lines. Finally, the edge features are obtained by linking the crossing points. The experimental results demonstrate that the method guarantees accurate, fine extraction.
3D campus modeling using LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Yoshii, Satoshi; Funatsu, Yukihiro; Takemata, Kazuya
2012-10-01
The importance of having a 3D urban city model is recognized in many applications, such as risk and disaster management and city planning and development. As an example urban model, we manually reconstructed the 3D KIT campus in this study by utilizing airborne LiDAR point cloud data. The automatic extraction of building shapes was left for future work.
Human Body 3D Posture Estimation Using Significant Points and Two Cameras
Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin
2014-01-01
This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
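The shadow-robust part of the segmentation features can be illustrated as follows. This does not reproduce the paper's exact feature set; it only shows why an included-angle term resists shadows: a shadow rescales a pixel's RGB vector but barely rotates it, so the angle to the background stays near zero while raw differences grow.

```python
import numpy as np

def segmentation_features(pixel, background):
    """Per-pixel features: normalised colour differences plus the included
    angle between the pixel's RGB vector and the background's.  The angle
    is insensitive to shadows, which scale brightness but not colour
    direction.  (Illustrative sketch, not the paper's exact features.)"""
    p = np.asarray(pixel, dtype=float)
    b = np.asarray(background, dtype=float)
    diff = (p - b) / (p + b + 1e-9)                     # normalised differences
    cosang = p @ b / (np.linalg.norm(p) * np.linalg.norm(b) + 1e-9)
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))       # included angle (rad)
    return np.append(diff, angle)
```

A linear SVM trained on such features can then separate true foreground (large angle) from shadowed background (small angle, large brightness difference).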
Nonrigid point registration for 2D curves and 3D surfaces and its various applications
NASA Astrophysics Data System (ADS)
Wang, Hesheng; Fei, Baowei
2013-06-01
A nonrigid B-spline-based point-matching (BPM) method is proposed to match dense surface points. The method solves both the point correspondence and nonrigid transformation without features extraction. The registration method integrates a motion model, which combines a global transformation and a B-spline-based local deformation, into a robust point-matching framework. The point correspondence and deformable transformation are estimated simultaneously by fuzzy correspondence and by a deterministic annealing technique. Prior information about global translation, rotation and scaling is incorporated into the optimization. A local B-spline motion model decreases the degrees of freedom for optimization and thus enables the registration of a larger number of feature points. The performance of the BPM method has been demonstrated and validated using synthesized 2D and 3D data, mouse MRI and micro-CT images. The proposed BPM method can be used to register feature point sets, 2D curves, 3D surfaces and various image data.
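The fuzzy correspondence with deterministic annealing at the heart of such robust point matching can be sketched with a generic softassign-style routine. This is an illustrative stand-in, not the BPM authors' code; the temperature schedule and iteration counts are arbitrary.

```python
import numpy as np

def fuzzy_correspondence(x, y, temperature, sinkhorn_iters=20):
    """Soft correspondence matrix between point sets x (n x d) and y (m x d).

    Entries start as Gaussians of the pairwise distances at the current
    annealing temperature; alternating row/column normalisation then pushes
    the matrix towards a doubly stochastic (soft) assignment.  Lowering the
    temperature over the outer iterations hardens it towards one-to-one.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    m = np.exp(-d2 / (2.0 * temperature))
    for _ in range(sinkhorn_iters):
        m /= m.sum(axis=1, keepdims=True)   # normalise rows
        m /= m.sum(axis=0, keepdims=True)   # normalise columns
    return m
```

In the full algorithm this correspondence estimate alternates with re-fitting the global plus B-spline transformation, the two being jointly annealed.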
NASA Astrophysics Data System (ADS)
Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard
2010-04-01
The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
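The core of any ICP variant alternates nearest-neighbour matching with a closed-form rigid transform. A minimal illustrative version of one such iteration (not the authors' implementation; real pipelines use a k-d tree and iterate to convergence):

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour matching, then the closed-form
    (SVD/Kabsch) rigid transform best aligning the matched pairs."""
    # brute-force nearest neighbours (a k-d tree would be used in practice)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)   # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[2] *= -1.0
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t                             # apply as: R @ p + t
```

Repeating this step, re-matching after each transform, converges to the rotation and translation between consecutive scans; chaining those along the drive yields the vehicle trajectory used to stitch the terrain map.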
Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision
NASA Astrophysics Data System (ADS)
Diskin, Yakov; Asari, Vijayan K.
2012-10-01
Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the distance a feature has traveled within the frame, in pixels, into real-world depth values. These tracked feature points are then plotted to form a dense and colorful point cloud. Due to inevitable small vibrations of the camera and mismatches within the feature tracking algorithm, the point cloud model contains a significant number of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noisy points not associated with any nearby object. The noise filter combines all points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques we are able to eliminate the unwanted floating points while retaining points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original positions without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
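The per-layer erosion/dilation step can be sketched with a NumPy-only morphological opening of one binary depth layer. The 3x3 structuring element and all names are illustrative; the paper does not specify these details.

```python
import numpy as np

def _shifted_stack(img):
    """All nine 3x3-neighbourhood shifts of a zero-padded binary image."""
    p = np.pad(img, 1)
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)])

def open_layer(occ):
    """Morphological opening (erosion, then dilation) of one binary depth
    layer: isolated floating points vanish while larger blobs survive."""
    eroded = _shifted_stack(occ).min(axis=0)    # pixel kept only if all 9 set
    return _shifted_stack(eroded).max(axis=0)   # grow survivors back out
```

Applying this to each depth layer, then lifting the surviving pixels back to their original 3D positions, removes floating noise without eroding the interiors of real objects.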
Non-Iterative Rigid 2D/3D Point-Set Registration Using Semidefinite Programming
NASA Astrophysics Data System (ADS)
Khoo, Yuehaw; Kapoor, Ankur
2016-07-01
We describe a convex programming framework for pose estimation in 2D/3D point-set registration with unknown point correspondences. We give two mixed-integer nonlinear program (MINP) formulations of the 2D/3D registration problem when there are multiple 2D images, and propose convex relaxations of both MINPs to semidefinite programs (SDPs) that can be solved efficiently by interior point methods. Our approach to the 2D/3D registration problem is non-iterative in nature, as we jointly solve for pose and correspondence. Furthermore, these convex programs can readily incorporate feature descriptors of points to enhance registration results. We prove that the convex programs exactly recover the solution to the original nonconvex 2D/3D registration problem under noiseless conditions. We apply these formulations to the registration of 3D models of coronary vessels to their 2D projections obtained from multiple intra-operative fluoroscopic images. For this application, we experimentally corroborate the exact recovery property in the absence of noise and further demonstrate the robustness of the convex programs in the presence of noise.
Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods
NASA Astrophysics Data System (ADS)
Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.
2015-03-01
The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, their connection, by sharing the results of crucial tasks across all components, has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
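The "individually optimized 3D neighborhoods" and geometric features of steps (i) and (ii) can be illustrated with the standard eigenvalue-based measures. This sketch is ours, not the authors' implementation: the candidate neighborhood sizes in `ks` are assumptions, and eigenentropy minimization is one published heuristic for per-point neighborhood selection.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigenvalue_features(neighbors):
    """Linearity, planarity and scattering derived from the eigenvalues
    of the local 3D structure tensor (covariance of the neighborhood)."""
    ev = np.sort(np.linalg.eigvalsh(np.cov(neighbors.T)))[::-1]
    l1, l2, l3 = ev / ev.sum()           # l1 >= l2 >= l3
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

def optimal_k(points, tree, idx, ks=(10, 20, 40, 80)):
    """Pick the neighborhood size k that minimizes the eigenentropy,
    one heuristic for individually optimized neighborhoods."""
    best_k, best_h = None, np.inf
    for k in ks:
        _, nn = tree.query(points[idx], k=k)
        ev = np.linalg.eigvalsh(np.cov(points[nn].T))
        ev = np.clip(ev, 1e-12, None)
        ev = ev / ev.sum()
        h = -(ev * np.log(ev)).sum()     # eigenentropy
        if h < best_h:
            best_k, best_h = k, h
    return best_k
```

Points sampled from a wire score high on linearity, facades on planarity, and vegetation on scattering; such features feed the pointwise classifier.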
High-numerical-aperture microscopy with a rotating point spread function.
Yu, Zhixian; Prasad, Sudhakar
2016-07-01
Rotating point spread function (PSF) microscopy via spiral phase engineering can localize point sources over large focal depths in a snapshot mode. The present work gives an approximate vector-field analysis of an improved rotating PSF design that encodes both the 3D location and polarization state of a monochromatic point dipole emitter for high-numerical-aperture microscopy. By examining the angle of rotation and the spatial form of the PSF, one can jointly localize point sources and determine the polarization state of light emitted by them over a 3D field in a single snapshot. Results of numerical simulations of noisy data frames under Poisson shot noise conditions and the errors in the recovery of 3D location and dipole orientation for a single point source are discussed. PMID:27409707
3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models
NASA Astrophysics Data System (ADS)
Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.
2013-07-01
Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models, bridging the gap between information users and information providers with regard to the information that both share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.
Database guided detection of anatomical landmark points in 3D images of the heart
NASA Astrophysics Data System (ADS)
Karavides, Thomas; Esther Leung, K. Y.; Paclik, Pavel; Hendriks, Emile A.; Bosch, Johan G.
2010-03-01
Automated landmark detection may prove invaluable in the analysis of real-time three-dimensional (3D) echocardiograms. By detecting 3D anatomical landmark points, the standard anatomical views can be extracted automatically in apically acquired 3D ultrasound images of the left ventricle, for better standardization of visualization and more objective diagnosis. Furthermore, the landmarks can serve as an initialization for other analysis methods, such as segmentation. The described algorithm applies landmark detection in perpendicular planes of the 3D dataset. The landmark detection exploits a large database of expert-annotated images, using an extensive set of Haar features for fast classification. The detection is performed using two cascades of AdaBoost classifiers in a coarse-to-fine scheme. The method is evaluated by measuring the distance between detected and manually indicated landmark points in 25 patients. The method can detect landmarks accurately in the four-chamber view (apex: 7.9+/-7.1 mm, septal mitral valve point: 5.6+/-2.7 mm, lateral mitral valve point: 4.0+/-2.6 mm) and the two-chamber view (apex: 7.1+/-6.7 mm, anterior mitral valve point: 5.8+/-3.5 mm, inferior mitral valve point: 4.5+/-3.1 mm). The results compare well to those reported by others.
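Haar features are fast to evaluate because each reduces to a handful of lookups in an integral image, which is what makes the cascaded classification above practical. A minimal sketch (our illustration, not the paper's implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a zero row/column for O(1) lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] from four integral-image lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar feature: left half minus right half."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)
```

Thousands of such features at varying positions and scales can then be scored per window, with AdaBoost selecting the discriminative ones.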
Melting points and chemical bonding properties of 3d transition metal elements
NASA Astrophysics Data System (ADS)
Takahara, Wataru
2014-08-01
The melting points of the 3d transition metal elements show an unusual local minimum at manganese across Period 4 of the periodic table. The chemical bonding properties of scandium, titanium, vanadium, chromium, manganese, iron, cobalt, nickel and copper are investigated by the DV-Xα cluster method. The melting points are found to correlate with the bond overlap populations. The chemical bonding nature therefore appears to be the primary factor governing the melting points.
3-D Printers Spread from Engineering Departments to Designs across Disciplines
ERIC Educational Resources Information Center
Chen, Angela
2012-01-01
The ability to print a 3-D object may sound like science fiction, but it has been around in some form since the 1980s. Also called rapid prototyping or additive manufacturing, the idea is to take a design from a computer file and forge it into an object, often in flat cross-sections that can be assembled into a larger whole. While the printer on…
Dense 3d Point Cloud Generation from Uav Images from Image Matching and Global Optimization
NASA Astrophysics Data System (ADS)
Rhee, S.; Kim, T.
2016-06-01
3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based technique and an image space-based technique, and compared their performance. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) for image matching, defining the local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the case of the image space-based matching results, we observed some blanks in the 3D point clouds. In the case of the object space-based matching results, we observed more blunders than with image space-based matching, and noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing; we will further test our approach with more precise orientation parameters.
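The object space-based matching step, scanning candidate heights for a fixed horizontal position and scoring each by grey-level correlation, can be sketched as below. The patch-accessor functions are hypothetical placeholders for the photogrammetric orientation (height-to-pixel projection) that the paper assumes is known:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def best_height(heights, patch_left, patch_right):
    """Object-space matching for one horizontal position: score each
    candidate height by the correlation of the image patches around
    its two projections, and keep the best-scoring height."""
    scores = [ncc(patch_left(h), patch_right(h)) for h in heights]
    return heights[int(np.argmax(scores))]
```

At the true height, the two projections land on the same scene texture, so the correlation peaks there.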
Laser point cloud diluting and refined 3D reconstruction fusing with digital images
NASA Astrophysics Data System (ADS)
Liu, Jie; Zhang, Jianqing
2007-06-01
This paper presents a method that combines image-based modeling techniques and laser scanning data to rebuild a realistic 3D model. First, an image pair is used to build a relative 3D model of the object, which is then registered to the laser coordinate system. The laser points are projected onto one of the images, and feature lines are extracted from that image. The projected 2D laser points are then fitted to lines in the image, and their corresponding 3D points are constrained to lines in the 3D laser space to preserve the features of the model. A TIN is built and redundant points, which do not affect the curvature of their neighborhood, are removed. The thinned laser point cloud is used to reconstruct the geometric model of the object, onto which the texture of the corresponding image is projected. Experimental results show the process to be feasible and effective; the final model closely resembles the real object. This method reduces the quantity of data while preserving the features of the model, and its effect is evident.
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-cost, highly efficient image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct a 3D geometric scene has a promising market for future commercial use such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well on test data. The robustness of the method is examined by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result containing a larger offset than that of the test data, because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a
Graph-Based Compression of Dynamic 3D Point Cloud Sequences.
Thanou, Dorina; Chou, Philip A; Frossard, Pascal
2016-04-01
This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way. PMID:26891486
Dense point-cloud creation using superresolution for a monocular 3D reconstruction system
NASA Astrophysics Data System (ADS)
Diskin, Yakov; Asari, Vijayan K.
2012-05-01
We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm focuses on the 3D reconstruction of a scene using only a single moving camera, so that the system can construct a point cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution: as feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of a nonlinear super-resolution preprocessing step, the accuracy of the point cloud, which relies on precise disparity measurement, has increased significantly. Using a pixel-by-pixel approach, the super-resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high-resolution input frames. Thus, a feature point travels a more precisely resolved discrete disparity. The quantity of points within the 3D point cloud model also increases significantly, since the number of features is directly proportional to the resolution and high-frequency content of the input images. The contribution of the newly added preprocessing step is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.
NASA Astrophysics Data System (ADS)
Dahlke, D.; Linkiewicz, M.
2016-06-01
This paper compares two generic approaches to the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed, on the one hand, into a dense photogrammetric 3D point cloud and, on the other hand, into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other makes use of salient line segments in the 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more 3D points analyzed, the point-cloud-based approach is an order of magnitude more accurate on the synthetic dataset than the lower-dimensioned, but therefore orders of magnitude faster, image-processing-based approach. For real-world data, the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent, more differentiated semantic annotation through exploitation of texture information.
Feature relevance assessment for the semantic interpretation of 3D point cloud data
NASA Astrophysics Data System (ADS)
Weinmann, M.; Jutzi, B.; Mallet, C.
2013-10-01
The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, because of its efficient scene scanning and spatial information collection. Using point clouds, with their sparse, noisy, and incomplete sampling, as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds are encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted via spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority
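The first encoding step above, generating a top-view depth image of the roof surface, can be sketched as a max-height rasterization. This is an illustrative reconstruction, not the paper's code; the cell size is an assumed parameter:

```python
import numpy as np

def top_view_depth_image(points, cell=0.5):
    """Rasterize an airborne point cloud into a top-view depth image:
    each cell keeps the maximum height falling into it (the roof
    surface occludes lower structure when seen from above)."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    img = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
    np.maximum.at(img, (ix, iy), z)   # max height per cell
    img[np.isinf(img)] = np.nan       # cells with no points
    return img
```

The same rasterization can be applied to sampled points of a database model, so queries and models end up in a common image representation.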
Facets: a CloudCompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.
2016-06-01
Geological planar facets (stratification, faults, joints…) are key features for unraveling the tectonic history of a rock outcrop or appreciating the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. A convenient software environment for efficiently segmenting massive 3D point clouds into individual planar facets was lacking, however. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate dip and dip direction (i.e. the azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively into polygons according to a planarity threshold. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles to third-party GIS software, or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals using the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
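Once a facet's plane normal is known, its dip and dip direction follow from elementary geometry. A minimal sketch (ours, not the plugin's code), assuming north is +y and azimuths are measured clockwise:

```python
import numpy as np

def dip_and_dip_direction(normal):
    """Dip (angle from horizontal) and dip direction (azimuth of the
    steepest-descent line, clockwise from north = +y) of a plane with
    the given normal vector."""
    nx, ny, nz = normal / np.linalg.norm(normal)
    if nz < 0:                       # orient the normal upward
        nx, ny, nz = -nx, -ny, -nz
    dip = np.degrees(np.arccos(nz))
    # the horizontal trace of steepest descent points along (nx, ny)
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip, dip_dir
```

Plotting each facet's pole from these angles gives the interactive stereogram view the abstract describes.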
Interactive Cosmetic Makeup of a 3D Point-Based Face Model
NASA Astrophysics Data System (ADS)
Kim, Jeong-Sik; Choi, Soo-Mi
We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.
3D multiple-point statistics simulation using 2D training images
NASA Astrophysics Data System (ADS)
Comunian, A.; Renard, P.; Straubhaar, J.
2012-03-01
One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from two-dimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the s2Dcd method is two to four orders of magnitude smaller than that required by an MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency, and the possibility of using MPS for 3D simulation without the need for a 3D training image, facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problem frameworks.
Pre-Processing of Point-Data from Contact and Optical 3D Digitization Sensors
Budak, Igor; Vukelić, Djordje; Bračun, Drago; Hodolič, Janko; Soković, Mirko
2012-01-01
Contemporary 3D digitization systems employed in reverse engineering (RE) feature ever-growing scanning speeds, with the ability to generate a large quantity of points in a unit of time. Although advantageous for the quality and efficiency of RE modelling, the huge number of points can become a serious practical problem later on, when the CAD model is generated. In addition, 3D digitization processes are very often plagued by measuring errors, attributable to the very nature of measuring systems, to various characteristics of the digitized objects, and to subjective errors by the operator, which also cause problems in the CAD model generation process. This paper presents an integral system for the pre-processing of point data, i.e., filtering, smoothing and reduction, based on a cross-sectional RE approach. In the course of the proposed system's development, major emphasis was placed on the module for point data reduction, which was designed according to a novel approach integrating deviation analysis and fuzzy logic reasoning. The developed system was verified through its application in three case studies, on point data from objects of diverse geometries obtained by contact and laser 3D digitization systems. The obtained results demonstrate the effectiveness of the system. PMID:22368513
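For the cross-sectional reduction idea, keeping only points whose deviation affects the section's shape, a Douglas-Peucker-style polyline simplification is a reasonable stand-in; the paper's actual module couples deviation analysis with fuzzy-logic reasoning, which is not reproduced here:

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance of 2D point p from the line through a and b."""
    d = b - a
    n = np.linalg.norm(d)
    if n == 0:
        return np.linalg.norm(p - a)
    return abs(d[0] * (p - a)[1] - d[1] * (p - a)[0]) / n

def simplify_section(pts, tol):
    """Recursively drop cross-section points whose deviation from the
    chord stays below tol (Douglas-Peucker)."""
    if len(pts) < 3:
        return np.asarray(pts)
    dists = [point_line_dist(p, pts[0], pts[-1]) for p in pts[1:-1]]
    i = int(np.argmax(dists)) + 1          # most deviating interior point
    if dists[i - 1] <= tol:                # whole span is nearly straight
        return np.array([pts[0], pts[-1]])
    left = simplify_section(pts[: i + 1], tol)
    right = simplify_section(pts[i:], tol)
    return np.vstack([left[:-1], right])   # avoid duplicating the pivot
```

Nearly straight runs collapse to their endpoints, while corner points, which carry the section's shape, survive the reduction.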
Phase-Scrambler Plate Spreads Point Image
NASA Technical Reports Server (NTRS)
Edwards, Oliver J.; Arild, Tor
1992-01-01
Array of small prisms retrofit to imaging lens. Phase-scrambler plate essentially planar array of small prisms partitioning aperture of lens into many subapertures, and prism at each subaperture designed to divert relatively large diffraction spot formed by that subaperture to different, specific point on focal plane.
Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge
NASA Astrophysics Data System (ADS)
Gressin, Adrien; Mallet, Clément; Demantké, Jérôme; David, Nicolas
2013-05-01
Automatic 3D point cloud registration is a central issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming a good a priori alignment is provided. A large body of literature has proposed many variations to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and accuracy of each of these steps. First we present the geometrical features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They allow retrieval of the optimal size for analyzing the neighborhoods at various scales, as well as the privileged local dimension (linear, planar, or volumetric). Several variations of each ICP step are then proposed and analyzed by introducing these features. The variants are compared on real datasets with the original algorithm in order to identify the most efficient algorithm for the whole process. The method is then successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvement in two ICP steps has been noted, and we conclude that our features may not be relevant for very dissimilar object samplings.
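The select/match/minimize loop of standard point-to-point ICP, the baseline that all of these variants modify, can be sketched in a few lines using a k-d tree for matching and the SVD (Kabsch) solution for minimization. This is a generic textbook sketch, not the paper's variant:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    given known correspondences (Kabsch / SVD solution)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: match each source point to its nearest target
    point (select/match), then solve for the rigid motion (minimize)."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nn = tree.query(cur)            # nearest-neighbor matching
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    return cur
```

The paper's rejecting and weighting steps would filter or reweight the `nn` matches, e.g. using the local dimensionality features, before the minimization.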
3D change detection at street level using mobile laser scanning point clouds and terrestrial images
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Gruen, Armin
2014-04-01
Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers. Mobile laser scanning (MLS) data acquired at different epochs provides accurate 3D geometry for change detection, but is very expensive for periodic acquisition. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks possible changes in each view, providing a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images are taken at a later epoch and registered to the point cloud; the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical
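The projection step, point clouds projected onto each image with hidden points suppressed, can be illustrated with a plain per-pixel z-buffer. This simplification drops the paper's weighted-window refinement and assumes points already in camera coordinates with a pinhole intrinsic matrix `K`:

```python
import numpy as np

def zbuffer_project(points_cam, K, img_shape):
    """Project 3D points (already in camera coordinates) through a
    pinhole intrinsic matrix K; keep only the nearest point per pixel."""
    H, W = img_shape
    depth = np.full((H, W), np.inf)   # per-pixel nearest depth
    index = np.full((H, W), -1)       # which point is visible there
    uvw = (K @ points_cam.T).T        # homogeneous image coordinates
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = points_cam[:, 2]
    for i in range(len(z)):
        if z[i] > 0 and 0 <= u[i] < W and 0 <= v[i] < H and z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]
            index[v[i], u[i]] = i
    return depth, index
```

The resulting depth and index maps give, per image, the reference geometry against which the later-epoch photographs can be compared.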
Accuracy evaluation of segmentation for high resolution imagery and 3D laser point cloud data
NASA Astrophysics Data System (ADS)
Ni, Nina; Chen, Ninghua; Chen, Jianyu
2014-09-01
High resolution satellite imagery and 3D laser point cloud data provide precise geometry, rich spectral information, and clear feature texture. The segmentation of high resolution remote sensing images and 3D laser point clouds is the basis of object-oriented remote sensing image analysis, since the segmentation results directly influence the accuracy of subsequent analysis and discrimination. Currently, there is still no common segmentation theory to support these algorithms. Therefore, when facing a specific problem, the applicability of a segmentation method should be determined through segmentation accuracy assessment, and an optimal segmentation selected accordingly. To date, the most common approaches for evaluating the effectiveness of a segmentation method are subjective evaluation and supervised evaluation. To provide a more objective evaluation, we carried out the following work. We analyzed and compared previously proposed image segmentation accuracy evaluation methods, namely area-based metrics, location-based metrics, and combined metrics. 3D point cloud data gathered by a Riegl VZ-1000 were used to derive a two-dimensional transformation of the point cloud. The object-oriented segmentation results for aquaculture farm, building, and farmland polygons were used as test objects to evaluate segmentation accuracy.
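As an illustration of the area-based metrics discussed above (my own sketch under one simple convention, not code from the paper), the following compares a candidate segment against a reference polygon, both rasterized to boolean masks, and reports over-segmentation, under-segmentation, and intersection-over-union:

```python
import numpy as np

def area_based_scores(segment: np.ndarray, reference: np.ndarray):
    """Area-based segmentation accuracy for two boolean masks of equal shape.

    Naming follows one common convention: over_seg is the fraction of the
    reference area the segment missed, under_seg the fraction of the segment
    spilling outside the reference.
    """
    inter = np.logical_and(segment, reference).sum()
    union = np.logical_or(segment, reference).sum()
    over_seg = 1.0 - inter / reference.sum()
    under_seg = 1.0 - inter / segment.sum()
    iou = inter / union
    return over_seg, under_seg, iou

# Toy example: a reference square vs. a candidate segment shifted by one row.
ref = np.zeros((20, 20), dtype=bool)
ref[5:15, 5:15] = True
seg = np.zeros_like(ref)
seg[6:16, 5:15] = True

over, under, iou = area_based_scores(seg, ref)
```

A perfect segmentation gives over = under = 0 and iou = 1; the one-row shift here costs 10% of the area on each side.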
3D Point Correspondence by Minimum Description Length in Feature Space.
Chen, Jiun-Hung; Zheng, Ke Colin; Shapiro, Linda G
2010-01-01
Finding point correspondences plays an important role in automatically building statistical shape models from a training set of 3D surfaces. For the point correspondence problem, Davies et al. [1] proposed a minimum-description-length-based objective function to balance the training errors and generalization ability. A recent evaluation study [2] that compares several well-known 3D point correspondence methods for modeling purposes shows that the MDL-based approach [1] is the best method. We adapt the MDL-based objective function for a feature space that can exploit nonlinear properties in point correspondences, and propose an efficient optimization method to minimize the objective function directly in the feature space, given that the inner product of any vector pair can be computed in the feature space. We further employ a Mercer kernel [3] to define the feature space implicitly. A key aspect of our proposed framework is the generalization of the MDL-based objective function to kernel principal component analysis (KPCA) [4] spaces and the design of a gradient-descent approach to minimize such an objective function. We compare the generalized MDL objective function on KPCA spaces with the original one and evaluate their abilities in terms of reconstruction errors and specificity. From our experimental results on different sets of 3D shapes of human body organs, the proposed method performs significantly better than the original method. PMID:25328917
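To make the feature-space machinery concrete, here is a minimal kernel PCA sketch (an illustration assuming a Gaussian Mercer kernel; it is not the authors' optimization code). Only the Gram matrix of inner products is needed, which is exactly the property that allows an MDL-style objective to be minimized directly in feature space:

```python
import numpy as np

def kernel_pca(X: np.ndarray, n_components: int, gamma: float = 1.0):
    """Project rows of X onto the leading components of a Gaussian-kernel
    feature space, using only inner products (the Gram matrix)."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # Gram matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # double-centering in feature space
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                    # projections of the training points

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
```

The projections come out mutually uncorrelated, as in ordinary PCA, but can capture nonlinear structure in the original point coordinates.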
Octree-Based SIMD Strategy for Icp Registration and Alignment of 3d Point Clouds
NASA Astrophysics Data System (ADS)
Eggert, D.; Dalyot, S.
2012-07-01
Matching and fusion of 3D point clouds, such as close-range laser scans, is important for creating an integrated 3D model data infrastructure. The Iterative Closest Point (ICP) algorithm for alignment of point clouds is one of the most commonly used algorithms for matching rigid bodies. Evidently, scans are acquired from different positions and might present different data characterization and accuracies, forcing complex data-handling issues. The growing demand for near real-time applications also introduces new computational requirements and constraints into such processes. This research proposes a methodology for solving the computational and processing complexities of the ICP algorithm by introducing specific performance enhancements that enable more efficient analysis and processing. An octree data structure, together with caching of localized Delaunay-triangulation-based surface meshes, is implemented to increase computational efficiency and data handling. Parallelization of the ICP process is carried out using the Single Instruction, Multiple Data (SIMD) processing scheme - based on the divide-and-conquer multi-branched paradigm - enabling the same operation to be performed on multiple data elements independently and simultaneously. When compared to traditional non-parallel list processing, the octree-based SIMD strategy showed a sharp increase in computational performance and efficiency, together with a reliable and accurate alignment of large 3D point clouds, contributing to a qualitative and efficient application.
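A minimal sketch of the ICP core being accelerated (my own illustration, not the paper's implementation): brute-force nearest-neighbor correspondences (the part an octree search would replace) plus the standard SVD-based rigid alignment step:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Plain ICP with brute-force nearest neighbors."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[np.argmin(d2, axis=1)]   # closest-point correspondences
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
dst = rng.uniform(size=(100, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.01, -0.02, 0.005])  # slightly perturbed copy
aligned = icp(src, dst)
```

The brute-force distance matrix is O(n²) per iteration; replacing it with an octree (or any spatial index) is what makes the method scale to millions of points.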
Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data
NASA Astrophysics Data System (ADS)
Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert
2014-05-01
A unique fossil oyster reef was excavated at Stetten in Lower Austria; it is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g., tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase-shift measuring principle, which provides an accurate geometric basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of objects with random orientations are found. The goal
Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
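The voxel quantization and lowermost-heightmap reduction described above can be sketched as follows (an illustrative reimplementation, not the authors' code; the voxel size is an arbitrary assumption):

```python
import numpy as np

def lowermost_heightmap(points: np.ndarray, voxel: float):
    """Quantize 3D points into voxels, drop duplicate voxels, and keep the
    lowest occupied voxel per (x, y) column - the 'lowermost heightmap' used
    as a cheap 2D proxy for the ground surface."""
    idx = np.floor(points / voxel).astype(np.int64)
    idx = np.unique(idx, axis=0)              # eliminate overlapping data
    heightmap = {}
    for x, y, z in idx:
        key = (int(x), int(y))
        if key not in heightmap or z < heightmap[key]:
            heightmap[key] = int(z)           # lowest voxel in this column
    return heightmap

pts = np.array([[0.1, 0.1, 0.9],   # two points in the same (x, y) column
                [0.2, 0.3, 0.1],
                [1.1, 0.1, 0.4]])
hm = lowermost_heightmap(pts, voxel=0.5)
```

Collapsing each column to its lowest voxel is what turns a sparse 3D cloud into a 2D structure that can be segmented in real time.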
Street curb recognition in 3d point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention. Non-manual curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm applied to the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable to both laser scanner and stereo vision 3D data due to its independence from the scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a
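The thresholded-raster plus morphological-operations stage can be illustrated with a self-contained sketch (hypothetical data; `np.roll`-based 3x3 erosion and dilation stand in for a full image processing library):

```python
import numpy as np

def binary_erode(img: np.ndarray) -> np.ndarray:
    """Erosion with a 3x3 structuring element, implemented with array shifts."""
    out = img.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def binary_dilate(img: np.ndarray) -> np.ndarray:
    """Dilation with a 3x3 structuring element."""
    out = img.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def opening(img: np.ndarray) -> np.ndarray:
    """Morphological opening: erosion then dilation, removing isolated pixels."""
    return binary_dilate(binary_erode(img))

# Rasterized height image thresholded into a curb-like band plus one noise pixel.
img = np.zeros((12, 12), dtype=bool)
img[4:8, 2:10] = True     # candidate curb band
img[0, 0] = True          # isolated noise pixel
cleaned = opening(img)
```

Opening removes the single-pixel noise while leaving the solid band intact, which is why it is a standard cleanup step before edge extraction. (`np.roll` wraps at the borders, which is harmless here because the band does not touch them.)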
A formal classification of 3D medial axis points and their local geometry.
Giblin, Peter; Kimia, Benjamin B
2004-02-01
This paper proposes a novel hypergraph skeletal representation for 3D shape based on a formal derivation of the generic structure of its medial axis. By classifying each skeletal point by its order of contact, we show that, generically, the medial axis consists of five types of points, which are then organized into sheets, curves, and points: 1) sheets (manifolds with boundary) which are the locus of bitangent spheres with regular tangency A1(2) (Ak(n) notation means n distinct k-fold tangencies of the sphere of contact, as explained in the text); two types of curves, 2) the intersection curve of three sheets and the locus of centers of tritangent spheres, A1(3), and 3) the boundary of sheets, which are the locus of centers of spheres whose radius equals the larger principal curvature, i.e., higher order contact A3 points; and two types of points, 4) centers of quad-tangent spheres, A1(4), and 5) centers of spheres with one regular tangency and one higher order tangency, A1A3. The geometry of the 3D medial axis thus consists of sheets (A1(2)) bounded by one type of curve (A3) on their free end, which corresponds to ridges on the surface, and attached to two other sheets at another type of curve (A1(3)), which support a generalized cylinder description. The A3 curves can only end in A1A3 points where they must meet an A1(3) curve. The A1(3) curves meet together in fours at an A1(4) point. This formal result leads to a compact representation for 3D shape, referred to as the medial axis hypergraph representation consisting of nodes (A1(4) and A1A3 points), links between pairs of nodes (A1(3) and A3 curves) and hyperlinks between groups of links (A1(2) sheets). The description of the local geometry at nodes by itself is sufficient to capture qualitative aspects of shapes, in analogy to 2D. We derive a pointwise reconstruction formula to reconstruct a surface from this medial axis hypergraph together with the radius function. Thus, this information completely
Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System
NASA Astrophysics Data System (ADS)
Aoki, K.; Yamamoto, K.; Shimamura, H.
2012-07-01
This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to keep a high level of service. The importance of this performance-based infrastructure asset management built on actual inspection data is globally recognized. As an inspection methodology for road pavement surfaces, semi-automatic measurement systems utilizing inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using automatic measurement vehicles incurs significant costs depending on the instruments' specifications and the inspection interval. Therefore, implementation of road maintenance work, especially for local governments, is difficult when cost-effectiveness is considered. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and assessment of damaged pavement sections using the 3D point cloud data collected to build urban 3D models. The simplified evaluation results of the road surface provide useful information for road administrators to identify pavement sections requiring a detailed examination or an immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models, extracting the sections where a structural change in the coordinate values was pronounced. Finally, the validity of the current methodology was investigated by conducting a case study dealing with actual inspection data of local roads.
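The Chow test used above to flag structural change in coordinate values can be sketched generically (a textbook Chow test on a synthetic 1D profile with a step; this is not the paper's implementation):

```python
import numpy as np

def sse_linear_fit(x, y):
    """Sum of squared residuals of an ordinary least-squares line fit."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

def chow_statistic(x, y, split):
    """Chow test F-statistic for a structural break at index `split`
    (k = 2 parameters per regime: slope and intercept)."""
    k = 2
    sse_pooled = sse_linear_fit(x, y)
    sse1 = sse_linear_fit(x[:split], y[:split])
    sse2 = sse_linear_fit(x[split:], y[split:])
    num = (sse_pooled - sse1 - sse2) / k
    den = (sse1 + sse2) / (len(x) - 2 * k)
    return num / den

# Profile with an abrupt step (e.g., a rut edge) at index 10, plus small noise.
rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = np.where(x < 10, 0.0, 5.0) + 0.01 * x + rng.normal(0, 0.05, size=20)
F = chow_statistic(x, y, split=10)
```

A large F relative to the F-distribution critical value indicates that one line cannot explain both halves, i.e., a structural change at the split point.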
Reconstructing 3D coastal cliffs from airborne oblique photographs without ground control points
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.
2014-05-01
Coastal cliff collapse hazard assessment requires measuring cliff face topography at regular intervals. Terrestrial laser scanning techniques have proven useful so far but are expensive, whether through purchasing the equipment or through survey subcontracting. In addition, terrestrial laser surveys take time, which is sometimes incompatible with the window during which the beach is accessible at low tide. By comparison, structure-from-motion (SFM) techniques are much less costly to implement, and if airborne, acquisition of several kilometers of coastline can be done in a matter of minutes. In this paper, the potential of GPS-tagged oblique airborne photographs and SFM techniques is examined for reconstructing dense 3D point clouds of chalk cliffs without Ground Control Points (GCP). The focus is put on comparing the relative 3D points of view reconstructed by VisualSFM with their synchronous Solmeta Geotagger Pro2 GPS locations using robust estimators. With a set of 568 oblique photos, shot from the open door of an airplane with a triplet of synchronized Nikon D7000 cameras, GPS- and SFM-determined viewpoint coordinates converge to X: ±31.5 m; Y: ±39.7 m; Z: ±13.0 m (LE66). Uncertainty in GPS position affects the model scale, the angular attitude of the reference frame (the shoreline ends up tilted by 2°) and the absolute positioning. Ground Control Points cannot be avoided to orient such models.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that computes the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint method to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn back to the error propagation of the primitive input errors in the stereo system and trace the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
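The midpoint method at the heart of the analysis can be sketched as follows (an illustrative triangulation of one 3D point from two camera rays; noise-free inputs are assumed, so the rays intersect exactly):

```python
import numpy as np

def midpoint_triangulation(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays p = c1 + s*d1 and
    q = c2 + t*d2 (the classic closest-point-between-lines solution)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # near zero for near-parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p = c1 + s * d1
    q = c2 + t * d2
    return 0.5 * (p + q)

# Two cameras looking at the point (1, 1, 5) from different positions.
target = np.array([1.0, 1.0, 5.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
X = midpoint_triangulation(c1, target - c1, c2, target - c2)
```

With noisy rays the two lines are skew, and the midpoint's sensitivity to the input perturbations is what the paper's five-parameter error propagation characterizes.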
Accorsi, Roberto
2005-10-01
Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging. PMID:16231793
Lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns
NASA Astrophysics Data System (ADS)
Dong, Pinliang
2009-10-01
Spatial scale plays an important role in many fields. As a scale-dependent measure for spatial heterogeneity, lacunarity describes the distribution of gaps within a set at multiple scales. In Earth science, environmental science, and ecology, lacunarity has been increasingly used for multiscale modeling of spatial patterns. This paper presents the development and implementation of a geographic information system (GIS) software extension for lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns. Depending on the application requirement, lacunarity analysis can be performed in two modes: global mode or local mode. The extension works for: (1) binary (1-bit) and grey-scale datasets in any raster format supported by ArcGIS and (2) 1D, 2D, and 3D point datasets as shapefiles or geodatabase feature classes. For more effective measurement of lacunarity for different patterns or processes in raster datasets, the extension allows users to define an area of interest (AOI) in four different ways, including using a polygon in an existing feature layer. Additionally, directionality can be taken into account when grey-scale datasets are used for local lacunarity analysis. The methodology and graphical user interface (GUI) are described. The application of the extension is demonstrated using both simulated and real datasets, including Brodatz texture images, a Spaceborne Imaging Radar (SIR-C) image, simulated 1D points on a drainage network, and 3D random and clustered point patterns. The options of lacunarity analysis and the effects of polyline arrangement on lacunarity of 1D points are also discussed. Results from sample data suggest that the lacunarity analysis extension can be used for efficient modeling of spatial patterns at multiple scales.
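The gliding-box lacunarity underlying the extension can be sketched for a single box size (an illustration on toy binary rasters, not the ArcGIS extension code):

```python
import numpy as np

def lacunarity(img: np.ndarray, box: int) -> float:
    """Gliding-box lacunarity of a binary raster for one box size:
    Lambda(r) = E[S^2] / E[S]^2, where S is the mass inside a gliding box."""
    h, w = img.shape
    masses = [
        img[i:i + box, j:j + box].sum()
        for i in range(h - box + 1)
        for j in range(w - box + 1)
    ]
    m = np.asarray(masses, dtype=float)
    return float(np.mean(m**2) / np.mean(m) ** 2)

# A clumped pattern is more lacunar than a uniform one at the same density.
uniform = np.zeros((8, 8), dtype=int)
uniform[::2, ::2] = 1          # 16 evenly spaced pixels
clumped = np.zeros((8, 8), dtype=int)
clumped[0:4, 0:4] = 1          # 16 pixels packed into one corner
```

Sweeping `box` over several sizes yields the lacunarity curve used for multiscale characterization of gap structure.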
Biview learning for human posture segmentation from 3D points cloud.
Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng
2014-01-01
Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on a large-scale training dataset. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views, namely depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human point cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. PMID:24465721
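The CCA stage can be illustrated with a generic numpy sketch (my own implementation on synthetic two-view data; the DDF/RPF features themselves are not reproduced). CCA finds, for each view, the direction whose projections are maximally correlated across views:

```python
import numpy as np

def cca(X, Y, n_components=1, reg=1e-6):
    """Canonical correlation analysis via SVD of the whitened cross-covariance.
    Returns projections of X and Y onto their leading canonical directions,
    plus the canonical correlations."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx_inv = np.linalg.inv(np.linalg.cholesky(Cxx))   # whitening for X
    Ly_inv = np.linalg.inv(np.linalg.cholesky(Cyy))   # whitening for Y
    U, s, Vt = np.linalg.svd(Lx_inv @ Cxy @ Ly_inv.T)
    A = Lx_inv.T @ U[:, :n_components]
    B = Ly_inv.T @ Vt.T[:, :n_components]
    return Xc @ A, Yc @ B, s[:n_components]

# Two 3-D views sharing one latent signal z in different coordinates.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
Y = np.hstack([rng.normal(size=(200, 2)), -z + 0.1 * rng.normal(size=(200, 1))])
u, v, corr = cca(X, Y)
```

The leading canonical correlation is close to 1 because both views contain the same latent signal, which is the complementarity CCA is meant to exploit.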
Detectability limitations with 3-D point reconstruction algorithms using digital radiography
Lindgren, Erik
2015-03-31
The estimated impact of pore clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects increases the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared and mapped to experimental data. The main issue with the uncertainty is the possible masking (zero detectability) of smaller defects around some slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
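A common way to build such neighborhood features (an assumption consistent with the literature, not code copied from the paper) is via the covariance eigenvalues of each point's local neighborhood, yielding cheap but expressive shape descriptors:

```python
import numpy as np

def eigen_features(neighbors: np.ndarray):
    """Covariance eigenvalue features of one local 3D neighborhood:
    linearity, planarity, and scattering. Computed per point and per scale,
    these form a standard descriptor set for point-wise classification."""
    C = np.cov(neighbors.T)                       # 3x3 covariance
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]    # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = np.maximum(lam, 1e-12)
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "scattering": l3 / l1,
    }

# A nearly flat patch of points should score high on planarity.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(size=100),
                         rng.uniform(size=100),
                         0.001 * rng.normal(size=100)])
f = eigen_features(plane)
```

Evaluating the same features over neighborhoods of several radii (or k values) gives the multi-scale behavior that copes with strongly varying point density.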
Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim
NASA Astrophysics Data System (ADS)
Becker, S.; Peter, M.; Fritsch, D.
2015-03-01
The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.
Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors
Ge, Song; Fan, Guoliang
2015-01-01
We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673
An investigation of pointing postures in a 3D stereoscopic environment.
Lin, Chiuhsiang Joe; Ho, Sui-Hua; Chen, Yan-Jyun
2015-05-01
Many object pointing and selecting techniques for large screens have been proposed in the literature. There is a lack of quantitative evidence suggesting proper pointing postures for interacting with stereoscopic targets in immersive virtual environments. The objective of this study was to explore users' performances and experiences of using different postures while interacting with 3D targets remotely in an immersive stereoscopic environment. Two postures, hand-directed and gaze-directed pointing methods, were compared in order to investigate the postural influences. Two stereo parallaxes, negative and positive parallaxes, were compared for exploring how target depth variances would impact users' performances and experiences. Fifteen participants were recruited to perform two interactive tasks, tapping and tracking tasks, to simulate interaction behaviors in the stereoscopic environment. Hand-directed pointing is suggested for both tapping and tracking tasks due to its significantly better overall performance, less muscle fatigue, and better usability. However, a gaze-directed posture is probably a better alternative than hand-directed pointing for tasks with high accuracy requirements in home-in phases. Additionally, it is easier for users to interact with targets with negative parallax than with targets with positive parallax. Based on the findings of this research, future applications involving different pointing techniques should consider both pointing performances and postural effects as a result of pointing task precision requirements and potential postural fatigue. PMID:25683543
Points based reconstruction and rendering of 3D shapes from large volume dataset
NASA Astrophysics Data System (ADS)
Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming
2003-05-01
In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information they contain. However, the huge data volumes generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the great achievements of Points Based Rendering (PBR) in the field of computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.
Bender, Andreas; Mussa, Hamse Y; Gill, Gurprem S; Glen, Robert C
2004-12-16
A novel method (MOLPRINT 3D) for virtual screening and the elucidation of ligand-receptor binding patterns is introduced that is based on environments of molecular surface points. The descriptor uses points relative to the molecular coordinates, thus it is translationally and rotationally invariant. Due to its local nature, conformational variations cause only minor changes in the descriptor. If surface point environments are combined with the Tanimoto coefficient and applied to virtual screening, they achieve retrieval rates comparable to that of two-dimensional (2D) fingerprints. The identification of active structures with minimal 2D similarity ("scaffold hopping") is facilitated. In combination with information-gain-based feature selection and a naive Bayesian classifier, information from multiple molecules can be combined and classification performance can be improved. Selected features are consistent with experimentally determined binding patterns. Examples are given for angiotensin-converting enzyme inhibitors, 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors, and thromboxane A2 antagonists. PMID:15588092
NASA Astrophysics Data System (ADS)
Koptev, Alexander; Burov, Evgueni; Gerya, Taras
2014-05-01
We conducted high-resolution 3D thermo-mechanical numerical modeling experiments to explore the evolution and styles of plume-activated rifting in the presence of a preexisting far-field tectonic stress/strain field and of tectonic heritage (in the form of cratonic blocks embedded in «normal lithosphere»). The experiments demonstrate a strong dependence of rifting style on the preexisting far-field tectonic stress/strain field and the initial thermo-rheological profile, as well as on the tectonic heritage. The models with homogeneous lithosphere demonstrate a strongly non-linear impact of far-field extension rates on the timing of break-up processes. Experiments with relatively fast far-field extension (6 mm/y) show intensive normal-fault localization in the crust and uppermost mantle above the zones of plume-head emplacement some 15-20 Myrs after the onset of the experiment. When the plume-head material reaches the bottom of the continental crust (at ~25 Myrs), the crust is rapidly ruptured (<1 Myrs) and several steady oceanic floor spreading centers develop. Slower (3 mm/y) far-field velocities result in disproportionately longer break-up times (from 60 to 70 Myrs, depending on the initial isotherm at the bottom of the crust). Although in all experiments with homogeneous lithosphere the spreading centers have a similar orientation, perpendicular to the direction of far-field extension, their number and spatial location differ for different extension rates and thermo-rheological structures of the lithosphere. By contrast, when the normal lithosphere contains an embedded cratonic block, spreading zones develop symmetrically, embracing the cratonic micro-plate along its long sides. The presence of cratonic blocks leads to splitting of the plume head into initially nearly symmetrical parts, each of which flows toward the craton borders. This craton-controlled distribution of plume material causes crustal strain localization and the rise of plume material along the craton boundaries. Though there is a net
PointCloudExplore 2: Visual exploration of 3D gene expression
International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd
2008-03-31
To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets, we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has proven to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
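The logical combination of cell selections described above can be sketched with boolean masks; the variable names and cell counts below are illustrative, not taken from PCX itself:

```python
import numpy as np

# Toy sketch: cell selections ("brushes") as boolean masks over all cells,
# combined with logical operations as in a central selection manager.
n_cells = 10

brush_a = np.zeros(n_cells, dtype=bool)
brush_a[2:6] = True          # cells 2..5 selected in one view
brush_b = np.zeros(n_cells, dtype=bool)
brush_b[4:8] = True          # cells 4..7 selected in another view

both = brush_a & brush_b     # AND: cells in both selections
either = brush_a | brush_b   # OR: cells in at least one selection
not_a = ~brush_a             # NOT: complement of a selection

print(np.flatnonzero(both))  # indices selected in both views
```

Storing every brush as a mask over the full cell set is what makes a selection made in one view trivially highlightable in any other view.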
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, to perform Discrete Fracture Network and DEM modelling, or to provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in recent decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often require user supervision of algorithm parameters that can be difficult to assess. To overcome this problem, we developed an original Matlab tool allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or an unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and
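The first step described above, identifying locally coplanar surfaces via K-Nearest Neighbor search and Principal Component Analysis, can be sketched roughly as follows. This is a minimal illustration with assumed parameter names, not the authors' Matlab tool:

```python
import numpy as np

def local_planes(points, k=8):
    """Estimate a surface normal and a planarity score per point via
    k-NN + PCA. The eigenvector of the neighbourhood covariance with the
    smallest eigenvalue is the normal; a near-zero smallest eigenvalue
    means the neighbourhood is locally coplanar."""
    n = len(points)
    normals = np.empty((n, 3))
    planarity = np.empty(n)
    for i, p in enumerate(points):
        # k nearest neighbours by Euclidean distance (brute force)
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        # PCA: eigen-decomposition of the neighbourhood covariance
        evals, evecs = np.linalg.eigh(np.cov(nbrs.T))  # ascending order
        normals[i] = evecs[:, 0]                       # smallest-variance axis
        planarity[i] = 1.0 - evals[0] / evals.sum()    # 1.0 = perfectly planar
    return normals, planarity
```

For a point sampled on a flat facet, `planarity` approaches 1 and the normal aligns with the facet normal; thresholding this score is one plausible way to isolate facet candidates.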
PointCloudXplore: a visualization tool for 3D gene expressiondata
Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V.E.; Fowlkes, Charless C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd
2006-10-01
The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe many genes' expression. Each of the views in PointCloudXplore shows a different gene expression data property. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show in additional views the expression data for a group of cells that have first been highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in the gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.
Existence of two MHD reconnection modes in a solar 3D magnetic null point topology
NASA Astrophysics Data System (ADS)
Pariat, Etienne; Antiochos, Spiro; DeVore, C. Richard; Dalmasse, Kévin
2012-07-01
Magnetic topologies with a 3D magnetic null point are common in the solar atmosphere and occur at different spatial scales: such structures can be associated with some solar eruptions, with the so-called pseudo-streamers, and with numerous coronal jets. We have recently developed a series of numerical experiments that model magnetic reconnection in such configurations in order to study and explain the properties of jet-like features. Our model uses our state-of-the-art adaptive-mesh MHD solver ARMS. Energy is injected into the system by line-tied motion of the magnetic field lines in a corona-like configuration. We observe that, in the MHD framework, two reconnection modes eventually appear in the course of the evolution of the system. A very impulsive one, involving a highly dynamic and fully 3D current sheet, is associated with the energetic generation of a jet. Before and after the generation of the jet, a quasi-steady reconnection mode, more similar to the standard 2D Sweet-Parker model, presents a lower global reconnection rate. We show that the geometry of the magnetic configuration influences the trigger of one or the other mode. We argue that this result carries important implications for the observed link between observational features such as solar jets, solar plumes, and the emission of coronal bright points.
3D modeling of building indoor spaces and closed doors from imagery and point clouds.
Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro
2015-01-01
3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723
A method of 3D object recognition and localization in a cloud of points
NASA Astrophysics Data System (ADS)
Bielicki, Jerzy; Sitnik, Robert
2013-12-01
The method proposed in this article is designed for the analysis of data in the form of a cloud of points obtained directly from 3D measurements. It is intended for use in end-user applications that can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while the additional spatial information keeps the false positive rate at a reasonably low level.
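The correlation of scene feature vectors against reference FVs might look roughly like the following; the Pearson-correlation measure and the acceptance threshold are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def match_features(ref_fvs, scene_fvs, threshold=0.9):
    """Match each scene feature vector to its best-correlated reference FV.
    Returns (scene_index, ref_index, correlation) for matches whose
    normalized correlation exceeds the (illustrative) threshold."""
    matches = []
    for i, s in enumerate(scene_fvs):
        s0 = (s - s.mean()) / (s.std() + 1e-12)   # z-score the scene FV
        best_j, best_c = -1, -1.0
        for j, r in enumerate(ref_fvs):
            r0 = (r - r.mean()) / (r.std() + 1e-12)
            c = float(np.mean(s0 * r0))           # Pearson correlation
            if c > best_c:
                best_j, best_c = j, c
        if best_c >= threshold:
            matches.append((i, best_j, best_c))
    return matches
```

Correlating against subsets of the reference FVs rather than the full descriptor is what lets a scheme like this tolerate partial occlusion, as the abstract notes.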
Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.
2016-06-01
In recent years, indoor modelling and navigation has become a topic of research interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind people or wheelchair users, building crisis management such as fire protection, and augmented reality for gaming, tourism or training emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoors, including the position and geometry of openings for both windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor space depicted by the laser scanner.
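Re-adapting a route around detected obstacles can be illustrated with a toy occupancy-grid search; the paper's method operates on semantically rich 3D models, so this is only a schematic sketch with assumed names:

```python
from collections import deque

def plan(grid, start, goal):
    """Shortest 4-connected route on an occupancy grid (1 = obstacle),
    found by breadth-first search. Returns the path as a list of cells,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:        # reconstruct the route backwards
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None
```

Marking a cell as an obstacle (setting it to 1) and re-running the planner yields the "readapted" route the abstract describes: the new path detours around the blocked cell.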
Unconventional superconductivity at mesoscopic point contacts on the 3D Dirac semimetal Cd3As2.
Aggarwal, Leena; Gaurav, Abhishek; Thakur, Gohil S; Haque, Zeba; Ganguli, Ashok K; Sheet, Goutam
2016-01-01
Three-dimensional (3D) Dirac semimetals exist close to topological phase boundaries which, in principle, should make it possible to drive them into exotic new phases, such as topological superconductivity, by breaking certain symmetries. A practical realization of this idea has, however, hitherto been lacking. Here we show that the mesoscopic point contacts between pure silver (Ag) and the 3D Dirac semimetal Cd3As2 (ref. ) exhibit unconventional superconductivity with a critical temperature (onset) greater than 6 K whereas neither Cd3As2 nor Ag are superconductors. A gap amplitude of 6.5 meV is measured spectroscopically in this phase that varies weakly with temperature and survives up to a remarkably high temperature of 13 K, indicating the presence of a robust normal-state pseudogap. The observations indicate the emergence of a new unconventional superconducting phase that exists in a quantum mechanically confined region under a point contact between a Dirac semimetal and a normal metal. PMID:26524131
NASA Astrophysics Data System (ADS)
Salazar, Augusto E.; Cerón, Alexander; Prieto, Flavio A.
2011-03-01
The behavior of nine 3D shape descriptors, computed on the surfaces of 3D face models, is studied. The set of descriptors includes six curvature-based descriptors, SPIN images, Folded SPIN images, and Fingerprints. Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template composed of 28 anatomical regions is used to segment the models and to extract the locations of different landmarks and fiducial points. Vertices are grouped by region, region boundaries, and subsampled versions of them. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions and to identify key points on the facial surface. The experiment includes testing with data from neutral faces and faces showing expressions. Also, in order to assess the usefulness of the bending-invariant canonical form (BICF) in handling variations due to facial expressions, the descriptors are computed both directly from the surface and from its BICF. In the results, the values, distributions, and relevance indexes of each set of vertices were analyzed.
Inter-point procrustes: identifying regional and large differences in 3D anatomical shapes.
Lekadir, Karim; Frangi, Alejandro F; Yang, Guang-Zhong
2012-01-01
This paper presents a new approach for the robust alignment and interpretation of 3D anatomical structures with large and localized shape differences. In such situations, existing techniques based on the well-known Procrustes analysis can be significantly affected by the resulting non-Gaussian distribution of the residuals. In the proposed technique, influential points that induce large dissimilarities are identified and displaced with the aim of obtaining an intermediate template with an improved distribution of the residuals. The key element of the algorithm is the use of pose-invariant shape variables to robustly guide both the influential-point detection and displacement steps. The intermediate template is then used as the basis for the estimation of the final pose parameters between the source and destination shapes, enabling the regional differences of interest to be highlighted effectively. Validation using synthetic and real datasets of different morphologies demonstrates robustness up to 50% regional differences and potential for shape classification. PMID:23286119
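For context, the classical Procrustes baseline that this work builds on can be sketched as follows (rotation and translation estimated via SVD of the cross-covariance, no scaling). This is the standard textbook algorithm, not the paper's robust inter-point variant:

```python
import numpy as np

def procrustes_align(X, Y):
    """Rigidly align point set Y (rows = points) onto X: find the rotation R
    and translation t minimizing ||X - (Y R + t)|| in the least-squares sense,
    and return the aligned copy of Y."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    X0, Y0 = X - cx, Y - cy
    # Optimal rotation from the SVD of the cross-covariance (Kabsch)
    U, _, Vt = np.linalg.svd(Y0.T @ X0)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against an improper reflection
        U[:, -1] *= -1
        R = U @ Vt
    t = cx - cy @ R
    return Y @ R + t
```

Under a non-Gaussian residual distribution, a few influential points dominate this least-squares fit, which is precisely the failure mode the paper's intermediate-template step is designed to mitigate.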
3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups
ERIC Educational Resources Information Center
Scalfani, Vincent F.; Vaid, Thomas P.
2014-01-01
Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…
A multi-resolution fractal additive scheme for blind watermarking of 3D point data
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Wilder, Kathy; Fox, Kevin
2013-05-01
We present a fractal feature space for 3D point watermarking to make geospatial systems more secure. By exploiting the self-similar nature of fractals, hidden information can be spatially embedded in point cloud data in an acceptable manner, as described within this paper. Our method utilizes a blind scheme, which provides automatic retrieval of the watermark payload without the need for the original cover data. Our method for locating similar patterns and encoding information in LiDAR point cloud data is accomplished through a look-up table or code book. The watermark is then merged into the point cloud data itself, resulting in low distortion effects. With current advancements in computing technologies, such as GPGPUs, fractal processing is now applicable to the big data present in geospatial as well as other systems. The watermarking technique described within this paper can be important for systems where point data is handled by numerous aerial collectors and used by analysts, such as a National LiDAR Data Layer.
Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques
Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li, Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva
2011-01-01
Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision X-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent-depth-dose (PDD) and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion chambers (a Farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ∼13 mm using the formalism of TG61. PDDs, profiles, and output factors at three separate depths (0, 0.5, and 2 cm) were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system, which generated isotropic 0.2 mm data in scan times of 20 min. Results: Surface output factors determined by ion chamber were observed to gradually drop by ∼9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT2: ∼18% and ∼42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available), with a mean deviation of 2.2% (range 1%–4%). PDD values at 2 cm depth varied from ∼72% for the 40 mm field down to ∼55% for the 1 mm field. EBT2 and PRESAGE® PDDs agreed within ∼3% in the typical therapy region (1–4 cm). At deeper depths the EBT2 curves were slightly steeper (2.5% at 5 cm). These results indicate good overall consistency between ion-chamber, EBT
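As a reminder of the quantity being commissioned, an output factor is simply the ratio of the detector reading for the test field to the reading for the reference field under otherwise identical conditions; the numbers below are illustrative, not measurements from this study:

```python
def output_factor(reading_field, reading_ref):
    """Output factor: detector reading for the field of interest divided by
    the reading for the reference field, measured under identical conditions
    (same depth, SSD, and beam quality)."""
    return reading_field / reading_ref

# e.g. a hypothetical small field giving 91% of the reference-field reading
of = output_factor(0.91, 1.00)
```

The ∼9% to ∼42% drops quoted above correspond to output factors falling from roughly 0.91 toward 0.58 as the field shrinks, which is why three independent dosimetry systems were cross-checked.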
Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software
NASA Astrophysics Data System (ADS)
Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander
2016-06-01
Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. We then compared the results of the different software packages regarding ease of workflow, visual appeal, and the similarity and quality of the point clouds. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes, giving only little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.
Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features
Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie
2014-01-01
Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694
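The hypothesis-generation step described above (matching scene features against all model features) can be sketched as a nearest-neighbor search over descriptors; the Euclidean metric and the distance threshold are assumptions for illustration, not the authors' exact matching rule:

```python
import numpy as np

def match_descriptors(scene_desc, model_desc, max_dist=0.5):
    """For each scene descriptor, find the nearest model descriptor by
    Euclidean distance and keep the pair as a recognition hypothesis if
    the distance is below the (illustrative) threshold.
    Returns a list of (scene_index, model_index) hypotheses."""
    hypotheses = []
    for i, s in enumerate(scene_desc):
        d = np.linalg.norm(model_desc - s, axis=1)  # distances to all models
        j = int(np.argmin(d))                       # nearest model descriptor
        if d[j] <= max_dist:
            hypotheses.append((i, j))
    return hypotheses
```

In a full coarse-to-fine pipeline, each surviving correspondence would also carry a pose hypothesis, which is then verified against the scene before being accepted.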
The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations
NASA Astrophysics Data System (ADS)
Ben Hassen, M. F.; Erhard, K.; Potthast, R.
2006-02-01
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than with the former use of the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. Both for 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
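For reference, the far field pattern u∞ that the PSM inverts is defined through the standard large-distance asymptotics of the scattered field (Colton–Kress convention); this is the textbook relation, stated here only to fix notation:

```latex
% Far-field asymptotics of the scattered field u^s for wavenumber k:
% the far field pattern u_\infty lives on the unit sphere of directions.
u^{s}(x) \;=\; \frac{e^{ik|x|}}{|x|}
\left( u_{\infty}(\hat{x}) + O\!\left(|x|^{-1}\right) \right),
\qquad \hat{x} = \frac{x}{|x|}, \quad |x| \to \infty .
```

The PSM reconstructs u^s at points near the scatterer from samples of u∞ by approximating the point source in the Green/Stratton-Chu representations with superpositions of plane waves.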
Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds
NASA Astrophysics Data System (ADS)
Sun, Shaohui
Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique, and is used to guide the refinement of the shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined to produce a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected
Processing 3D flash LADAR point-clouds in real-time for flight applications
NASA Astrophysics Data System (ADS)
Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.
2007-04-01
Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA based "pixel-tube" processors, coprocessors and their associated algorithms which have led to a number of advancements in high-speed wavefront processing along with additional advances in dynamic camera control, and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing and life-cycle costs can be significantly reduced. This technique requires a state of the art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced thanks to the fact that all points are captured at the same time and thus correlated. This correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.
NASA Technical Reports Server (NTRS)
Folta, David; Bauer, Frank H. (Technical Monitor)
2001-01-01
The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight-proven code, a monodromy matrix developed from an N-body model of a libration orbit, and a standard STM developed from the gravitational and Coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.
Nouri, Mahtab; Farzan, Arash; Baghban, Ali Reza Akbarzadeh; Massudi, Reza
2015-01-01
OBJECTIVE: The aim of the present study was to assess the diagnostic value of a laser scanner developed to determine the coordinates of clinical bracket points and to compare the results with those of a coordinate measuring machine (CMM). METHODS: This diagnostic experimental study was conducted on maxillary and mandibular orthodontic study casts of 18 adults with normal Class I occlusion. First, the coordinates of the bracket points were measured on all casts by a CMM. Then, the three-dimensional coordinates (X, Y, Z) of the bracket points were measured on the same casts by a 3D laser scanner designed at Shahid Beheshti University, Tehran, Iran. The validity and reliability of each system were assessed by means of the intraclass correlation coefficient (ICC) and Dahlberg's formula. RESULTS: The difference between the mean dimension and the actual value for the CMM was 0.0066 mm (95% CI: 69.98340, 69.99140). The mean difference for the laser scanner was 0.107 ± 0.133 mm (95% CI: -0.002, 0.24). In each method, the differences were not significant. The ICC comparing the two methods was 0.998 for the X coordinate and 0.996 for the Y coordinate; the mean difference for coordinates recorded in the entire arch and for each tooth was 0.616 mm. CONCLUSION: The accuracy of clinical bracket point coordinates measured by the laser scanner was equal to that of the CMM. The mean difference in measurements was within the range of operator error. PMID:25741826
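Dahlberg's formula, used above to assess method error, is sqrt(Σd²/2n) over the n pairwise differences d between repeated measurements. A minimal sketch (the coordinate values below are hypothetical, not the study's data):

```python
import math

def dahlberg(first, second):
    """Dahlberg's method error between paired repeated measurements:
    sqrt(sum(d_i^2) / (2n)) for the n pairwise differences d_i."""
    d2 = sum((a - b) ** 2 for a, b in zip(first, second))
    return math.sqrt(d2 / (2 * len(first)))

# hypothetical bracket-point coordinates (mm) from CMM and from the scanner
cmm = [10.00, 12.50, 9.80]
scan = [10.10, 12.40, 9.90]
err = dahlberg(cmm, scan)
```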
Knowledge guided object detection and identification in 3D point clouds
NASA Astrophysics Data System (ADS)
Karmacharya, A.; Boochs, F.; Tietz, B.
2015-05-01
Modern instruments like laser scanners and 3D cameras, or image-based techniques like structure from motion, produce huge point clouds as a base for further object analysis. This has considerably changed the way of data compilation, away from selective manually guided processes towards automatic and computer-supported strategies. However, it is still a long way to achieving the quality and robustness of manual processes, as data sets are mostly very complex. Looking at existing strategies, 3D data processing for object detection and reconstruction relies heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and their inability to handle deviations. Furthermore, the lack of capability to integrate other data or information between the processing steps further exposes their limitations. This restricts the approaches to execution with a strict predefined strategy and does not allow deviations when and if new, unexpected situations arise. We propose a solution that induces intelligence into the processing activities through the usage of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries, and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)". The flexibility of the solution is demonstrated through two entirely different use-case scenarios: Deutsche Bahn (German railway system) for the outdoor scenario and Fraport (Frankfurt Airport) for the indoor scenario. Apart from the difference in their environments, they provide different conditions, which the solution needs to consider. While the locations of the objects at Fraport were known beforehand, those of the DB scenario were not known at the beginning.
Aberration averaging using point spread function for scanning projection systems
NASA Astrophysics Data System (ADS)
Ooki, Hiroshi; Noda, Tomoya; Matsumoto, Koichi
2000-07-01
Scanning projection systems play a leading role in current DUV optical lithography. It is frequently pointed out that mechanically induced distortion and field curvature degrade image quality after scanning. On the other hand, the aberration of the projection lens is averaged along the scanning direction. This averaging effect reduces the residual aberration significantly. This paper describes aberration averaging based on the point spread function and a phase retrieval technique to estimate the effective wavefront aberration after scanning. Our averaging method is tested using a specified wavefront aberration, and its accuracy is discussed based on the measured wavefront aberration of a recent Nikon projection lens.
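The averaging effect described above can be illustrated with a minimal sketch: the effective PSF after scanning is modeled as the incoherent mean of the instantaneous PSFs sampled at successive scan positions (toy values, not measured lens data).

```python
import numpy as np

def effective_psf(psfs):
    """Effective PSF after scanning: the mean of the instantaneous PSFs
    sampled at successive scan positions (a simplified model of the
    averaging effect; phase retrieval is omitted)."""
    return np.stack(psfs).mean(axis=0)

# two toy instantaneous PSFs at different scan positions
p1 = np.array([[1.0, 0.0], [0.0, 0.0]])
p2 = np.array([[0.0, 0.0], [0.0, 1.0]])
avg = effective_psf([p1, p2])
```

Because the average is incoherent, total energy is preserved while localized aberration structure is smeared out along the scan.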
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image; this method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
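Strategy (iv) relies on ICP. A minimal point-to-point ICP sketch under simplifying assumptions (2D points, brute-force nearest neighbours, no outlier rejection; the paper's plane-based adaptation is not reproduced here):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Alternate nearest-neighbour matching and rigid alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# toy check: recover a small rotation plus translation
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
th = np.radians(5.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = src @ R_true.T + np.array([0.1, -0.05])
aligned = icp(src, dst)
```

ICP of this form only converges from a good initial alignment, which is why the paper pairs it with segment- and plane-based coregistration.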
Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud
Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan
2014-01-01
Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221
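The ground-plane homographies mentioned above map pixel coordinates to world coordinates via a 3x3 matrix in homogeneous coordinates. A minimal sketch (the matrix H below is an illustrative placeholder, not a calibrated one):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D pixel coordinates to ground-plane world coordinates
    through a 3x3 homography, using homogeneous coordinates."""
    pts_h = np.c_[pts, np.ones(len(pts))]  # append w = 1
    w = pts_h @ H.T
    return w[:, :2] / w[:, 2:3]            # dehomogenize

# hypothetical homography: uniform scale of 2 from pixels to metres
H = np.diag([2.0, 2.0, 1.0])
world = apply_homography(H, np.array([[3.0, 4.0]]))
```

Once H is estimated for a camera, a person detected at a pixel can be placed directly on the walking-area map.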
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-11-15
achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.
Structural analysis of San Leo (RN, Italy) east and north cliffs using 3D point clouds
NASA Astrophysics Data System (ADS)
Spreafico, Margherita Cecilia; Bacenetti, Marco; Borgatti, Lisa; Cignetti, Martina; Giardino, Marco; Perotti, Luigi
2013-04-01
The town of San Leo, like many others in the historical region of Montefeltro (Northern Apennines, Italy), was built in the medieval period, for defense purposes, on a calcarenite and sandstone slab bordered by subvertical and overhanging cliffs up to 100 m high. The slab and the underlying clayey substratum show widespread landslide phenomena: the slab is tectonized and crossed by joints and faults, and it is affected by lateral spreading with associated rock falls, topples and tilting. Moreover, the underlying clayey substratum is involved in plastic movements, like earth flows and slides. The main cause of instability in the area, which brings about these movements, is the high deformability contrast between the plate and the underlying clays. The aim of our research is to set up a numerical model that can describe the processes well and take into account the different factors that influence the evolution of the movements. One of these factors is certainly the structural setting of the slab, characterized by several joints and faults; in order to better identify and detect the main joint sets affecting the study area, a structural analysis was performed. To date, a series of scans of the San Leo cliff taken in 2008 and 2011 with a Riegl Z420i has been analyzed. Initially, we chose a test area, located on the east side of the cliff, in which analyses were performed using two different software packages: COLTOP 3D and Polyworks. We repeated the analysis using COLTOP for the whole east wall and for a part of the north wall, including an area affected by a rock fall in 2006. In the test area we identified five sets with different dips and dip directions. The analysis of the east and north walls allowed us to identify eight sets (seven plus the bedding) of discontinuities. We compared these results with previous ones from surveys taken by other authors in some areas and with some preliminary data from a traditional geological survey of the whole area. With traditional methods only a
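Discontinuity sets of the kind extracted above are characterized by dip and dip direction, which follow directly from the normal of a plane fitted to the point cloud. A minimal sketch (east-north-up axes assumed):

```python
import math

def dip_and_dip_direction(normal):
    """Convert a plane normal (x = east, y = north, z = up) to dip and
    dip direction in degrees. The horizontal component of the upward
    normal points in the dip direction."""
    nx, ny, nz = normal
    if nz < 0:                               # use the upward-pointing normal
        nx, ny, nz = -nx, -ny, -nz
    mag = math.sqrt(nx * nx + ny * ny + nz * nz)
    dip = math.degrees(math.acos(nz / mag))
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0  # azimuth from north
    return dip, dip_dir

dip_h, _ = dip_and_dip_direction((0.0, 0.0, 1.0))       # horizontal bedding
dip_v, ddir_v = dip_and_dip_direction((1.0, 0.0, 0.0))  # vertical east-facing joint
```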
NASA Astrophysics Data System (ADS)
Hu, Bin; Kieweg, Sarah
2010-11-01
Gravity-driven thin film flow down an incline is studied for the optimal design of polymeric drug delivery vehicles, such as anti-HIV topical microbicides. We develop a 3D FEM model using non-Newtonian mechanics to model the flow of gels in response to gravity, surface tension and shear-thinning. A constant-volume setup is applied within the scope of the lubrication approximation. The lengthwise profiles of the 3D model agree with our previous 2D finite difference model, while the transverse contact line patterns of the 3D model are compared to the experiments. With the incorporation of surface tension, capillary ridges are observed at the leading front in both the 2D and 3D models. Previously published studies show that a capillary ridge can amplify the fingering instabilities in the transverse direction. Sensitivity studies (2D and 3D) and experiments are carried out to describe the influence of surface tension and shear-thinning on the capillary ridge and fingering instabilities.
Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)
NASA Astrophysics Data System (ADS)
Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane
2016-04-01
Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with a UAV becomes easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan has been acquired in the main chamber of the cave (~ 15000 m³) to spatialize a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed remarkable precision according to the geometry of a few control points. We also decided to perform another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information
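Point-cloud comparisons of this kind are usually quantified with per-point nearest-neighbour distances between the two clouds. A brute-force sketch (real surveys would use a spatial index rather than a dense distance matrix):

```python
import numpy as np

def cloud_to_cloud_distance(a, b):
    """For each point of cloud a, the distance to its nearest
    neighbour in cloud b (brute force, O(len(a) * len(b)))."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)

# toy clouds: the single point of a is 1 m from the closest point of b
a = np.array([[0.0, 0.0, 0.0]])
b = np.array([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
d = cloud_to_cloud_distance(a, b)
```

Summary statistics of these distances (mean, RMS, percentiles) then express how closely the photogrammetric cloud tracks the LIDAR reference.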
Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals.
Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiong-Jun; Xie, X C; Wei, Jian; Wang, Jian
2016-01-01
Three-dimensional (3D) Dirac semimetals, which possess 3D linear dispersion in the electronic structure as a bulk analogue of graphene, have lately generated widespread interest in both materials science and condensed matter physics. Recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in the atmosphere. Here, by using point contact spectroscopy measurements, we observe exotic superconductivity around the point contact region on the surface of Cd3As2 crystals. The zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric around zero bias suggest p-wave-like unconventional superconductivity. Considering the topological properties of 3D Dirac semimetals, our findings may indicate that Cd3As2 crystals under certain conditions could be topological superconductors, which are predicted to support Majorana zero modes or gapless Majorana edge/surface modes in the boundary depending on the dimensionality of the material. PMID:26524129
Surface-based matching of 3D point clouds with variable coordinates in source and target system
NASA Astrophysics Data System (ADS)
Ge, Xuming; Wunderlich, Thomas
2016-01-01
The automatic co-registration of point clouds, representing three-dimensional (3D) surfaces, is an important technique in 3D reconstruction and is widely applied in many different disciplines. An alternative approach is proposed here that estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface. The approach uses the nonlinear Gauss-Helmert model, minimizing the quadratically constrained least squares problem. This approach has the ability to match arbitrarily oriented 3D surfaces captured from a number of different sensors, on different time-scales and at different resolutions. In addition to the 3D surface-matching paths, the mathematical model allows the precision of the point clouds to be assessed after adjustment. The error behavior of surfaces can also be investigated based on the proposed approach. Some practical examples are presented and the results are compared with the iterative closest point and the linear least-squares approaches to demonstrate the performance and benefits of the proposed technique.
Extraction and refinement of building faces in 3D point clouds
NASA Astrophysics Data System (ADS)
Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri
2013-10-01
In this paper, we present an approach to generate a 3D model of an urban scene out of sensor data. The first milestone on that way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their outlines. This has already been accomplished within our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split up complex building blocks into parts. Two different approaches are used: To act on the assumption of underlying 2D ground polygons, we use geometric methods to divide them into sub-polygons. Without polygons, we use morphological operations and segmentation methods. In the third step, extraction of dominant planes takes place, using either the RANSAC or the J-linkage algorithm. They operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected properties of roofs and walls are not fulfilled. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
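The dominant-plane extraction step can be sketched with textbook RANSAC: repeatedly fit a plane to three random points and keep the hypothesis with the most inliers. This is an illustrative sketch, not the paper's implementation, and the iteration count and tolerance below are assumed values.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, rng=None):
    """Return a boolean inlier mask for the best plane found by RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                         # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)     # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# toy scene: 50 roof points on z = 0 plus 5 elevated outliers
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(0.0, 10.0, (50, 2)), np.zeros(50)]
outlier_pts = rng.uniform(0.0, 10.0, (5, 3)) + np.array([0.0, 0.0, 5.0])
pts = np.vstack([plane_pts, outlier_pts])
mask = ransac_plane(pts, rng=0)
```

In the workflow above, each plane found this way would be removed from the cloud and the search repeated to collect all dominant roof faces.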
The Finite-Length Line-Spread Function: An Extension To Asymmetric Point Spread Functions
NASA Astrophysics Data System (ADS)
Dallas, W. J.
1988-06-01
The point spread function (PSF) is used to characterize imaging systems. The PSF is usually not measured directly; rather, the line spread function (LSF) is measured by scanning across the image of an input slit, and one of the well known LSF-PSF conversion formulas is then applied [1]. These formulas make the assumption that the length of the input-slit image is great compared to the PSF extent. This assumption is unfortunately unwarranted for one of the most important medical imaging devices: the x-ray image intensifier. The large extent of the image intensifier's PSF and the limited size of the intensifier's isoplanatic patches combine to make consideration of the finite length of the input slit important. Formulas for calculating the PSF from a measurement of the finite-length line spread function (FLSF) have been developed for the case of a rotationally symmetric PSF [3]. In this presentation we generalize the conversion formulas to cover non-symmetric PSFs.
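The finite-length LSF is the integral of the 2D PSF across the slit direction over a finite slit length, rather than over the whole axis. A discrete sketch (grid spacing and slit length are illustrative):

```python
import numpy as np

def finite_length_lsf(psf, dy, slit_len):
    """Integrate a sampled 2D PSF (rows = slit direction, spacing dy)
    over a centered slit of length slit_len. When the slit covers the
    whole grid this reduces to the ordinary LSF."""
    half = int(round(slit_len / (2 * dy)))   # half-slit in samples
    mid = psf.shape[0] // 2
    rows = psf[max(0, mid - half): mid + half + 1]
    return rows.sum(axis=0) * dy

# toy PSF: uniform 5x5 grid, slit of length 2 covers 3 of the 5 rows
flsf = finite_length_lsf(np.ones((5, 5)), dy=1.0, slit_len=2.0)
```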
Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael
2012-01-01
Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521
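The depth readout of a DH-PSF comes from the rotation of its two lobes with defocus: the angle of the line joining the lobe centroids maps to axial position through a calibration curve. A minimal sketch; the linear slope used here is a hypothetical placeholder, since real systems use a measured, generally nonlinear calibration.

```python
import math

def lobe_angle(c1, c2):
    """Angle (degrees) of the line joining the two DH-PSF lobe centroids."""
    return math.degrees(math.atan2(c2[1] - c1[1], c2[0] - c1[0]))

def z_from_angle(angle_deg, slope_deg_per_um=20.0):
    """Axial position from lobe angle via a linear calibration.
    The 20 deg/um slope is a hypothetical value for illustration."""
    return angle_deg / slope_deg_per_um

ang = lobe_angle((0.0, 0.0), (1.0, 1.0))
z = z_from_angle(ang)
```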
Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models
Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon
2013-01-01
Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
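The Gabor filter bank in the first step is built from Gaussian-windowed plane waves at several wavelengths and orientations; responses of such kernels localize the tag lines whose crossings give the tag intersections. A single-kernel sketch (parameter values are illustrative):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor kernel: a Gaussian window times a
    cosine plane wave of the given wavelength and orientation theta."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

k = gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0)
```

Convolving the tagged frame with a bank of such kernels and analyzing the local phase of the responses yields the intersection locations used to seed the point matching.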
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Fang, Lina; Li, Jonathan
2013-05-01
Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which corresponds to a road cross section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which shows that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
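Curb detection on a single scanning line can be sketched as looking for curb-sized height jumps between neighbouring points along the cross section. The thresholds below (in metres) are illustrative assumptions, not the paper's trained curb patterns:

```python
import numpy as np

def detect_curb_points(heights, xs, min_jump=0.08, max_jump=0.30):
    """Flag positions along one scanning line where the height jumps by
    a curb-like step: larger than sensor noise, smaller than a wall."""
    dz = np.abs(np.diff(heights))
    idx = np.where((dz > min_jump) & (dz < max_jump))[0]
    return xs[idx]

# toy cross section: flat road, then a 0.15 m curb onto the sidewalk
heights = np.array([0.00, 0.01, 0.00, 0.15, 0.16, 0.15])
xs = np.arange(len(heights))
curbs = detect_curb_points(heights, xs)
```

Running this line by line and then linking the flagged positions across consecutive lines corresponds to the tracking and refinement stage described above.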
Point Spread Function extraction in crowded fields using blind deconvolution
NASA Astrophysics Data System (ADS)
Schreiber, Laura; La Camera, Andrea; Prato, Marco; Diolaiti, Emiliano
2013-12-01
The extraction of the Point Spread Function (PSF) from astronomical data is an important issue for data reduction packages for stellar photometry that use PSF fitting. High-resolution Adaptive Optics images are characterized by a highly structured PSF that cannot be represented by any simple analytical model. Even a numerical PSF extracted from the frame can be affected by field crowding effects. In this paper we use blind deconvolution in order to find an approximation of both the unknown object and the unknown PSF. In particular, we adopt an iterative inexact alternating minimization method in which each iteration (called an outer iteration) consists in alternating an update of the object and of the PSF by means of fixed numbers of (inner) iterations of the Scaled Gradient Projection (SGP) method. The use of SGP allows the introduction of different constraints on the object and on the PSF. In particular, we introduce a constraint on the PSF which is an upper bound derived from the Strehl ratio (SR), to be provided together with the input data. In this contribution we show the dependence of the photometric error on crowding, using simulated images generated with synthetic PSFs available from the Phase-A study of the E-ELT MCAO system (MAORY) under different crowding conditions.
The point spread function reconstruction by using Moffatlets — I
NASA Astrophysics Data System (ADS)
Li, Bai-Shun; Li, Guo-Liang; Cheng, Jun; Peterson, John; Cui, Wei
2016-09-01
Shear measurement is a crucial task in current and future weak lensing survey projects. The reconstruction of the point spread function (PSF) is one of the essential steps involved in this process. In this work, we present three different methods, Gaussianlets, Moffatlets and Expectation Maximization Principal Component Analysis (EMPCA), and quantify their efficiency on PSF reconstruction using four sets of simulated Large Synoptic Survey Telescope (LSST) star images. Gaussianlets and Moffatlets are two different sets of basis functions whose profiles are based on Gaussian and Moffat functions respectively. EMPCA is a statistical method performing an iterative procedure to find the principal components (PCs) of an ensemble of star images. Our tests show that: (1) Moffatlets always perform better than Gaussianlets. (2) EMPCA is more compact and flexible, but the noise existing in the PCs will contaminate the size and ellipticity of PSF. By contrast, Moffatlets keep the size and ellipticity very well.
Status of point spread function determination for Keck adaptive optics
NASA Astrophysics Data System (ADS)
Ragland, S.; Jolissaint, L.; Wizinowich, P.; Neyman, C.
2014-07-01
There is great interest in the adaptive optics (AO) science community to overcome the limitations imposed by incomplete knowledge of the point spread function (PSF). To address this limitation a program has been initiated at the W. M. Keck Observatory (WMKO) to demonstrate PSF determination for observations obtained with Keck AO science instruments. This paper aims to give a broad view of the progress achieved in this area. The concept and the implementation are briefly described. The results from on-sky on-axis NGS AO measurements using the NIRC2 science instrument are presented. On-sky performance of the technique is illustrated by comparing the reconstructed PSFs to NIRC2 PSFs. Accuracy of the reconstructed PSFs in terms of Strehl ratio and FWHM are discussed. Science cases for the first phase of science verification have been identified. More technical details of the program are presented elsewhere in the conference.
A Point Spread Function for the EPOXI Mission
NASA Technical Reports Server (NTRS)
Barry, Richard K.
2010-01-01
The Extrasolar Planet Observation and Characterization and the Deep Impact Extended Investigation missions (together, EPOXI) are currently observing the transits of exoplanets, two comet nuclei at short range, and the Earth and Mars using the High Resolution Instrument (HRI), a 0.3 m f/35 telescope on the Deep Impact probe. The HRI is in a permanently defocused state, with the instrument point of focus about 0.6 cm before the focal plane, due to the use of a reference flat mirror that took a power during ground thermal-vacuum testing. Consequently, the point spread function (PSF) covers approximately nine pixels FWHM and is characterized by a patch with three-fold symmetry due to the three-point support structures of the primary and secondary mirrors. The PSF is also strongly color dependent, varying in shape and size with changes in filtration and target color. While defocus is highly desirable for exoplanet transit observations, to limit sensitivity to intra-pixel variation, it is suboptimal for observations of spatially resolved targets. Consequently, all images used in our analysis of such objects were deconvolved with an instrument PSF. The instrument PSF is also being used to optimize transit analysis. We discuss the development and usage of an instrument PSF for these observations.
A 3D point-kernel multiple scatter model for parallel-beam SPECT based on a gamma-ray buildup factor
NASA Astrophysics Data System (ADS)
Marinkovic, Predrag; Ilic, Radovan; Spaic, Rajko
2007-09-01
A three-dimensional (3D) point-kernel multiple scatter model for point spread function (PSF) determination in parallel-beam single-photon emission computed tomography (SPECT), based on a dose gamma-ray buildup factor, is proposed. This model embraces nonuniform attenuation in a voxelized object of imaging (the patient body) and multiple scattering, which is treated as in point-kernel integration gamma-ray shielding problems. First-order Compton scattering is computed by means of the Klein-Nishina formula, while multiple scattering is accounted for by making use of a dose buildup factor. An asset of the present model is the possibility of generating a complete two-dimensional (2D) PSF that can be used for 3D SPECT reconstruction by means of iterative algorithms. The proposed model is convenient in those situations where more exact techniques are not economical. In test calculations for the proposed model (a point source in a nonuniform scattering object with parallel-beam collimator geometry), the multiple-order scatter PSFs generated with the proposed model matched well with those from Monte Carlo (MC) simulations. Discrepancies are observed only at the exponential tails, mostly due to the high statistical uncertainty of the MC simulations in this region rather than any inappropriateness of the model.
NASA Astrophysics Data System (ADS)
Brodu, N.; Lague, D.
2012-04-01
3D point clouds derived from terrestrial laser scanners (TLS) and photogrammetry are now frequently used in geomorphology to achieve greater precision and completeness in surveying natural environments than was feasible a few years ago. Yet scientific exploitation of these large and complex 3D data sets remains difficult and would benefit from automated classification procedures that could pre-process the raw point cloud data. Typical applications are the separation of vegetation from ground or cliff outcrops, the distinction between fresh rock surfaces and rockfall, the classification of flat or rippled beds, and more generally the classification of 3D surfaces according to their morphology directly in the native point cloud organization, rather than after a sometimes cumbersome meshing or gridding phase. Developing such classification procedures remains difficult because of the 3D nature of the data generated from ground-based systems (as opposed to the 2.5D nature of aerial lidar data) and the heterogeneity and complexity of natural surfaces. We present a new software suite (CANUPO) that can classify raw point clouds in 3D based on a new geometrical measure: the multi-scale dimensionality. This method exploits the multi-resolution characteristics of high-resolution datasets covering scales ranging from a few centimeters to hundreds of meters. The dimensionality characterizes the local 3D organization of the point cloud within spheres centered on the measured points and varies from 1D (points set along a line) and 2D (points forming a plane) to the full 3D volume. By varying the diameter of the sphere, we track how the local cloud geometry behaves across scales (typically ranging from 5 cm to 1 m). We present the technique and illustrate its efficiency on two examples: separating riparian vegetation from ground, and classifying a steep mountain stream as vegetation, rock, gravel or water surface. In these two cases, separating the
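The local dimensionality measure described above can be sketched at a single scale: gather the points inside a sphere, take the PCA eigenvalue proportions of their covariance, and decide whether the neighbourhood is line-like, plane-like, or volumetric. This is a simplified reading of CANUPO's measure; the 0.9 decision thresholds are our own illustrative choice, not the software's.

```python
import numpy as np

def local_dimensionality(cloud, center, radius):
    """Classify the local geometry inside a sphere as 1D, 2D or 3D from
    the PCA eigenvalue proportions of the neighbouring points.
    (CANUPO repeats this across many radii; thresholds here are ours.)"""
    nbrs = cloud[np.linalg.norm(cloud - center, axis=1) <= radius]
    cov = np.cov(nbrs.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]
    p = ev / ev.sum()              # eigenvalue proportions, p[0] >= p[1] >= p[2]
    if p[0] > 0.9:
        return 1                   # points set along a line
    if p[0] + p[1] > 0.9:
        return 2                   # points forming a plane
    return 3                       # full 3D volume

rng = np.random.default_rng(0)
line = np.c_[np.linspace(0, 1, 200), np.zeros(200), np.zeros(200)]
plane = np.c_[rng.random((200, 2)), np.zeros(200)]
```

Repeating this over a range of sphere diameters yields the multi-scale signature CANUPO classifies on.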
Goldberg, K.A. |; Tejnil, E.; Bokor, J. |
1995-12-01
A 3-D electromagnetic field simulation is used to model the propagation of extreme ultraviolet (EUV), 13-nm, light through sub-1500 Å diameter pinholes in a highly absorptive medium. Deviations of the diffracted wavefront phase from an ideal sphere are studied within 0.1 numerical aperture, to predict the accuracy of EUV point diffraction interferometers used in at-wavelength testing of nearly diffraction-limited EUV optical systems. Aberration magnitudes are studied for various 3-D pinhole models, including cylindrical and conical pinhole bores.
NASA Technical Reports Server (NTRS)
Hassan, M. I.; Kuwana, K.; Saito, K.
2001-01-01
In the past, we measured the 3-D flow structure in the liquid and gas phases created by a spreading flame over liquid fuels. In that effort, we employed several different techniques, including our original laser sheet particle tracking (LSPT) technique, which is capable of measuring transient 2-D flow structures. Recently we obtained a state-of-the-art integrated particle image velocimetry (IPIV) system, whose function is similar to LSPT but which has an integrated data recording and processing system. To evaluate the accuracy of our IPIV system, we conducted a series of flame spread tests using the same experimental apparatus as in our previous flame spread studies and obtained a series of 2-D flow profiles corresponding to our previous LSPT measurements. We confirmed that both the LSPT and IPIV techniques produce similar data, but the IPIV data contain more detailed flow structures than the LSPT data. Here we present some of the newly obtained IPIV flow structure data and discuss the role of gravity in the flame-induced flow structures. Note that the application of IPIV to our flame spread problems is not straightforward; it required several preliminary tests of its accuracy, including this comparison of IPIV to LSPT.
Stieghorst, Jan; Majaura, Daniel; Wevering, Hendrik; Doll, Theodor
2016-03-01
The direct fabrication of silicone-rubber-based individually shaped active neural implants requires high-speed-curing systems in order to prevent extensive spreading of the viscous silicone rubber materials during vulcanization. Therefore, an infrared-laser-based test setup was developed to cure the silicone rubber materials rapidly and to evaluate the resulting spreading in relation to its initial viscosity, the absorbed infrared radiation, and the surface tensions of the fabrication bed's material. Different low-adhesion materials (polyimide, Parylene-C, polytetrafluoroethylene, and fluorinated ethylene propylene) were used as bed materials to reduce the spreading of the silicone rubber materials by means of their well-known weak surface tensions. Further, O2-plasma treatment was performed on the bed materials to reduce the surface tensions. To calculate the absorbed radiation, the emittance of the laser was measured, and the absorptances of the materials were investigated with Fourier transform infrared spectroscopy in attenuated total reflection mode. A minimum silicone rubber spreading of 3.24% was achieved after 2 s curing time, indicating the potential usability of the presented high-speed-curing process for the direct fabrication of thermal-curing silicone rubbers. PMID:26967063
Attribute-based point cloud visualization in support of 3-D classification
NASA Astrophysics Data System (ADS)
Zlinszky, András; Otepka, Johannes; Kania, Adam
2016-04-01
Despite the rich information available in LIDAR point attributes through full waveform recording, radiometric calibration and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of these at a time. Meanwhile, point cloud classification is rapidly evolving and uses not only individual attributes but combinations of them. In order to understand input data and output results better, more advanced methods for visualization are needed. Here we propose an algorithm of the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format, which efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually by applying predefined or user-generated palettes in a simple .xml format. The palette colours are assigned by setting the Red, Green and Blue attributes of each point to the colour that the palette pre-defines for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute under consideration. Additionally, combinations of attributes can be visualized based on RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported and visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
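The palette-assignment step described above amounts to scaling an attribute and interpolating RGB stops over it. The following is a toy stand-in for OPALS' palette-driven colorization, not its actual API; function and parameter names are ours.

```python
import numpy as np

def colorize(attr, palette):
    """Assign an RGB colour to every point by linearly interpolating a
    palette (a list of RGB stops) over the min-max scaled attribute values.
    Illustrative only; OPALS reads its palettes from .xml files."""
    a = np.asarray(attr, dtype=float)
    t = (a - a.min()) / (a.max() - a.min())          # scale attribute to [0, 1]
    stops = np.linspace(0.0, 1.0, len(palette))      # evenly spaced palette stops
    pal = np.asarray(palette, dtype=float)
    return np.stack([np.interp(t, stops, pal[:, c]) for c in range(3)], axis=1)

# Blue -> red palette applied to a per-point intensity attribute.
rgb = colorize([0.0, 5.0, 10.0], [(0, 0, 255), (255, 0, 0)])
```

The resulting per-point RGB triples can be written into any point cloud format that supports colour attributes, exactly as the abstract describes.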
A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation
NASA Astrophysics Data System (ADS)
Kıvılcım, C. Ö.; Duran, Z.
2016-06-01
The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. Conventional measurement techniques, however, require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation has become an active area of research; however, fully automated cultural heritage documentation remains an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We demonstrate the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.
Optimizing the rotating point spread function by SLM aided spiral phase modulation
NASA Astrophysics Data System (ADS)
Baránek, M.; Bouchal, Z.
2014-12-01
We demonstrate the vortex point spread function (PSF) whose shape and the rotation sensitivity to defocusing can be controlled by a phase-only modulation implemented in the spatial or frequency domains. Rotational effects are studied in detail as a result of the spiral modulation carried out in discrete radial and azimuthal sections with different topological charges. As the main result, a direct connection between properties of the PSF and the parameters of the spiral mask is found and subsequently used for an optimal shaping of the PSF and control of its defocusing rotation rate. Experiments on the PSF rotation verify a good agreement with theoretical predictions and demonstrate potential of the method for applications in microscopy, tracking of particles and 3D imaging.
Point spread functions for the Solar optical telescope onboard Hinode
NASA Astrophysics Data System (ADS)
Wedemeyer-Böhm, S.
2008-08-01
Aims: We investigate the combined point spread function (PSF) of the Broadband Filter Imager (BFI) and the Solar Optical Telescope (SOT) onboard the Hinode spacecraft. Methods: Observations of the Mercury transit from November 2006 and the solar eclipses from 2007 are used to determine the PSFs of SOT for the blue, green, and red continuum channels of the BFI. For each channel, we calculate large grids of theoretical point spread functions by convolving the ideal diffraction-limited PSF with Voigt profiles. These PSFs are applied to artificial images of an eclipse and a Mercury transit. Comparing the resulting artificial intensity profiles across the terminator with the corresponding observed profiles yields a quality measure for each case; the optimum PSF for each observed image is indicated by the best fit. Results: The observed images of the Mercury transit and the eclipses exhibit a clear proportional relation between the residual intensity and the overall light level in the telescope. In addition, there is an anisotropic stray-light contribution. These two factors make it very difficult to pin down a single unique PSF that can account for all observational conditions. Nevertheless, the range of possible PSF models can be limited by using additional constraints such as the pre-flight measurements of the Strehl ratio. Conclusions: The BFI/SOT operate close to the diffraction limit and have only a rather small stray-light contribution. The FWHM of the PSF is broadened by only ~1% with respect to the diffraction-limited case, while the overall Strehl ratio is ~0.8. In view of the large variations - best seen in the residual intensities of eclipse images - and the dependence on the overall light level and position in the FOV, a range of PSFs should be considered instead of a single PSF per wavelength. The individual PSFs of that range then allow the determination of error margins for the quantity under investigation. Nevertheless, the stray
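The PSF model used above (a diffraction-limited core convolved with a Voigt profile) can be sketched in 1D. The grid, the sinc² stand-in for the ideal core, and the Voigt widths below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.special import voigt_profile

# 1D sketch: a diffraction-limited core convolved with a Voigt profile
# that models stray light. Grid and widths are our own assumptions.
x = np.linspace(-5, 5, 1001)
core = np.sinc(2 * x) ** 2                 # stand-in for the ideal Airy core
wings = voigt_profile(x, 0.2, 0.1)         # Voigt: Gaussian sigma, Lorentzian gamma
psf = np.convolve(core, wings, mode="same")
psf /= psf.sum()                           # normalise to unit energy

def fwhm(y):
    """Full width at half maximum on the shared grid x."""
    above = np.where(y >= y.max() / 2)[0]
    return x[above[-1]] - x[above[0]]
```

Varying the Voigt parameters over a grid and scoring each candidate against an observed terminator profile mirrors the fitting procedure the abstract describes.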
Evaluating the Potential of RTK-UAV for Automatic Point Cloud Generation in 3D Rapid Mapping
NASA Astrophysics Data System (ADS)
Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.
2016-06-01
During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The utilization of geospatial data, with digital surface models as a basic reference, is mandatory to provide an accurate, quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is critical in these situations. UAVs as alternative platforms for 3D point cloud acquisition offer potential because of their flexibility and practicality combined with low-cost implementation. Moreover, the high resolution data collected from UAV platforms can provide a quick overview of the disaster area. The target of this paper is to experiment with and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study. Moreover, electronic hardware is used to simplify user interaction with the UAV, such as RTK-GPS/camera synchronization; besides the synchronization, lever arm calibration is performed. The platform is equipped with a Sony NEX-5N 16.1-megapixel camera as the imaging sensor. The lens attached to the camera is ZEISS optics, a prime lens with F1.8 maximum aperture and 24 mm focal length, to deliver outstanding images. All necessary calibrations were performed, and the flight was conducted over the area of interest at a flight height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with a horizontal accuracy of σ = 1.5 cm and a vertical accuracy of σ = 2.3 cm. The results of direct georeferencing are compared to these points, and the experimental results show that decimeter-level accuracy for 3D point clouds is achievable with the proposed system, which is suitable
NASA Astrophysics Data System (ADS)
Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.
2015-05-01
With 3D image measurement software now in wide use following recent developments in computer-vision technology, image-based 3D measurement has acquired an application field ranging from desktop objects to topographic surveys of large geographical areas. In particular, orientation, which used to be a complicated process in image measurement, can now be performed automatically by simply taking many pictures around the object. For a fully textured object, the 3D measurement of surface features is carried out entirely automatically from the oriented images, greatly facilitating the acquisition of dense 3D point clouds from images with high precision. Against this background, for small and middle-sized objects we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographical measurement with airborne images taken by a small UAV [1~5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data; for topographic measurement, we examine the influence of GCP distribution on accuracy. In addition, we examined the differences in analytical results between several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains the features of each. To verify the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topography measurement, we used airborne image data photographed at the test field in Yadorigi of Matsuda City, Kanagawa Prefecture, Japan. We constructed ground control points measured by RTK-GPS and total station, and we show the results of the analysis made
LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction
NASA Astrophysics Data System (ADS)
Abdullah, S. M.; Awrangjeb, M.; Lu, G.
2014-08-01
Effective building detection and roof reconstruction are in high demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on an analysis of the DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Starting from the maximum LiDAR point height and moving towards the minimum, all the LiDAR points at each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted, and the coplanar point nearest to the midpoint of each line is chosen as a seed point. This seed point and its neighbouring points are used to generate the plane equation. The plane is grown in a region-growing fashion until no new points can be added. A robust rule-based tree removal method, applying four different rules, is then used to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated on six different data sets containing hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared with a recently proposed method.
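The seed-and-grow step above can be sketched as: fit a plane to a seed point's nearest neighbours, then keep every point whose distance to that plane is below a tolerance. This is a one-shot simplification (the paper grows the region iteratively and applies tree-removal rules afterwards); names and thresholds are ours.

```python
import numpy as np

def grow_plane(points, seed_idx, tol=0.05):
    """Extract a planar segment from a seed point: fit a plane to the
    seed's 8 nearest neighbours (SVD), keep points within tol of it.
    A minimal sketch of the paper's region-growing idea."""
    d2 = np.linalg.norm(points - points[seed_idx], axis=1)
    nbrs = points[np.argsort(d2)[:8]]              # seed neighbourhood
    centroid = nbrs.mean(axis=0)
    # Plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(nbrs - centroid)
    normal = vt[-1]
    dist = np.abs((points - centroid) @ normal)
    return np.where(dist < tol)[0]

rng = np.random.default_rng(1)
roof = np.c_[rng.random((40, 2)), 0.5 * np.ones(40)]   # flat roof at z = 0.5
tree = rng.random((10, 3)) + [0, 0, 2]                 # off-plane clutter above
pts = np.vstack([roof, tree])
segment = grow_plane(pts, seed_idx=0)
```

A faithful implementation would re-estimate the plane as points are added and stop when no new point qualifies, as the abstract describes.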
Point spread functions and deconvolution of ultrasonic images.
Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten
2015-03-01
This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation. PMID:25768819
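The Richardson-Lucy scheme compared above can be sketched in 1D: repeatedly convolve the current estimate with the PSF, divide the data by the result, and correlate the ratio back. This plain version omits the total variation regularization that the article found performs best; the test signal is our own.

```python
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    """Plain 1D Richardson-Lucy deconvolution (no TV regularisation,
    unlike the article's best-performing variant)."""
    est = np.full_like(image, image.mean())    # flat non-negative initial guess
    psf_flip = psf[::-1]                       # correlation = convolution with flip
    for _ in range(iterations):
        conv = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# Blur a two-spike object with a narrow Gaussian PSF, then restore it.
x = np.arange(-10, 11)
psf = np.exp(-0.5 * (x / 1.5) ** 2)
psf /= psf.sum()
obj = np.zeros(64)
obj[20] = 1.0
obj[40] = 0.5
blurred = np.convolve(obj, psf, mode="same")
restored = richardson_lucy(blurred, psf, 200)
```

The multiplicative update keeps the estimate non-negative, which is why RL is popular for intensity images such as C-scans.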
On point spread function modelling: towards optimal interpolation
NASA Astrophysics Data System (ADS)
Bergé, Joel; Price, Sedona; Amara, Adam; Rhodes, Jason
2012-01-01
Point spread function (PSF) modelling is a central part of any astronomy data analysis relying on measuring the shapes of objects. It is especially crucial for weak gravitational lensing, in order to beat down systematics and allow one to reach the full potential of weak lensing in measuring dark energy. A PSF modelling pipeline is made of two main steps: the first one is to assess its shape on stars, and the second is to interpolate it at any desired position (usually galaxies). We focus on the second part, and compare different interpolation schemes, including polynomial interpolation, radial basis functions, Delaunay triangulation and Kriging. For that purpose, we develop simulations of PSF fields, in which stars are built from a set of basis functions defined from a principal components analysis of a real ground-based image. We find that Kriging gives the most reliable interpolation, significantly better than the traditionally used polynomial interpolation. We also note that although a Kriging interpolation on individual images is enough to control systematics at the level necessary for current weak lensing surveys, more elaborate techniques will have to be developed to reach future ambitious surveys' requirements.
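The interpolation problem above (measure a PSF property at star positions, predict it at galaxy positions) can be sketched with Gaussian radial basis functions, which are the building block Kriging extends with a fitted covariance model. Everything here (kernel width, field layout, the smooth "size" field) is an illustrative assumption.

```python
import numpy as np

def rbf_interpolate(train_xy, train_val, query_xy, eps=0.3):
    """Gaussian RBF interpolation of a PSF property (e.g. size) sampled
    at star positions, evaluated at galaxy positions. Kriging replaces
    the fixed kernel with a covariance model fitted to the data."""
    d_tt = np.linalg.norm(train_xy[:, None] - train_xy[None, :], axis=-1)
    d_qt = np.linalg.norm(query_xy[:, None] - train_xy[None, :], axis=-1)
    phi = lambda d: np.exp(-(d / eps) ** 2)
    weights = np.linalg.solve(phi(d_tt), train_val)   # exact at the stars
    return phi(d_qt) @ weights

# PSF size varies smoothly over the field; stars sample it on a 5x5 grid.
g = np.linspace(0, 1, 5)
stars = np.array([(u, v) for u in g for v in g])
size = 1.0 + 0.3 * stars[:, 0] * stars[:, 1]          # smooth ground truth
galaxies = np.array([[0.33, 0.77]])
pred = rbf_interpolate(stars, size, galaxies)
```

By construction the interpolant reproduces the training values at the star positions, which is the property the paper's comparison of schemes starts from.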
NASA Astrophysics Data System (ADS)
Rajendra, Y. D.; Mehrotra, S. C.; Kale, K. V.; Manza, R. R.; Dhumal, R. K.; Nagne, A. D.; Vibhute, A. D.
2014-11-01
Terrestrial laser scanners (TLS) are used to obtain dense point samples of a large object's surface. TLS is a new and efficient method to digitize large objects or scenes. The collected point samples come in different formats and coordinate systems, and multiple scans are required to cover a large object such as a heritage site. Point cloud registration is the important task of bringing different scans together into a single 3D model in one coordinate system. Point clouds can be registered using one of three approaches, or a combination of them: target-based, feature-extraction-based, or point-cloud-based. For the present study we adopted the point-cloud-based registration approach. We collected partially overlapping 3D point cloud data of the Department of Computer Science & IT (DCSIT) building located in Dr. Babasaheb Ambedkar Marathwada University, Aurangabad. To get complete point cloud information for the building we took 12 scans: 4 scans for the exterior and 8 scans for the interior façade data collection. There are various algorithms available in the literature, but Iterative Closest Point (ICP) is the most dominant. Various researchers have developed variants of ICP for a better registration process. The ICP point cloud registration algorithm is based on the search for pairs of nearest points in two adjacent scans and calculates the transformation parameters between them; it provides the advantage that no artificial target is required for the registration process. We studied and implemented three variants of the ICP algorithm in MATLAB: brute force, KDTree, and partial matching. The results show that the implemented versions of the ICP algorithm give better registration speed and accuracy compared with the CloudCompare open source software.
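The KDTree variant of ICP described above can be sketched as follows: pair each source point with its nearest target point via a k-d tree, solve for the best rigid transform with the SVD (Kabsch) solution, apply it, and repeat. The paper's implementation is in MATLAB; this Python sketch uses SciPy's k-d tree and synthetic scans of our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Basic point-to-point ICP with a k-d tree for nearest-neighbour
    search and the SVD (Kabsch) closed-form rigid transform per step."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                   # nearest-neighbour pairs
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        r = vt.T @ u.T                             # optimal rotation
        if np.linalg.det(r) < 0:                   # guard against reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        t = mu_t - r @ mu_s
        src = src @ r.T + t
    return src

# Second "scan": the first one under a small known rotation + translation.
rng = np.random.default_rng(2)
scan_a = rng.random((100, 3))
a = 0.02
rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
scan_b = scan_a @ rz.T + [0.01, -0.005, 0.02]
aligned = icp(scan_a, scan_b)
```

With partially overlapping real scans, the partial-matching variant additionally rejects pairs whose nearest-neighbour distance is too large.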
NASA Astrophysics Data System (ADS)
Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.
2016-06-01
High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is needed that can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions, and camera information, and 3) road sign recognition using template matching after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation, and partial occlusion.
Localizing edges for estimating point spread function by removing outlier points
NASA Astrophysics Data System (ADS)
Li, Yong; Xu, Liangpeng; Jin, Hongbin; Zou, Junwei
2016-02-01
This paper presents an approach to detect sharp edges for estimating point spread function (PSF) of a lens. A category of PSF estimation methods detect sharp edges from low-resolution (LR) images and estimate PSF with the detected edges. Existing techniques usually rely on accurate detection of ending points of the profile normal to an edge. In practice, however, it is often very difficult to localize profiles accurately. Inaccurately localized profiles generate a poor PSF estimation. We employ the Random Sample Consensus (RANSAC) algorithm to rule out outlier points. In RANSAC, prior knowledge about a pattern shape is incorporated, and the edge points lying far away from the pattern shape will be removed. The proposed method is tested on images of saddle patterns. Experimental results show that the proposed method can robustly localize sharp edges from LR saddle pattern images and yield accurate PSF estimation.
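The RANSAC outlier-removal step described above can be sketched with a line prior: repeatedly fit a model to a minimal sample, count points consistent with it, and keep the largest consensus set. The paper's shape prior is a saddle pattern rather than a straight line; the line model, thresholds, and synthetic edge below are our illustrative substitutions.

```python
import numpy as np

def ransac_line(points, trials=200, tol=0.05, seed=0):
    """Fit a 2D line to edge points with RANSAC and return the inlier mask.
    Stands in for the paper's shape-prior rejection of mislocalised
    profile end points (their prior is a saddle pattern, not a line)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(trials):
        i, j = rng.choice(len(points), 2, replace=False)   # minimal sample
        d = points[j] - points[i]
        n = np.array([-d[1], d[0]])                        # line normal
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        dist = np.abs((points - points[i]) @ n) / norm     # point-line distance
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

xs = np.linspace(0, 1, 30)
edge = np.c_[xs, 2 * xs + 0.1]                  # well-localised edge points
outliers = np.array([[0.2, 5.0], [0.7, -3.0]])  # badly localised profiles
mask = ransac_line(np.vstack([edge, outliers]))
```

Only the inliers would then be passed to the PSF estimation step, which is what removes the poor estimates the paper reports for unfiltered profiles.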
Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.
2016-06-01
Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required, and an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated, given by probability densities as well as architectural patterns. Since normal distributions cannot always be assumed, the derivation of location and shape parameters of building objects is based on kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.
Point Spread Function (PSF) noise filter strategy for geiger mode LiDAR
NASA Astrophysics Data System (ADS)
Smith, O'Neil; Stark, Robert; Smith, Philip; St. Romain, Randall; Blask, Steven
2013-05-01
LiDAR is an efficient optical remote sensing technology with applications in geography, forestry, and defense. Its effectiveness is often limited by signal-to-noise ratio (SNR). Geiger-mode avalanche photodiode (APD) detectors operate above the critical voltage, so a single photoelectron can initiate the current surge, making the device very sensitive. This sensitivity comes at the expense of requiring computationally intensive noise filtering, since noise degrades the imaging system and reduces its capability. Common noise-reduction algorithms have drawbacks such as overly aggressive filtering, or decimation to improve quality and performance. In recent years, there has been growing interest in GPUs (Graphics Processing Units) for their ability to perform massive parallel processing. In this paper, we leverage this capability to reduce processing latency. The Point Spread Function (PSF) filter algorithm is a local spatial measure that has been GPGPU accelerated. The idea is to use a kernel density estimation technique for point clustering. We associate a local likelihood measure with every point of the input data, capturing the probability that a 3D point is a true target-return photon or noise (background photons, dark current). This process suppresses noise and allows for detection of outliers. We apply this approach to the LiDAR noise filtering problem and observe a speed-up factor of 30-50 times compared to a traditional sequential CPU implementation.
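The kernel-density idea behind the PSF filter can be sketched as follows: each point receives a Gaussian-weighted local likelihood from its neighbors, and sparse returns (likely dark-current or background photons) fall below a threshold. This is a CPU-only, brute-force illustration of the general technique, not the paper's GPGPU implementation; bandwidth and threshold values are illustrative.

```python
import math

def density_filter(points, bandwidth=1.0, threshold=0.5):
    """Keep points whose Gaussian-weighted neighbor density exceeds a
    threshold; isolated returns are discarded as probable noise."""
    kept = []
    for p in points:
        score = 0.0
        for q in points:
            if p is q:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(p, q))
            score += math.exp(-d2 / (2.0 * bandwidth ** 2))
        if score >= threshold:
            kept.append(p)
    return kept
```

The O(n²) neighbor sum is exactly the part that parallelizes well on a GPU, since each point's likelihood is independent of the others.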
Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3 As2 crystals
NASA Astrophysics Data System (ADS)
Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiongjun; Xie, Xincheng; Wei, Jian; Wang, Jian
The 3D Dirac semimetal state is located at the topological phase boundary and can potentially be driven into other topological phases, including topological insulator, topological metal and the long-pursued topological superconductor states. Crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in atmosphere. By precisely controlled point contact (PC) measurements, we observe exotic superconductivity in the vicinity of the point contact region on the surface of Cd3As2 crystals, which might be induced by the local pressure in the out-of-plane direction from the metallic tip of the PC. The observation of a zero bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric about zero bias further reveals p-wave-like unconventional superconductivity in Cd3As2. Considering the special topological property of the 3D Dirac semimetal, our findings may indicate that the Cd3As2 crystal under certain conditions is a candidate topological superconductor, which is predicted to support Majorana zero modes or gapless Majorana edge/surface modes on the boundary depending on the dimensionality of the material. This work was financially supported by the National Basic Research Program of China (Grant No. 2012CB927400).
Effects of cyclone diameter on performance of 1D3D cyclones: Cut point and slope
Technology Transfer Automated Retrieval System (TEKTRAN)
Cyclones are a commonly used air pollution abatement device for separating particulate matter (PM) from air streams in industrial processes. Several mathematical models have been proposed to predict the cut point of cyclones as cyclone diameter varies. The objective of this research was to determine...
NASA Astrophysics Data System (ADS)
Gac, Sébastien; Dyment, Jérôme; Tisseau, Chantal; Goslin, Jean
2003-09-01
The axial magnetic anomaly amplitude along Mid-Atlantic Ridge segments is systematically twice as high at segment ends compared with segment centres. Various processes have been proposed to account for such observations, either directly or indirectly related to the thermal structure of the segments: (1) shallower Curie isotherm at segment centres, (2) higher Fe-Ti content at segment ends, (3) serpentinized peridotites at segment ends or (4) a combination of these processes. In this paper the contribution of each of these processes to the axial magnetic anomaly amplitude is quantitatively evaluated by performing 3-D numerical modelling of the magnetization distribution and magnetic anomaly over a medium-sized, 50-km-long segment. The magnetization distribution depends on the thermal structure and thermal evolution of the lithosphere. The thermal structure is calculated considering the presence of a permanent hot zone beneath the segment centre. The `best-fitting' thermal structure is determined by adjusting the parameters (shape, size, depth, etc.) of this hot zone, to fit the modelled geophysical outputs (Mantle Bouguer anomaly, maximum earthquake depths and crustal thickness) to the observations. Both the thermoremanent magnetization, acquired during the thermal evolution, and the induced magnetization, which depends on the present thermal structure, are modelled. The resulting magnetic anomalies are then computed and compared with the observed ones. This modelling exercise suggests that, in the case of aligned and slightly offset segments, a combination of higher Fe-Ti content and the presence of serpentinized peridotites at segment ends will produce the observed higher axial magnetic anomaly amplitudes over the segment ends. In the case of greater offsets, the presence of serpentinized peridotites at segment ends is sufficient to account for the observations.
NASA Astrophysics Data System (ADS)
Chiabrando, F.; Sammartano, G.; Spanò, A.
2016-06-01
This paper retraces some research activities and applications of 3D survey techniques and Building Information Modelling (BIM) in the environment of Cultural Heritage. It describes the diffusion in recent years of the as-built BIM approach in Heritage Assets management, the so-called Built Heritage Information Modelling/Management (BHIMM or HBIM), which is nowadays an important and sustainable perspective in the documentation and administration of historic buildings and structures. The work focuses on documentation derived from 3D survey techniques, which can be understood as a significant and unavoidable knowledge base for BIM conception and modelling, in the perspective of a coherent and complete management and valorisation of CH. It deepens the potentialities offered by 3D integrated survey techniques to acquire, productively and quite easily, many kinds of 3D information, not only geometrical but also radiometric attributes, helping the recognition, interpretation and characterization of the state of conservation and degradation of architectural elements. These techniques provide increasingly descriptive models corresponding to the geometrical complexity of buildings or aggregates in the well-known 5D (3D + time and cost dimensions). Point clouds derived from 3D survey acquisition (aerial and terrestrial photogrammetry, LiDAR and their integration) are reality-based models that can be used in a semi-automatic way to manage, interpret, and moderately simplify the geometrical shapes of historical buildings, which are, as is well known, examples of non-regular and complex geometry, unlike modern constructions with simple and regular ones. In the paper, some of these issues are addressed and analyzed through experiences regarding the creation and managing of HBIM projects on historical heritage at different scales, using different platforms and various workflows. The paper focuses on LiDAR data handling with the aim to manage and extract geometrical information; on
The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design
NASA Astrophysics Data System (ADS)
Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas
2011-03-01
The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets, but for 3D it has attained little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detect planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel accumulator design focused on achieving the same size for each cell and compare it to existing designs.
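The baseline voting scheme that the accumulator designs compete over can be sketched as follows: each point votes for every plane (parameterized by normal angles θ, φ and distance ρ) that could pass through it. Note this sketch uses exactly the naive uniform (θ, φ, ρ) accumulator whose uneven cell sizes the paper criticizes; the equal-cell-size design of the paper is not implemented here, and all bin counts are illustrative.

```python
import math
from collections import defaultdict

def hough_planes(points, n_theta=18, n_phi=36, rho_step=0.5):
    """Vote each 3D point into cells of the plane parameterization
    rho = x*sin(t)*cos(p) + y*sin(t)*sin(p) + z*cos(t),
    using a simple uniform accumulator. Returns the winning cell key
    (theta index, phi index, rho index) and the full accumulator."""
    acc = defaultdict(int)
    for x, y, z in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            for pj in range(n_phi):
                phi = 2.0 * math.pi * pj / n_phi
                rho = (x * math.sin(theta) * math.cos(phi)
                       + y * math.sin(theta) * math.sin(phi)
                       + z * math.cos(theta))
                acc[(ti, pj, round(rho / rho_step))] += 1
    return max(acc, key=acc.get), acc
```

For a horizontal plane (normal along z, i.e. θ = 0) at height z = 2, every point votes into the θ-index-0, ρ-index-4 cells, which illustrates one of the sampling pathologies: at θ = 0 all φ cells are degenerate duplicates of the same plane.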
A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2016-04-01
Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual for raw point clouds to be filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D-feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of
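The clustering step can be sketched with a minimal pure-Python DBSCAN. The two input parameters mentioned above appear here as `eps` (neighborhood radius) and `min_pts` (minimum number of points for a core point); this brute-force version is illustrative only and would not scale to millions of LiDAR points without spatial indexing.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise).

    eps and min_pts strongly influence the number, shape and size of
    the detected clusters, as noted in the abstract.
    """
    n = len(points)
    labels = [None] * n
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # brute-force neighborhood query; a k-d tree would replace this in practice
    neighbors = [[j for j in range(n)
                  if j != i and dist2(points[i], points[j]) <= eps * eps]
                 for i in range(n)]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) + 1 < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1                # start a new cluster from this core point
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors[j]) + 1 >= min_pts:
                frontier.extend(neighbors[j])  # expand through core points only
    return labels
```

Each resulting cluster would correspond to one candidate patch of erosion or deposition on the glacier front, with label -1 marking unclustered (noise) returns.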
WE-F-16A-02: Design, Fabrication, and Validation of a 3D-Printed Proton Filter for Range Spreading
Remmes, N; Courneyea, L; Corner, S; Beltran, C; Kemp, B; Kruse, J; Herman, M; Stoker, J
2014-06-15
Purpose: To design, fabricate and test a 3D-printed filter for proton range spreading in scanned proton beams. The narrow Bragg peak in lower-energy synchrotron-based scanned proton beams can result in longer treatment times for shallow targets due to energy switching time and plan quality degradation due to minimum monitor unit limitations. A filter with variable thicknesses patterned on the same scale as the beam's lateral spot size will widen the Bragg peak. Methods: The filter consists of pyramids dimensioned to have a Gaussian distribution in thickness. The pyramids are 2.5mm wide at the base, 0.6 mm wide at the peak, 5mm tall, and are repeated in a 2.5mm pseudo-hexagonal lattice. Monte Carlo simulations of the filter in a proton beam were run using TOPAS to assess the change in depth profiles and lateral beam profiles. The prototypes were constrained to a 2.5cm diameter disk to allow for micro-CT imaging of promising prototypes. Three different 3D printers were tested. Depth-doses with and without the prototype filter were then measured in a ~70MeV proton beam using a multilayer ion chamber. Results: The simulation results were consistent with design expectations. Prototypes printed on one printer were clearly unacceptable on visual inspection. Prototypes on a second printer looked acceptable, but the micro-CT image showed unacceptable voids within the pyramids. Prototypes from the third printer appeared acceptable visually and on micro-CT imaging. Depth dose scans using the prototype from the third printer were consistent with simulation results. Bragg peak width increased by about 3x. Conclusions: A prototype 3D printer pyramid filter for range spreading was successfully designed, fabricated and tested. The filter has greater design flexibility and lower prototyping and production costs compared to traditional ridge filters. Printer and material selection played a large role in the successful development of the filter.
NASA Astrophysics Data System (ADS)
Tajbakhsh, Touraj
2010-02-01
A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose a depth-first-search segmentation algorithm traversing a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial as well as real sensed data.
NASA Astrophysics Data System (ADS)
Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B.
2016-06-01
Terrestrial laser scanning constitutes a powerful method in spatial information data acquisition and allows for geological outcrops to be captured with high resolution and accuracy. A crucial aspect for numerous geologic applications is the extraction of rock surface orientations from the data. This paper focuses on the detection of planes in rock surface data by applying a segmentation algorithm directly to a 3D point cloud. Its performance is assessed considering (1) reduced spatial resolution of data and (2) smoothing in the course of data pre-processing. The methodology is tested on simulations of progressively reduced spatial resolution defined by varying point cloud density. Smoothing of the point cloud data is implemented by modifying the neighborhood criteria during normals estimation. The considerable alteration of resulting planes emphasizes the influence of smoothing on the plane detection prior to the actual segmentation. Therefore, the parameter needs to be set in accordance with individual purposes and respective scales of studies. Furthermore, it is concluded that the quality of segmentation results does not decline even when the data volume is significantly reduced down to 10%. The azimuth and dip values of individual segments are determined for planes fit to the points belonging to one segment. Based on these results, azimuth and dip as well as strike character of the surface planes in the outcrop are assessed. Thereby, this paper contributes to a fully automatic and straightforward workflow for a comprehensive geometric description of outcrops in 3D.
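The per-segment azimuth/dip computation rests on fitting a plane to the points of each segment. A minimal least-squares plane fit is sketched below; note it uses the explicit form z = ax + by + c solved via Cramer's rule, which is a simplification of the normals-based fitting in the paper (it breaks down for near-vertical planes, where a total-least-squares fit to the covariance eigenvectors would be used instead).

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points via the
    3x3 normal equations; returns (a, b, c)."""
    sxx = sxy = sx = syy = sy = sxz = syz = sz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y
        sxz += x * z; syz += y * z; sz += z
    # Normal equations A [a b c]^T = rhs, solved by Cramer's rule
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    coeffs = []
    for col in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = rhs[r]   # replace one column with the rhs
        coeffs.append(det3(m) / d)
    return tuple(coeffs)
```

From the fitted coefficients, the (unnormalized) plane normal is (a, b, -1), from which azimuth and dip of the segment could be derived.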
NASA Astrophysics Data System (ADS)
Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao
2015-05-01
Optical measurement techniques are often employed to digitally capture three dimensional shapes of components. The digital data density output from these probes ranges from a few discrete points to millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected, to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information, since those relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal features such as high-frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post-processing algorithms for 3D data of fine features.
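Recovering the geometry equation of an arc segment amounts to fitting a circle to noisy 2D points. A common algebraic approach is the Kåsa least-squares fit, sketched below; the paper's actual extraction algorithm is not reproduced here, so this is an illustrative stand-in for the arc-geometry step.

```python
def fit_circle(points):
    """Kåsa algebraic circle fit: solve for D, E, F in
    x^2 + y^2 + D*x + E*y + F = 0, then recover (cx, cy, r)."""
    a11 = a12 = a13 = a22 = a23 = b1 = b2 = b3 = 0.0
    n = float(len(points))
    for x, y in points:
        s = -(x * x + y * y)
        a11 += x * x; a12 += x * y; a13 += x
        a22 += y * y; a23 += y
        b1 += s * x; b2 += s * y; b3 += s
    A = [[a11, a12, a13], [a12, a22, a23], [a13, a23, n]]
    b = [b1, b2, b3]
    # forward elimination on the (positive definite) 3x3 normal equations
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    # back substitution
    F = b[2] / A[2][2]
    E = (b[1] - A[1][2] * F) / A[1][1]
    D = (b[0] - A[0][1] * E - A[0][2] * F) / A[0][0]
    cx, cy = -D / 2.0, -E / 2.0
    r = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, r
```

The fitted center and radius, together with the angular extent of the supporting points, fully determine the arc segment; residuals against the prescribed tolerance decide whether the points are better modeled as a line or an arc.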
3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds
NASA Astrophysics Data System (ADS)
Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.
2012-12-01
The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.
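The core of the differencing approach is the ICP loop: match each pre-event point to its nearest post-event point, estimate the rigid motion, apply it, and repeat. The sketch below is translation-only for brevity (the authors' adaptation also recovers rotations, typically via an SVD of the correspondence covariance) and uses brute-force nearest neighbors; names and parameters are illustrative.

```python
def icp_translation(source, target, n_iter=20):
    """Translation-only ICP sketch: iteratively match each source point
    to its nearest target point and shift the source by the mean
    residual. Returns the total displacement applied to the source."""
    src = [list(p) for p in source]
    dim = len(src[0])
    for _ in range(n_iter):
        resid = [0.0] * dim
        for p in src:
            # brute-force nearest neighbour; a k-d tree would be used at scale
            q = min(target, key=lambda t: sum((a - b) ** 2 for a, b in zip(p, t)))
            for d in range(dim):
                resid[d] += q[d] - p[d]
        step = [r / len(src) for r in resid]
        for p in src:
            for d in range(dim):
                p[d] += step[d]
        if all(abs(s) < 1e-12 for s in step):
            break
    return [s - o for s, o in zip(src[0], source[0])]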
Design point variation of 3-D loss and deviation for axial compressor middle stages
NASA Technical Reports Server (NTRS)
Roberts, William B.; Serovy, George K.; Sandercock, Donald M.
1988-01-01
The available data on middle-stage research compressors operating near design point are used to derive simple empirical models for the spanwise variation of three-dimensional viscous loss coefficients for middle-stage axial compressor blading. The models make it possible to quickly estimate the total loss and deviation across the blade span when the three-dimensional distribution is superimposed on the two-dimensional variation calculated for each blade element. It is noted that extrapolated estimates should be used with caution since the correlations have been derived from a limited data base.
An exact solution for the 3D MHD stagnation-point flow of a micropolar fluid
NASA Astrophysics Data System (ADS)
Borrelli, A.; Giantesio, G.; Patria, M. C.
2015-01-01
The influence of a non-uniform external magnetic field on the steady three dimensional stagnation-point flow of a micropolar fluid over a rigid uncharged dielectric at rest is studied. The total magnetic field is parallel to the velocity at infinity. It is proved that this flow is possible only in the axisymmetric case. The governing nonlinear partial differential equations are reduced to a system of ordinary differential equations by a similarity transformation, before being solved numerically. The effects of the governing parameters on the fluid flow and on the magnetic field are illustrated graphically and discussed.
Real-time estimation of FLE statistics for 3-D tracking with point-based registration.
Wiles, Andrew D; Peters, Terry M
2009-09-01
Target registration error (TRE) has become a widely accepted error metric in point-based registration since the error metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real-time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented where the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax=b for each marker at each time frame, where x are the six independent FLE covariance parameters and b are the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry, and hence the inverse of the matrix can be computed a priori and used at each instant in which the FLE estimation is required, hence minimizing the level of computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 μm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient to image registration will be obtained by using the TRE of the optical tool as a weighting factor of point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon giving the surgeon the option
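The computational pattern described above, inverting A once offline because it depends only on tool geometry, then reducing each frame's work to a matrix-vector product x = A⁻¹b, can be sketched generically. The 2x2 size and all numeric values below are illustrative; the paper's system is 6x6 (six FLE and six FRE covariance parameters per marker).

```python
def invert(mat):
    """Gauss-Jordan inverse of a small square matrix, computed once
    offline since A depends only on the tool geometry."""
    n = len(mat)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(mat)]
    for col in range(n):
        # partial pivoting for numerical safety
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def estimate_fle(a_inv, fre_params):
    """Per-frame solve x = A^-1 b: map estimated FRE covariance
    parameters to FLE covariance parameters (names hypothetical)."""
    return [sum(a * b for a, b in zip(row, fre_params)) for row in a_inv]
```

Precomputing the inverse turns the per-frame cost into a handful of multiply-adds per marker, which is what makes the real-time estimation feasible.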
NASA Astrophysics Data System (ADS)
Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain
2016-04-01
The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. While different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution on a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, limited only by the number of grains in the point-cloud dataset; 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and
Simulations of 3D Magnetic Merging: Resistive Scalings for Null Point and QSL Reconnection
NASA Astrophysics Data System (ADS)
Effenberger, Frederic; Craig, I. J. D.
2016-01-01
Starting from an exact, steady-state, force-free solution of the magnetohydrodynamic (MHD) equations, we investigate how resistive current layers are induced by perturbing line-tied three-dimensional magnetic equilibria. This is achieved by the superposition of a weak perturbation field in the domain, in contrast to studies where the boundary is driven by slow motions, like those present in photospheric active regions. Our aim is to quantify how the current structures are altered by the contribution of so-called quasi-separatrix layers (QSLs) as the null point is shifted outside the computational domain. Previous studies based on magneto-frictional relaxation have indicated that despite the severe field line gradients of the QSL, the presence of a null is vital in maintaining fast reconnection. Here, we explore this notion using highly resolved simulations of the full MHD evolution. We show that for the null-point configuration, the resistive scaling of the peak current density is close to J ∼ η^{-1}, while the scaling is much weaker, i.e. J ∼ η^{-0.4}, when only the QSL connectivity gradients provide a site for the current accumulation.
Hinode observations and 3D magnetic structure of an X-ray bright point
NASA Astrophysics Data System (ADS)
Alexander, C. E.; Del Zanna, G.; Maclean, R. C.
2011-02-01
Aims: We present complete Hinode Solar Optical Telescope (SOT), X-Ray Telescope (XRT) and EUV Imaging Spectrometer (EIS) observations of an X-ray bright point (XBP) observed on 10-11 October 2007 over its entire lifetime (~12 h). We aim to show how the measured plasma parameters of the XBP change over time and also what kind of similarities the X-ray emission has to a potential magnetic field model. Methods: Information from all three instruments on-board Hinode was used to study its entire evolution. XRT data was used to investigate the structure of the bright point and to measure the X-ray emission. The EIS instrument was used to measure various plasma parameters over the entire lifetime of the XBP. Lastly, the SOT was used to measure the magnetic field strength and provide a basis for potential field extrapolations of the photospheric fields to be made. These were performed and then compared to the observed coronal features. Results: The XBP measured ~15″ in size and was found to be formed directly above an area of merging and cancelling magnetic flux on the photosphere. A good correlation between the rate of X-ray emission and decrease in total magnetic flux was found. The magnetic fragments of the XBP were found to vary on very short timescales (minutes), however the global quasi-bipolar structure remained throughout the lifetime of the XBP. The potential field extrapolations were a good visual fit to the observed coronal loops in most cases, meaning that the magnetic field was not too far from a potential state. Electron density measurements were obtained using a line ratio of Fe XII and the average density was found to be 4.95 × 10^9 cm^-3, with the volumetric plasma filling factor calculated to have an average value of 0.04. Emission measure loci plots were then used to infer a steady temperature of log Te [K] ~ 6.1. The calculated Fe XII Doppler shifts show velocity changes in and around the bright point of ±15 km s^-1 which are observed to change
NASA Astrophysics Data System (ADS)
Arribas, Victor; Casas, Lluís; Estop, Eugènia; Labrador, Manuel
2014-01-01
Crystallography and X-ray diffraction techniques are essential topics in geosciences and other solid-state sciences. Their fundamentals, which include point symmetry groups, are taught in the corresponding university courses. In-depth meaningful learning of symmetry concepts is difficult and requires a capacity for abstraction and spatial vision. Traditionally, wooden crystallographic models are used as support material. In this paper, we describe a new interactive tool, freely available, inspired by such models. Thirty-two PDF files containing embedded 3D models have been created. Each file illustrates a point symmetry group and can be used to teach/learn essential symmetry concepts and the International Hermann-Mauguin notation of point symmetry groups. Most interactive computer-aided tools devoted to symmetry deal with molecular symmetry and disregard crystal symmetry, so we have developed a tool that fills the existing gap.
Absence of Critical Points of Solutions to the Helmholtz Equation in 3D
NASA Astrophysics Data System (ADS)
Alberti, Giovanni S.
2016-05-01
The focus of this paper is to show the absence of critical points for the solutions to the Helmholtz equation in a bounded domain $\Omega \subset \mathbb{R}^3$, given by $\mathrm{div}(a \nabla u_\omega^g) - \omega q\, u_\omega^g = 0$ in $\Omega$, $u_\omega^g = g$ on $\partial\Omega$. We prove that for an admissible $g$ there exists a finite set of frequencies $K$ in a given interval and an open cover $\overline{\Omega} = \cup_{\omega \in K} \Omega_\omega$ such that $|\nabla u_\omega^g(x)| > 0$ for every $\omega \in K$ and $x \in \Omega_\omega$. The set $K$ is explicitly constructed. If the spectrum of this problem is simple, which is true for a generic domain $\Omega$, the admissibility condition on $g$ is a generic property.
Lee, Myung W.
2005-01-01
In order to assess the resource potential of gas hydrate deposits in the North Slope of Alaska, 3-D seismic and well data at Milne Point were obtained from BP Exploration (Alaska), Inc. The well-log analysis has three primary purposes: (1) estimate gas hydrate or gas saturations from the well logs; (2) predict P-wave velocity where there is no measured P-wave velocity in order to generate synthetic seismograms; and (3) edit P-wave velocities where degraded borehole conditions, such as washouts, affected the P-wave measurement significantly. Edited/predicted P-wave velocities were needed to map the gas-hydrate-bearing horizons in the complexly faulted upper part of the 3-D seismic volume. The estimated gas-hydrate/gas saturations from the well logs were related to seismic attributes in order to map the regional distribution of gas hydrate inside the 3-D seismic grid. The P-wave velocities were predicted using the modified Biot-Gassmann theory, herein referred to as BGTL, with gas-hydrate saturations estimated from the resistivity logs, porosity, and clay volume content. The effect of gas on velocities was modeled using the classical Biot-Gassmann theory (BGT) with parameters estimated from BGTL.
NASA Astrophysics Data System (ADS)
Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria
2015-04-01
Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete in geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid difficulties of access and to guarantee safe data survey conditions, this fundamental step in all geological/geotechnical studies, applied to the extractive industry and engineering works, has to be replaced by a more expeditious and reliable methodology. Such a methodology will provide a clearer, more direct answer to the needs of rock mass evaluation by mapping the structures present, which will considerably reduce the associated risks (investment, structural dimensioning, security, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although they are comparable to the manually extracted parameters, their quality is inferior to that of the parameters extracted from the TLS point cloud.
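The Schmidt (equal-area) plotting step mentioned above is easy to make concrete. The helper below is a hypothetical sketch, not code from the paper: it projects the pole of a discontinuity plane, given as dip direction/dip in degrees, onto a unit-radius lower-hemisphere equal-area net.

```python
import math

def pole_to_equal_area(dip_dir_deg, dip_deg):
    """Project the pole of a plane (dip direction / dip, degrees) onto a
    lower-hemisphere equal-area (Schmidt) net of unit radius."""
    trend = (dip_dir_deg + 180.0) % 360.0      # pole trend
    plunge = 90.0 - dip_deg                    # pole plunge
    # Equal-area radial distance, normalised so a horizontal pole -> r = 1
    r = math.sqrt(2.0) * math.sin(math.radians(45.0 - plunge / 2.0))
    return (r * math.sin(math.radians(trend)),  # East coordinate
            r * math.cos(math.radians(trend)))  # North coordinate

# A horizontal plane plots at the centre of the net (r = 0) ...
print(round(math.hypot(*pole_to_equal_area(120.0, 0.0)), 6))  # -> 0.0
# ... and a vertical plane on the primitive circle (r = 1)
print(round(math.hypot(*pole_to_equal_area(90.0, 90.0)), 6))  # -> 1.0
```

Clustering many such projected poles (e.g. by kernel density) is what groups measurements into discontinuity sets.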
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation effects. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
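As a minimal illustration of the point-cloud comparison idea (deliberately not the published M3C2 algorithm, which measures change along locally estimated normals with explicit uncertainty handling), change can be measured as the closest-point distance from each reference point to the compared cloud:

```python
import math

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

def cloud_change(reference, compared):
    """Closest-point distance for every reference point -- a very simplified
    stand-in for cloud-to-cloud change detection on native 3D points."""
    return [nearest_distance(p, compared) for p in reference]

before = [(0, 0, 0.0), (1, 0, 0.0), (0, 1, 0.0)]
after_ = [(0, 0, 0.5), (1, 0, 0.5), (0, 1, 0.5)]   # surface raised by 0.5
print(cloud_change(before, after_))                 # -> [0.5, 0.5, 0.5]
```

A production workflow would replace the brute-force search with a spatial index and, as in M3C2, average points within a projection cylinder to separate roughness from true surface change.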
The effect of load on torques in point-to-point arm movements: a 3D model.
Tibold, Robert; Laczko, Jozsef
2012-01-01
A dynamic, 3-dimensional model was developed to simulate slightly restricted (pronation-supination was not allowed) point-to-point movements of the upper limb under different external loads, modeled using 3 objects of distinct masses held in the hand. The model considered structural and biomechanical properties of the arm and measured coordinates of joint positions. The model predicted the torques generated by muscles that are needed to produce the measured rotations in the shoulder and elbow joints. The effect of different object masses on torque profiles, magnitudes, and directions was studied. Correlation analysis has shown that torque profiles in the shoulder and elbow joints are load invariant. The shape of the torque magnitude-time curve is load invariant but is scaled with the mass of the load. Objects with larger masses are associated with a lower deflection of the elbow torque with respect to the sagittal plane. The torque direction-time curve is likewise load invariant and scaled with the mass of the load. The authors propose that the load invariance of the torque magnitude-time curve and torque direction-time curve holds for object-transporting arm movements not restricted to a plane. PMID:22938084
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three-dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows exist. They are based either on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling generally require a high degree of human interaction, and most automated approaches described in the literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for the automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it relies on a reliable 3D segmentation algorithm detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.
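The planar-face assumption at the core of such a workflow reduces, in its simplest form, to fitting planes to point subsets. The sketch below is an illustrative stand-in for the paper's robust 3D segmentation: it fits z = a·x + b·y + c by least squares via the normal equations (adequate for near-horizontal roof faces; vertical faces would need a PCA-based fit).

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3D points, solving the
    3x3 normal equations by Cramer's rule. Illustrative only: robust
    segmentation pipelines add outlier rejection and region growing."""
    n = float(len(points))
    sx = sum(x for x, _, _ in points);  sy = sum(y for _, y, _ in points)
    sz = sum(z for *_, z in points)
    sxx = sum(x * x for x, _, _ in points); syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sxz = sum(x * z for x, _, z in points); syz = sum(y * z for _, y, z in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for j in range(3):                      # replace column j with rhs
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        out.append(det(m) / d)
    return tuple(out)                       # (a, b, c)

# Points lying exactly on the roof plane z = 0.5*x + 2
pts = [(0, 0, 2.0), (2, 0, 3.0), (0, 2, 2.0), (2, 2, 3.0)]
a, b, c = fit_plane(pts)
print(round(a, 6), round(b, 6), round(c, 6))  # -> 0.5 0.0 2.0
```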
Investigating the usage of point spread functions in point source and microsphere localization
NASA Astrophysics Data System (ADS)
Chao, Jerry; Ram, Sripad; Ward, E. Sally; Ober, Raimund J.
2016-03-01
Using a point spread function (PSF) to localize a point-like object, such as a fluorescent molecule or microsphere, represents a common task in single molecule microscopy image data analysis. The localization may differ in purpose depending on the application or experiment, but a unifying theme is the importance of being able to closely recover the true location of the point-like object with high accuracy. We present two simulation studies, both relating to the performance of object localization via the maximum likelihood fitting of a PSF to the object's image. In the first study, we investigate the integration of the PSF over an image pixel, which represents a critical part of the localization algorithm. Specifically, we explore how the fineness of the integration affects how well a point source can be localized, and find that the use of too coarse a step size produces location estimates that are far from the true location, especially when the images are acquired at relatively low magnifications. We also propose a method for selecting an appropriate step size. In the second study, we investigate the suitability of the common practice of using a PSF to localize a microsphere, despite the mismatch between the microsphere's image and the fitted PSF. Using criteria based on the standard errors of the mean and variance, we find the method suitable for microspheres up to 1 μm and 100 nm in diameter, when the localization is performed, respectively, with and without the simultaneous estimation of the width of the PSF.
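The pixel-integration issue in the first study can be reproduced with a toy model. Assuming a Gaussian PSF (a common approximation; the study fits actual image-function models) and a midpoint rule, the coarseness of the integration step visibly biases the per-pixel value for a large, low-magnification pixel:

```python
import math

def gauss_psf(x, y, sigma=1.0):
    """2D Gaussian PSF value, normalised to unit total intensity."""
    return math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

def pixel_integral(x0, y0, size, steps, sigma=1.0):
    """Midpoint-rule integral of the PSF over one square pixel with its
    lower-left corner at (x0, y0); `steps` controls integration fineness."""
    h = size / steps
    total = 0.0
    for i in range(steps):
        for j in range(steps):
            total += gauss_psf(x0 + (i + 0.5) * h, y0 + (j + 0.5) * h, sigma)
    return total * h * h

# A coarse single-sample "integration" of a sigma-sized pixel is clearly
# biased relative to a finely integrated reference; 16 steps is not.
ref = pixel_integral(0.0, 0.0, 2.0, 64)                       # fine reference
print(abs(pixel_integral(0.0, 0.0, 2.0, 1) - ref) > 1e-3)    # -> True
print(abs(pixel_integral(0.0, 0.0, 2.0, 16) - ref) < 1e-3)   # -> True
```

In maximum likelihood fitting, this per-pixel bias propagates directly into the estimated location, which is why step-size selection matters.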
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Vosselman, George
2015-07-01
Point clouds generated from airborne oblique images have become a suitable source for detailed building damage assessment after a disaster event, since they provide the essential geometric and radiometric features of both the roof and façades of the building. However, they often contain gaps that result either from physical damage or from a range of image artefacts or data acquisition conditions. A clear understanding of those reasons, and accurate classification of gap type, are critical for 3D geometry-based damage assessment. In this study, a methodology was developed to delineate buildings from a point cloud and classify the present gaps. The building delineation process was carried out by identifying and merging the roof segments of single buildings from the pre-segmented 3D point cloud. This approach detected 96% of the buildings from a point cloud generated using airborne oblique images. The gap detection and classification methods were tested using two other data sets obtained with Unmanned Aerial Vehicle (UAV) images with a ground resolution of around 1-2 cm. The methods detected all significant gaps and correctly identified the gaps due to damage. The gaps due to damage were identified based on the surrounding damage pattern, applying Gabor wavelets and histogram of gradient orientation features. Two learning algorithms, SVM and Random Forests, were tested for mapping the damaged regions based on radiometric descriptors. The learning model based on Gabor features with Random Forests performed best, identifying 95% of the damaged regions. The generalization performance of the supervised model, however, was less successful: quality measures decreased by around 15-30%.
Polarization Aberrations in Astronomical Telescopes: The Point Spread Function
NASA Astrophysics Data System (ADS)
Breckinridge, James B.; Lam, Wai Sze T.; Chipman, Russell A.
2015-05-01
Detailed knowledge of the point spread function (PSF) image is necessary to optimize astronomical coronagraph masks and to understand potential sources of errors in astrometric measurements. The PSF for astronomical telescopes and instruments depends not only on geometric aberrations and scalar wave diffraction but also on those wavefront errors introduced by the physical optics and the polarization properties of reflecting and transmitting surfaces within the optical system. These vector wave aberrations, called polarization aberrations, result from two sources: (1) the mirror coatings necessary to make the highly reflecting mirror surfaces, and (2) the optical prescription with its inevitable non-normal incidence of rays on reflecting surfaces. The purpose of this article is to characterize the importance of polarization aberrations, to describe the analytical tools to calculate the PSF image, and to provide the background to understand how astronomical image data may be affected. To show the order of magnitude of the effects of polarization aberrations on astronomical images, a generic astronomical telescope configuration is analyzed here by modeling a fast Cassegrain telescope followed by a single 90° deviation fold mirror. All mirrors in this example use bare aluminum reflective coatings and the illumination wavelength is 800 nm. Our findings for this example telescope are: (1) The image plane irradiance distribution is the linear superposition of four PSF images: one for each of the two orthogonal polarizations and one for each of two cross-coupled polarization terms. (2) The PSF image is brighter by 9% for one polarization component compared to its orthogonal state. (3) The PSF images for the two orthogonal linearly polarized components are shifted with respect to each other, causing the PSF image for unpolarized point sources to become slightly elongated (elliptical) with a centroid separation of about 0.6 mas. This is important for both astrometry
A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface
Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue
2015-01-01
Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (Δd), as the change in position of a single tracking point from one sampling time point to another in five human subjects. Δd of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted between the Δd median and the sampling frequency f to predict the trend of Δd with increasing f. The difference in Δd among the subjects and the difference between upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences of incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, Δd decreased with increasing frequency. When the frequency was 60 Hz, Δd nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease Δd further. PMID:26400112
NASA Astrophysics Data System (ADS)
Vauhkonen, J.
2015-03-01
Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6-0.8 points m⁻² and field measurements aggregated at resolutions of 400-900 m². The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R²) with the stem volume considered, both alone (R² = 0.65) and together with other predictors (R² = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R² were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
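The volume bookkeeping behind such filtrations can be sketched as follows. The filtration criterion below (discard tetrahedra with long edges, alpha-shape style) is an illustrative stand-in for the optimized and persistence-based filtrations used in the study; the tetrahedral volume formula itself is standard.

```python
import itertools
import math

def tet_volume(a, b, c, d):
    """Unsigned volume of tetrahedron a-b-c-d: |det([b-a, c-a, d-a])| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def longest_edge(tet):
    return max(math.dist(p, q) for p, q in itertools.combinations(tet, 2))

def filtered_volume(tets, max_edge):
    """Total volume of the simplices that survive a simple edge-length
    filtration -- compact simplices are kept, void-spanning ones culled."""
    return sum(tet_volume(*t) for t in tets if longest_edge(t) <= max_edge)

tets = [((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)),   # compact: kept
        ((0, 0, 0), (9, 0, 0), (0, 9, 0), (0, 0, 9))]   # spans a void: culled
print(filtered_volume(tets, max_edge=2.0))               # -> 0.16666666666666666
```

Sweeping `max_edge` and tracking how summed volume (or topology) changes is, in spirit, what the persistence analysis automates.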
Combination of TLS Point Clouds and 3D Data from Kinect v2 Sensor to Complete Indoor Models
NASA Astrophysics Data System (ADS)
Lachat, E.; Landes, T.; Grussenmeyer, P.
2016-06-01
The combination of data coming from multiple sensors is more and more applied in remote sensing (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, such as time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.
NASA Astrophysics Data System (ADS)
Hu, Xie; Wang, Teng; Liao, Mingsheng
2013-12-01
SAR Interferometry (InSAR) has unique advantages, e.g., all-weather/all-time accessibility, cm-level accuracy and large spatial coverage; however, it can only obtain a one-dimensional measurement along the line-of-sight (LOS) direction. Offset tracking is an important complement for measuring large and rapid displacements in both azimuth and range directions. Here we perform offset tracking on detected point-like targets (PT) by calculating the cross-correlation with a sinc-like template. A complete 3-D displacement field can then be derived using PT offset tracking from a pair of ascending and descending data. The presented case study on the 2010 M7.2 El Mayor-Cucapah earthquake helps us better understand the rupture details.
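A toy version of the offset-tracking step reads as follows. Real pipelines use oversampled FFT correlation and the sinc-like point-target template mentioned above; the exhaustive search over integer shifts below only shows the principle on a synthetic bright target.

```python
def cross_corr(a, b):
    """Zero-mean correlation of two equal-size patches (lists of rows)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    return sum((x - ma) * (y - mb) for x, y in zip(fa, fb))

def track_offset(master, slave, tpl, search):
    """Integer (dy, dx) shift maximising the correlation between a master
    template and shifted slave patches (exhaustive search)."""
    t = [row[:tpl] for row in master[:tpl]]
    best = None
    for dy in range(search + 1):
        for dx in range(search + 1):
            patch = [row[dx:dx + tpl] for row in slave[dy:dy + tpl]]
            score = cross_corr(t, patch)
            if best is None or score > best[0]:
                best = (score, dy, dx)
    return best[1], best[2]

master = [[0.0] * 6 for _ in range(6)]; master[1][1] = 5.0  # bright target
slave  = [[0.0] * 6 for _ in range(6)]; slave[3][2]  = 5.0  # shifted by (2, 1)
print(track_offset(master, slave, 3, 3))                    # -> (2, 1)
```

Repeating this for every detected point-like target in ascending and descending geometries yields the offset fields from which the 3-D displacement is inverted.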
3-D seismic over the Fausse Pointe Field: A case history of acquisition in a harsh environment
Duncan, P.M.; Nester, D.C.; Martin, J.A.; Moles, J.R.
1995-12-31
A 50 square mile 3D seismic survey was successfully acquired over Fausse Pointe Field in the latter half of 1994. The geophysical and logistical challenges of this project were immense. The steep dips and extensive range of target depths required a large shoot area with a relatively fine sampling interval. The surface, while essentially flat, included areas of cane field, crawfish ponds, thick brush, swamp, open lakes and deep canals -- all typical of southern Louisiana. Planning and permitting of the survey began in late 1993. Field operations began in June 1994 and were completed in January 1995. Field personnel numbered 150 at the peak of operations. More than 19,000 crew hours were required to complete the job at a cost of over $5,000,000. The project was completed on time and on budget. The resulting images of the salt dome and surrounding rocks are not only beautiful but are revealing many opportunities for new hydrocarbon development.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
NASA Astrophysics Data System (ADS)
Riquelme, Adrian; Tomas, Roberto; Abellan, Antonio; Cano, Miguel; Jaboyedoff, Michel
2015-04-01
Investigation of fractured rock masses for different geological applications (e.g. fractured reservoir exploitation, rock slope instability, rock engineering, etc.) requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in 3D data acquisition using photogrammetric and/or LiDAR techniques currently allow a quick and accurate characterization of rock mass discontinuities. This contribution presents a methodology for: (a) the use of 3D point clouds for the identification and analysis of planar surfaces outcropping in a rocky slope; (b) the calculation of the spacing between different discontinuity sets; (c) the semi-automatic calculation of the parameters that play a key role in the Slope Mass Rating geomechanical classification. As for part (a) (discontinuity orientation), our proposal identifies and defines the algebraic equations of the different discontinuity sets of the rock slope surface by applying an analysis based on a neighbouring-points coplanarity test. Additionally, the procedure finds principal orientations by Kernel Density Estimation and identifies clusters (Riquelme et al., 2014). As a result of this analysis, each point is classified with a discontinuity set and with an outcrop plane (cluster). Regarding part (b) (discontinuity spacing), our proposal utilises the previously classified point cloud to investigate how different outcropping planes are linked in space. Discontinuity spacing is calculated for each pair of linked clusters within the same discontinuity set, and the spacing values are then analysed by calculating their statistics. Finally, as for part (c), the previous results are used to calculate the parameters F1, F2 and F3 of the Slope Mass Rating geomechanical classification. This analysis is carried out for each discontinuity set using their respective orientation extracted in part (a). The open access tool SMRTool (Riquelme et al., 2014) is then used to calculate F1 to F3 correction
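The neighbouring-points coplanarity test in part (a) boils down to checking distances to a common plane. The helper below is a hypothetical minimal version; the published method works on noisy neighbourhoods with statistical criteria rather than an exact tolerance.

```python
import math

def coplanar(points, tol=1e-6):
    """True if all points lie (within tol) on the plane spanned by the
    first three -- the geometric core of assigning a point to a planar
    discontinuity surface."""
    a, b, c = points[:3]
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],          # plane normal: u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(x * x for x in n))
    return all(abs(sum((p[i] - a[i]) * n[i] for i in range(3))) / norm <= tol
               for p in points[3:])

flat = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1), (0.5, 0.5, 1)]
print(coplanar(flat))                          # -> True
print(coplanar(flat + [(0.5, 0.5, 2.0)]))      # -> False
```

The normal vector n computed here is also what feeds the orientation (dip direction/dip) used downstream for clustering and for the SMR parameters.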
Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh
2016-01-01
Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages:
- It provides easy management of connectivity levels in the resulting voxels.
- It is not dependent on any external library except for primitive types and constructs; therefore, it is easy to integrate them in any application.
- One of the algorithms is implemented in C++ and C for platform independence and efficiency. PMID:27408832
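For the point-cloud case, voxelization itself is a simple binning step. A minimal sketch (the article's surface and curve algorithms additionally control topological connectivity, which this version does not attempt):

```python
import math

def voxelize(points, voxel_size, origin=(0.0, 0.0, 0.0)):
    """Set of occupied voxel indices for a 3D point cloud: each point is
    assigned to the grid cell containing it."""
    return {tuple(math.floor((p[i] - origin[i]) / voxel_size) for i in range(3))
            for p in points}

cloud = [(0.1, 0.2, 0.3), (0.4, 0.4, 0.4), (1.2, 0.1, 0.1)]
print(sorted(voxelize(cloud, 1.0)))   # -> [(0, 0, 0), (1, 0, 0)]
```

Storing occupancy as a set of indices keeps memory proportional to occupied space, which matters for the large-scale built-environment models the article targets.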
Crustal thickness from 3D MCS data collected over the fast-spreading East Pacific Rise at 9°50'N
NASA Astrophysics Data System (ADS)
Aghaei, O.; Nedimović, M. R.; Canales, J.; Carton, H. D.; Carbotte, S. M.; Mutter, J. C.
2011-12-01
We compute, analyze and present crustal thickness variations for a section of the fast-spreading East Pacific Rise (EPR). The area of 3D coverage is between 9°38'N and 9°58'N (~1000 km²), where the documented eruptions of 1990-91 and 2005-06 occurred. The crustal thickness is computed by depth converting the two-way reflection travel times from the seafloor to the Moho. The seafloor and Moho reflections are picked on the migrated stack volume produced from the 3D multichannel seismic (MCS) data collected on R/V Marcus G. Langseth in the summer of 2008 during cruise MGL0812. The crustal velocities used for depth conversion were computed by Canales et al. (2003; 2011) by simultaneous inversion of seismic refractions and wide-angle Moho reflection traveltimes from four ridge-parallel and one ridge-perpendicular ocean bottom seismometer (OBS) profiles for which data were collected during the 1998 UNDERSHOOT experiment. The MCS data analysis included 1D and 2D filtering, offset-dependent spherical divergence correction, surface-consistent amplitude correction, common midpoint (CMP) sort with flex binning, velocity analysis, normal moveout, and CMP stretch mute. The poststack processing included seafloor multiple mute and 3D Kirchhoff poststack time migration. Here we use the crustal thickness and Moho seismic signature variations to detail their relationship with ridge segmentation, crustal age, bathymetry, and on- and off-axis magmatism. On the western flank (Pacific plate) from 9°41' to 9°48', the Moho reflection is strong. From 9°48' to 9°52', the Moho reflection varies from moderate to weak and disappears from ~3 km to ~9 km from the ridge axis. On the eastern flank (Cocos plate) from 9°41' to 9°51', the Moho reflection varies from strong to moderate. From 9°51' to 9°54' the Moho reflection varies from moderate to weak and disappears beneath a region ~3 km to ~9 km from the axis. On the Cocos plate, across-axis crustal thickness variations (5.5-6.2 km) show a
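The depth-conversion step is, at its core, one line of arithmetic: thickness = velocity × (two-way-time difference) / 2. The sketch below uses a single interval velocity in place of the tomographic model from the OBS inversion, and the traveltimes are purely illustrative.

```python
def crustal_thickness(t_seafloor_s, t_moho_s, v_crust_ms):
    """Depth-convert two-way reflection traveltimes (seconds) between the
    seafloor and Moho picks using one interval velocity (m/s). The factor
    of 2 accounts for the down-and-back travel path."""
    return v_crust_ms * (t_moho_s - t_seafloor_s) / 2.0

# 1.85 s of two-way time through ~6.3 km/s crust (illustrative numbers)
print(round(crustal_thickness(3.50, 5.35, 6300.0), 3))   # -> 5827.5 (metres)
```

In the actual workflow this conversion is applied trace by trace with laterally varying velocities, so mapped thickness changes reflect both traveltime and velocity structure.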
Point spread function of the optical needle super-oscillatory lens
Roy, Tapashree; Rogers, Edward T. F.; Yuan, Guanghui; Zheludev, Nikolay I.
2014-06-09
Super-oscillatory optical lenses are known to achieve sub-wavelength focusing. In this paper, we analyse the imaging capabilities of a super-oscillatory lens by studying its point spread function. We experimentally demonstrate that a super-oscillatory lens can generate a point spread function 24% smaller than that dictated by the diffraction limit and has an effective numerical aperture of 1.31 in air. The object-image linear displacement property of these lenses is also investigated.
NASA Astrophysics Data System (ADS)
Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.
2016-06-01
This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate the illumination-invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram-based distribution, whereas the second relies on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
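The colour-space step can be sketched with the standard library. The hue/saturation window below is illustrative only; the paper derives its thresholds from histogram distributions or adaptively rather than from fixed constants.

```python
import colorsys

def corrosion_mask(rgb_points, hue_lo=0.0, hue_hi=0.10, sat_min=0.4):
    """Flag points whose illumination-invariant colour falls in a reddish,
    saturated band. HSV decouples hue/saturation from intensity (V), so
    the test is robust to lighting variation across scans."""
    mask = []
    for r, g, b in rgb_points:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(hue_lo <= h <= hue_hi and s >= sat_min)
    return mask

points = [(150, 60, 40),    # rust-coloured: flagged
          (120, 120, 125)]  # grey hull paint: not flagged
print(corrosion_mask(points))   # -> [True, False]
```

Quantifying the flagged points (area, connected regions) is then what feeds the repair-cost estimate.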
NASA Astrophysics Data System (ADS)
Madeo, Angela; Ferretti, Manuel; dell'Isola, Francesco; Boisse, Philippe
2015-08-01
In this paper, we propose to use a second gradient, 3D orthotropic model for the characterization of the mechanical behavior of thick woven composite interlocks. Such a second-gradient theory is seen to directly account for the out-of-plane bending rigidity of the yarns at the mesoscopic scale, which is, in turn, related to the bending stiffness of the fibers composing the yarns themselves. The yarns' bending rigidity evidently affects the macroscopic bending of the material, and this fact is revealed by presenting a three-point bending test on specimens of composite interlocks. These specimens differ from one another in the relative direction of the yarns with respect to the edges of the sample itself. Both types of specimens are independently seen to take advantage of a second-gradient modeling for the correct description of their macroscopic bending modes. The results presented in this paper are essential for the setting up of a correct continuum framework suitable for the mechanical characterization of composite interlocks. The few second-gradient parameters introduced by the present model are all seen to be associated with peculiar deformation modes of the mesostructure (bending of the yarns) and are determined by an inverse approach. Although the presented results undoubtedly represent an important step toward the complete characterization of the mechanical behavior of fibrous composite reinforcements, more complex hyperelastic second-gradient constitutive laws must be conceived in order to account for the description of all possible mesostructure-induced deformation patterns.
NASA Astrophysics Data System (ADS)
Leunissen, Leonardus H. A.; Gronheid, Roel; Gao, Weimin
2006-06-01
Extreme ultraviolet lithography (EUVL) uses a reflective mask with a multilayer coating. Therefore, the illumination is an off-axis ring field system that is non-telecentric on the mask side. This non-zero angle of incidence combined with the three-dimensional mask topography results in the so-called "shadowing effect". The shadowing causes the printed CD to depend on the orientation as well as on the position in the slit, and it significantly influences the image formation [1,2]. In addition, simulations show that the Bossung curves are asymmetrical due to 3-D mask effects and that their best focus depends on the shadowing angle [3]. Such tilts in the Bossung curves are usually associated with aberrations in the optical system. In this paper, we describe an approach in which the two properties can be disentangled. Bossung curve simulations with varying effective angles of incidence (between 0 and 6 degrees) show that at discrete defocus offsets, the printed linewidth is independent of the incident angle (and thus independent of the shadowing effect), the so-called iso-sciatic (constant shadowing) point. For an ideal optical system this means that the size of a printed feature with a given mask-CD and orientation does not change through slit. With a suitable test structure it is possible to use this effect to distinguish between mask topography and imaging effects from aberrations through slit. The approach was tested with simulations of the following aberrations: spherical aberration, coma, and astigmatism.
NASA Astrophysics Data System (ADS)
Kim, Jaewook; Ghim, Young-Chul; Nuclear Fusion and Plasma Lab Team
2014-10-01
A BES (beam emission spectroscopy) system and an MIR (microwave imaging reflectometer) system installed in KSTAR measure 2D (radial and poloidal) density fluctuations at two different toroidal locations. This offers the possibility of measuring the parallel correlation length of ion-scale turbulence in KSTAR. Due to the lack of measurement points in the toroidal direction and the short separation between the diagnostics compared to the expected parallel correlation length, it is necessary to confirm whether a conventional statistical method, i.e., using a cross-correlation function, is valid for measuring the parallel correlation length. For this reason, we generated synthetic 3D density fluctuation data following a Gaussian random field in a toroidal coordinate system that mimics real density fluctuation data. We measure the correlation length of the synthetic data by fitting a Gaussian function to the cross-correlation function. We observe that there is disagreement between the measured and actual correlation lengths, and that the degree of disagreement is a function of, at least, the correlation length, correlation time, and advection velocity of the synthetic data. We identify the cause of the disagreement and propose an appropriate method to measure the correct correlation length.
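The core of the synthetic-data check can be sketched in one dimension (a minimal illustration, not the authors' code; all names and numbers are invented): generate a Gaussian-correlated random signal with a known correlation length, then estimate a correlation length from the correlation function, as one would for the toroidally separated signals.

```python
import numpy as np

def gaussian_correlated_field(n, corr_len, rng):
    """Synthetic 1-D signal: white noise smoothed with a Gaussian kernel,
    giving a Gaussian correlation function with a known width."""
    x = np.arange(-4 * corr_len, 4 * corr_len + 1)
    kernel = np.exp(-x**2 / (2.0 * corr_len**2))
    field = np.convolve(rng.standard_normal(n + len(kernel)), kernel, mode="valid")
    return (field - field.mean()) / field.std()

def one_over_e_lag(field):
    """Estimate a correlation length as the first lag where the
    normalised autocorrelation drops below 1/e."""
    n = len(field)
    c = np.correlate(field, field, mode="full")[n - 1:]
    c = c / c[0]
    return int(np.nonzero(c < np.exp(-1.0))[0][0])

rng = np.random.default_rng(0)
field = gaussian_correlated_field(20000, corr_len=10.0, rng=rng)
L_est = one_over_e_lag(field)
# The smoothing doubles the correlation variance, so the 1/e lag here is
# ~2*corr_len: even this toy estimator disagrees with the construction
# parameter unless calibrated, which is the kind of bias the abstract studies.
```
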
NASA Astrophysics Data System (ADS)
Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben
2014-09-01
Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is epi-illuminated, background fluorescence from out-of-focus excited molecules reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate the out-of-focus background and to improve the SNR and localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscope that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and under epi-illumination is compared. As expected, the SNR of selective-plane-illuminated DH-PSF microscopy is increased remarkably, so the 3D localization precision of the DH-PSF should be improved significantly. We demonstrate these capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.
Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou
2014-02-01
fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shapes. Based on the high-resolution 3D LiDAR point cloud data of an individual tree, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of the individual tree were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry. PMID:24822422
Evaluation of centricity of optical elements by using a point spread function
Miks, Antonin; Novak, Jiri; Novak, Pavel
2008-06-20
Our work describes a technique for testing the centricity of optical systems by using the point spread function. It is shown that a specific position of an axial object point can be found for every optical element, where the spherical aberration is either zero or minimal. If we image such a point with an optical element, then its point spread function will be almost identical to the point spread function of the diffraction-limited optical system. This consequence can be used for testing the centricity of precisely fabricated optical elements, because we can simply detect asymmetry of the point spread function, which is caused by the decentricity of the tested optical element. One can also use this method for testing optical elements in connection with a cementing process. Moreover, a simple formula is also derived for calculation of the coefficient of third-order coma, which is caused by the decentricity of the optical surface due to a tilt of the surface with respect to the optical axis, and a simple method for detecting the asymmetry of the point spread function is proposed.
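The detection step lends itself to a small numerical sketch (illustrative only; the metric and numbers are not from the paper): a centred, aberration-free spot is symmetric under a 180° rotation, while a decentration-induced coma-like skew breaks that symmetry and can be flagged by a simple residual.

```python
import numpy as np

def asymmetry(psf):
    """Asymmetry metric: L1 residual between the PSF and its 180-degree
    rotation about the array centre (0 for a perfectly symmetric spot)."""
    p = psf / psf.sum()
    return 0.5 * np.abs(p - p[::-1, ::-1]).sum()

y, x = np.mgrid[-32:33, -32:33]
r2 = (x**2 + y**2).astype(float)
ideal = np.exp(-r2 / 50.0)                    # stand-in for a diffraction-limited spot
decentred = ideal * (1.0 + 0.3 * x / 33.0)    # coma-like skew from decentricity

a_ideal = asymmetry(ideal)
a_decentred = asymmetry(decentred)
```

A zero (or near-zero) metric passes the element; a clearly nonzero value indicates decentricity.
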
NASA Astrophysics Data System (ADS)
Kawata, Yoshiyuki; Koizumi, Kohei
2014-10-01
The demand for 3D city modeling has been increasing in many applications, such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, and virtual city tourism inviting future visitors to a virtual city walkthrough. We proposed a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction method was implemented by the integration of all connected regions, which were extracted and extruded from altitude mask images. These mask images were generated from the gray-scale LiDAR image by altitude threshold ranges. In this study we successfully demonstrated the proposed method on a Kanazawa city center scene by applying it to airborne LiDAR point cloud data.
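The mask-and-extrude idea can be sketched as follows (a toy illustration with invented numbers, not the authors' implementation): threshold the height image into altitude bands, then collect the connected regions that would subsequently be extruded into building blocks.

```python
import numpy as np
from collections import deque

def altitude_masks(height, bands):
    """One binary mask per altitude band [lo, hi) of a gridded height image."""
    return [(height >= lo) & (height < hi) for lo, hi in bands]

def connected_regions(mask):
    """4-connected components via BFS flood fill; returns lists of pixels."""
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                q, region = deque([(i, j)]), []
                seen[i, j] = True
                while q:
                    a, b = q.popleft()
                    region.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            q.append((na, nb))
                regions.append(region)
    return regions

# Toy height raster: two 'buildings' on flat ground
height = np.zeros((10, 10))
height[1:4, 1:4] = 12.0    # building A
height[6:9, 5:9] = 25.0    # building B
masks = altitude_masks(height, [(10.0, 20.0), (20.0, 30.0)])
regions_low = connected_regions(masks[0])
regions_high = connected_regions(masks[1])
```

Each region's footprint, extruded to its band's altitude, yields the prism-like blocks of the reconstructed landscape.
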
NASA Astrophysics Data System (ADS)
Meulien Ohlmann, Odile
2013-02-01
Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How does one handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which field? For whom?
A method of PSF generation for 3D brightfield deconvolution.
Tadrous, P J
2010-02-01
This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function. PMID:20096049
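The frequency-domain core of such a PSF extraction can be sketched with a regularised (Wiener-style) division, an illustrative stand-in for the paper's procedure with invented parameters: given an observed slice and an idealized estimate of the thin sample, dividing in Fourier space recovers the blur kernel.

```python
import numpy as np

def extract_psf(observed, idealized, eps=1e-6):
    """Regularised Fourier division: recover h from
    observed = idealized (*) h (circular convolution, no noise term here)."""
    Fo = np.fft.fft2(observed)
    Fi = np.fft.fft2(idealized)
    H = Fo * np.conj(Fi) / (np.abs(Fi) ** 2 + eps)
    return np.real(np.fft.ifft2(H))

rng = np.random.default_rng(1)
y, x = np.mgrid[-16:16, -16:16]
true_psf = np.exp(-(x**2 + y**2) / 8.0)   # 'unknown' blur to be recovered
true_psf /= true_psf.sum()

idealized = rng.random((32, 32)) + 0.1    # stand-in for the idealized thin-sample image
observed = np.real(np.fft.ifft2(np.fft.fft2(idealized) * np.fft.fft2(true_psf)))
est_psf = extract_psf(observed, idealized)
```

In the noise-free case the estimate reproduces the true kernel almost exactly; with real data the regularisation constant `eps` trades noise amplification against fidelity.
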
Hubble Space Telescope Faint Object Camera calculated point-spread functions.
Lyon, R G; Dorband, J E; Hollis, J M
1997-03-10
A set of observed noisy Hubble Space Telescope Faint Object Camera point-spread functions is used to recover the combined Hubble and Faint Object Camera wave-front error. The low-spatial-frequency wave-front error is parameterized in terms of a set of 32 annular Zernike polynomials. The midlevel and higher spatial frequencies are parameterized in terms of a set of 891 polar-Fourier polynomials. The parameterized wave-front error is used to generate accurate calculated point-spread functions, both pre- and post-COSTAR (corrective optics space telescope axial replacement), suitable for image restoration at arbitrary wavelengths. We describe the phase-retrieval-based recovery process and the phase parameterization. Resultant calculated precorrection and postcorrection point-spread functions are shown along with an estimate of both pre- and post-COSTAR spherical aberration. PMID:18250862
Scattering and the Point Spread Function of the New Generation Space Telescope
NASA Technical Reports Server (NTRS)
Schreur, Julian J.
1996-01-01
Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower-petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and having an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10(exp -2) near the center to 10(exp -6) near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called
NASA Astrophysics Data System (ADS)
Beeler, F.; Andersen, O. K.; Scheffler, M.
1990-01-01
We describe spin-unrestricted self-consistent linear muffin-tin-orbital (LMTO) Green-function calculations for Sc, Ti, V, Cr, Mn, Fe, Co, Ni, and Cu transition-metal impurities in crystalline silicon. Both defect sites of tetrahedral symmetry are considered. All possible charge states with their spin multiplicities, magnetization densities, and energy levels are discussed and explained with a simple physical picture. The early transition-metal interstitial and late transition-metal substitutional 3d ions are found to have low spin. This is in conflict with the generally accepted crystal-field model of Ludwig and Woodbury, but not with available experimental data. For the interstitial 3d ions, the calculated deep donor and acceptor levels reproduce all experimentally observed transitions. For substitutional 3d ions, a large number of predictions are offered for testing by future experimental studies.
ERIC Educational Resources Information Center
Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick
2014-01-01
3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
ERIC Educational Resources Information Center
Smith, Garon C.; Hossain, Md Mainul
2016-01-01
BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…
The point spread function of the soft X-ray telescope aboard Yohkoh
NASA Technical Reports Server (NTRS)
Martens, Petrus C.; Acton, Loren W.; Lemen, James R.
1995-01-01
The point spread function of the SXT telescope aboard Yohkoh has been measured in flight configuration in three different X-ray lines at White Sands Missile Range. We have fitted these data with an elliptical generalization of the Moffat function. Our fitting method consists of chi-squared minimization in Fourier space, especially designed for the matching of sharply peaked functions. We find excellent fits, with a reduced chi-squared of order unity or less, for single-exposure point spread functions over most of the CCD. Near the edges of the CCD the fits are less accurate due to vignetting. From fitting results with summation of multiple exposures we find a systematic error in the fitting function of the order of 3% near the peak of the point spread function, which is close to the photon noise for typical SXT images in orbit. We find that the full width at half maximum and the fitting parameters vary significantly with CCD location. However, we also find that point spread functions measured at the same location are consistent with one another within the limit determined by photon noise. A 'best' analytical fit to the PSF as a function of position on the CCD is derived for use in SXT image enhancement routines. As a side result we have found that SXT can determine the location of point sources to about a quarter of a 2.54 arc sec pixel.
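The elliptical Moffat form, and a brute-force chi-squared fit over its two width parameters, can be sketched as follows (a toy illustration with invented values, not the flight-calibration code; the Fourier-space matching refinement is omitted):

```python
import numpy as np

def elliptical_moffat(x, y, amp, ax, ay, beta):
    """Elliptical generalisation of the Moffat profile;
    FWHM along x is 2 * ax * sqrt(2**(1/beta) - 1)."""
    return amp * (1.0 + (x / ax) ** 2 + (y / ay) ** 2) ** (-beta)

y, x = np.mgrid[-20:21, -20:21].astype(float)
data = elliptical_moffat(x, y, 1.0, 3.0, 5.0, 2.5)   # synthetic noise-free PSF

# Coarse chi-squared minimisation over the two width parameters
grid = np.arange(1.0, 8.01, 0.25)
chi2, ax_fit, ay_fit = min(
    (np.sum((data - elliptical_moffat(x, y, 1.0, a, b, 2.5)) ** 2), a, b)
    for a in grid for b in grid
)
```

A real fit would refine all parameters (amplitude, centre, ellipse orientation, beta) with a proper minimiser; the grid search just shows the chi-squared criterion recovering the generating widths.
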
Inks, T.L.; Agena, W.F.
2008-01-01
In February 2007, the Mt. Elbert Prospect stratigraphic test well, Milne Point, North Slope Alaska encountered thick methane gas hydrate intervals, as predicted by 3D seismic interpretation and modeling. Methane gas hydrate-saturated sediment was found in two intervals, totaling more than 100 ft., identified and mapped based on seismic character and wavelet modeling.
Ghosh, Sreya; Preza, Chrysanthe
2015-07-01
A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model were accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the investigated study, an improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used, compared with restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with 42% improved accuracy over the current PSF model prediction. PMID:26154937
NASA Astrophysics Data System (ADS)
Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves
2015-04-01
Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusal surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted, during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.
NASA Astrophysics Data System (ADS)
Poręba, M.; Goulette, F.
2014-12-01
The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: 1) choosing appropriate features which can be reliably extracted; 2) matching conjugate primitives; 3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem is a registration process based on the Iterative Closest Point (ICP) algorithm or one of its variants. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since the automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides a correct pairing with an accuracy of at least 99%, with about 8% of line pairs omitted.
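An accuracy criterion of this kind can be sketched with the classical modified Hausdorff distance of Dubuisson and Jain, applied to sampled line segments (the paper's exact modification may differ; this is an illustrative form with invented data):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance: the larger of the two mean
    nearest-neighbour distances between point sets A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

t = np.linspace(0.0, 1.0, 50)[:, None]
seg_a = t * np.array([[1.0, 0.0, 0.0]])        # sampled unit line segment
seg_b = seg_a + np.array([[0.0, 0.1, 0.0]])    # same segment, offset by 0.1
dist = modified_hausdorff(seg_a, seg_b)
```

Applied to registered versus reference segments, the metric quantifies the residual misalignment left by the estimated transformation.
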
NASA Astrophysics Data System (ADS)
Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich
2000-06-01
The purpose of individual 3D region-of-interest atlas extraction is to automatically define anatomically meaningful regions in 3D MRI images for the quantification of functional parameters (PET, SPECT: rMRGlu, rCBF). The first step of atlas extraction is to automatically classify brain tissue types into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB), and background (BG). A feed-forward neural network with a back-propagation training algorithm is used and compared to other numerical classifiers. It can be trained on a sample from the individual patient data set in question. Classification is done by a 'winner takes all' decision. Automatic extraction of a user-specified number of training points is done in a cross-sectional slice. Background separation is done by simple region growing. The most homogeneous voxels define the region for WM training point extraction (TPE). Non-white-matter and non-background regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is one feature. For each class, spatially uniformly distributed training points are extracted from these regions by a random generator. Simulated and real 3D MRI images are analyzed, and error rates for TPE and classification are calculated. The resulting class images can be analyzed for the extraction of anatomical ROIs.
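The train-then-classify step can be sketched with a deliberately minimal stand-in (a single-layer softmax classifier trained by gradient descent rather than the paper's multi-layer back-propagation network; the 1-D 'intensity' features and class means are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented 1-D intensity features for three well-separated tissue-like classes
X = np.concatenate([rng.normal(m, 0.3, 50) for m in (1.0, 3.0, 5.0)])[:, None]
labels = np.repeat([0, 1, 2], 50)

Xb = np.hstack([X, np.ones_like(X)])     # append a bias feature
onehot = np.eye(3)[labels]
W = np.zeros((2, 3))                     # [feature, bias] -> 3 class scores

for _ in range(5000):                    # plain gradient descent on cross-entropy
    scores = Xb @ W
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.05 * Xb.T @ (p - onehot) / len(X)

pred = (Xb @ W).argmax(axis=1)           # 'winner takes all' decision
accuracy = (pred == labels).mean()
```

The `argmax` over the class scores is exactly the winner-takes-all rule the abstract describes; the paper's network simply produces those scores with hidden layers and voxel-wise features.
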
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analyses that directly exploit the original 3D point clouds under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but it shows limitations such as the inability to track some deformation patterns and the use of a perspective projection that does not maintain the original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the
3D printed diffractive terahertz lenses.
Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław
2016-04-15
A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335
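The axial-PSF computation for a binary phase lens can be sketched with a scalar on-axis Fresnel integral (an illustrative model with invented design parameters, not the authors' design or measurement code): a π-phase Fresnel zone profile for 0.48 mm radiation (0.625 THz) should focus at its design focal length.

```python
import numpy as np

lam = 0.48            # wavelength in mm at 0.625 THz
f = 50.0              # illustrative design focal length, mm
n_zones = 30
R = np.sqrt(n_zones * lam * f)             # aperture radius covering 30 zones

r = np.linspace(1e-6, R, 4000)
dr = r[1] - r[0]
zone = np.floor(r**2 / (lam * f)).astype(int)
t = np.where(zone % 2 == 0, 1.0, -1.0)     # binary pi-phase transmission (+1 / -1)

def axial_intensity(z):
    """On-axis Fresnel intensity for a circularly symmetric pupil t(r)."""
    a = np.sum(t * np.exp(1j * np.pi * r**2 / (lam * z)) * r) * dr
    return np.abs(a) ** 2

zs = np.linspace(20.0, 100.0, 401)
I = np.array([axial_intensity(z) for z in zs])
z_peak = zs[np.argmax(I)]
```

At the design focus every zone contributes in phase, so the axial PSF peaks at `z = f`; aperiodic radial profiles of the kind the paper explores reshape this axial response (e.g. extended depth of focus or two foci).
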
NASA Astrophysics Data System (ADS)
Su, Liang; Lu, Gang; Kenens, Bart; Rocha, Susana; Fron, Eduard; Yuan, Haifeng; Chen, Chang; van Dorpe, Pol; Roeffaers, Maarten B. J.; Mizuno, Hideaki; Hofkens, Johan; Hutchison, James A.; Uji-I, Hiroshi
2015-02-01
The enhancement of molecular absorption, emission and scattering processes by coupling to surface plasmon polaritons on metallic nanoparticles is a key issue in plasmonics for applications in (bio)chemical sensing, light harvesting and photocatalysis. Nevertheless, the point spread functions for single-molecule emission near metallic nanoparticles remain difficult to characterize due to fluorophore photodegradation, background emission and scattering from the plasmonic structure. Here we overcome this problem by exciting fluorophores remotely using plasmons propagating along metallic nanowires. The experiments reveal a complex array of single-molecule fluorescence point spread functions that depend not only on nanowire dimensions but also on the position and orientation of the molecular transition dipole. This work has consequences for both single-molecule regime-sensing and super-resolution imaging involving metallic nanoparticles and opens the possibilities for fast size sorting of metallic nanoparticles, and for predicting molecular orientation and binding position on metallic nanoparticles via far-field optical imaging.
STRONG GRAVITATIONAL LENS MODELING WITH SPATIALLY VARIANT POINT-SPREAD FUNCTIONS
Rogers, Adam; Fiege, Jason D.
2011-12-10
Astronomical instruments generally possess spatially variant point-spread functions, which determine the amount by which an image pixel is blurred as a function of position. Several techniques have been devised to handle this variability in the context of the standard image deconvolution problem. We have developed an iterative gravitational lens modeling code called Mirage that determines the parameters of pixelated source intensity distributions for a given lens model. We are able to include the effects of spatially variant point-spread functions using the iterative procedures in this lensing code. In this paper, we discuss the methods to include spatially variant blurring effects and test the results of the algorithm in the context of gravitational lens modeling problems.
Strong Gravitational Lens Modeling with Spatially Variant Point-spread Functions
NASA Astrophysics Data System (ADS)
Rogers, Adam; Fiege, Jason D.
2011-12-01
UAV-Based Acquisition of 3D Point Cloud - A Comparison of a Low-Cost Laser Scanner and SfM-Tools
NASA Astrophysics Data System (ADS)
Mader, D.; Blaskow, R.; Westfeld, P.; Maas, H.-G.
2015-08-01
The project ADFEX (Adaptive Federative 3D Exploration of Multi Robot System) pursues the goal of developing a time- and cost-efficient system for exploration and monitoring tasks of unknown areas or buildings. A fleet of unmanned aerial vehicles equipped with appropriate sensors (laser scanner, RGB camera, near-infrared camera, thermal camera) was designed and built. A typical operational scenario may include the exploration of the object or area of investigation by a UAV equipped with a laser scanning range finder, generating a rough point cloud in real time to provide an overview of the object on a ground station as well as an obstacle map. These data about the object enable path planning for the robot fleet. Subsequently, the object is captured by an RGB camera mounted on the second flying robot for the generation of a dense and accurate 3D point cloud using structure-from-motion techniques. In addition, the detailed image data serve as the basis for visual damage detection on the investigated building. This paper focuses on our experience with the use of a low-cost, light-weight Hokuyo laser scanner onboard a UAV. The hardware components for laser-scanner-based 3D point cloud acquisition are discussed, problems are demonstrated and analyzed, and a quantitative analysis of the accuracy potential is presented, together with a comparison with structure-from-motion tools.
NASA Astrophysics Data System (ADS)
Xu, Jianxin; Liang, Hong
2013-07-01
Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation, and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into parts. This paper focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm is presented: per-pixel linear interpolation along a loop-line buffer. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling flow is composed of three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed of the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels of the two adjacent textures with various algorithms; however, such algorithms are only partially effective and the fissure line still remains. The uneven gray levels of two adjacent textures are handled by the algorithm in this paper: the fissure line in the overlapping-section textures is eliminated, and the gray transition in the overlapping section becomes smoother.
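The blending idea can be sketched as follows (a minimal illustration with invented gray values, not the paper's loop-line-buffer implementation): in the overlap, each pixel is a linear interpolation between the two textures, so a hard fissure line becomes a smooth ramp.

```python
import numpy as np

def blend_overlap(tex_a, tex_b):
    """Per-pixel linear interpolation across the overlap width: the weight
    for texture A ramps from 1 to 0 left-to-right, removing the gray seam."""
    w = np.linspace(1.0, 0.0, tex_a.shape[1])[None, :]
    return w * tex_a + (1.0 - w) * tex_b

# Overlapping strips whose gray levels disagree (the 'fissure line' case)
a = np.full((4, 5), 100.0)
b = np.full((4, 5), 140.0)
blended = blend_overlap(a, b)
```

The blended strip steps smoothly from one texture's gray level to the other instead of jumping at the seam.
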
3d-3d correspondence revisited
NASA Astrophysics Data System (ADS)
Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr
2016-04-01
In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.
A New Stochastic Modeling of 3-D Mud Drapes Inside Point Bar Sands in Meandering River Deposits
Yin, Yanshu
2013-12-15
The major sedimentary environment of the eastern China oilfields is the meandering river, where mud drapes inside point bar sands occur and are recognized as important factors for underground fluid flow and the distribution of the remaining oil. Detailed architectural analysis, and the related modeling of mud drapes inside a point bar, is practical work for enhancing oil recovery. This paper illustrates a new stochastic modeling of mud drapes inside point bars. The method is a hierarchical strategy composed of three nested steps. Firstly, the model of meandering channel bodies is established using the Fluvsim method. Each channel centerline obtained from Fluvsim is preserved for the next simulation. Secondly, the curvature ratios of each meandering river at various positions are calculated to determine the occurrence of each point bar. The abandoned channel is used to characterize the geometry of each defined point bar. Finally, mud drapes inside each point bar are predicted through random sampling of various parameters, such as the number, horizontal intervals, dip angle, and extended distance of the mud drapes. A dataset collected from a reservoir in the Shengli oilfield of China was used to illustrate the mud-drape building procedure proposed in this paper. The results show that the inner architectural elements of the meandering river are depicted fairly well in the model. More importantly, the high prediction precision from the cross-validation of five drilled wells shows the practical value and significance of the proposed method.
Lee, Larissa J.; Sadow, Cheryl A.; Russell, Anthony; Viswanathan, Akila N.
2009-11-01
Purpose: To compare high dose rate (HDR) point B dose to pelvic lymph node dose using three-dimensional-planned brachytherapy for cervical cancer. Methods and Materials: Patients with FIGO Stage IB-IIIB cervical cancer received 70 tandem HDR applications using CT-based treatment planning. The obturator, external, and internal iliac lymph nodes (LN) were contoured. Per fraction (PF) and combined fraction (CF) right (R), left (L), and bilateral (Bil) nodal doses were analyzed. Point B dose was compared with LN dose-volume histogram (DVH) parameters by paired t test and Pearson correlation coefficients. Results: Mean PF and CF doses to point B were R 1.40 ± 0.14 Gy (CF: 7.00 Gy), L 1.43 ± 0.15 Gy (CF: 7.15 Gy), and Bil 1.41 ± 0.15 Gy (CF: 7.05 Gy). The correlation coefficients between point B and the D100, D90, D50, D2cc, D1cc, and D0.1cc LN doses were all less than 0.7. Only the D2cc to the obturator and the D0.1cc to the external iliac nodes were not significantly different from the point B dose. Significant differences between R and L nodal DVHs were seen, likely related to tandem deviation from irregular tumor anatomy. Conclusions: With HDR brachytherapy for cervical cancer, per fraction nodal dose approximates a dose equivalent to teletherapy. Point B is a poor surrogate for dose to specific nodal groups. Three-dimensionally defined nodal contours during brachytherapy provide a more accurate reflection of delivered dose and should be part of comprehensive planning of the total dose to the pelvic nodes, particularly when there is evidence of pathologic involvement.
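The correlation criterion used above can be illustrated with a short computation; the dose values below are invented for the example and are not the clinical data from the study:

```python
import numpy as np

# Hypothetical per-fraction doses (Gy) for 8 fractions: point B vs. a nodal
# DVH parameter (e.g. obturator D2cc). Illustrative numbers only.
point_b   = np.array([1.38, 1.42, 1.40, 1.45, 1.37, 1.41, 1.44, 1.39])
node_d2cc = np.array([1.10, 1.55, 0.98, 1.00, 1.25, 1.05, 1.70, 1.15])

# Pearson correlation coefficient between the two dose series.
r = np.corrcoef(point_b, node_d2cc)[0, 1]

# The paper's criterion: r < 0.7 means point B poorly tracks the nodal dose.
poor_surrogate = abs(r) < 0.7
print(round(float(r), 3), poor_surrogate)
```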
NASA Astrophysics Data System (ADS)
Torabi, M.; Mousavi G., S. M.; Younesian, D.
2015-03-01
Registration of point clouds is a long-standing challenge in computer vision applications. As an application, the matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem. An error minimization function for registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which is to be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants. Here, a point-to-plane ICP variant is employed. Principal Component Analysis (PCA) is used to obtain the tangent planes. It is thus shown that minimizing the proposed objective function reduces to the point-to-plane ICP variant. We used this algorithm to register point clouds of two partially overlapping train wheel profiles extracted from two viewpoints in 2D. Several synthetic and real 3D point clouds are also studied to evaluate the reliability and rate of convergence of our method compared with other registration methods.
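The two ingredients described, PCA-estimated tangent planes and the point-to-plane squared-distance objective, can be sketched as follows (a toy example under assumed correspondences, not the authors' implementation):

```python
import numpy as np

def pca_normals(points, k=8):
    """Estimate a unit normal at each point from the PCA of its k nearest
    neighbours: the eigenvector of the local covariance with the smallest
    eigenvalue is normal to the fitted tangent plane."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        normals[i] = eigvecs[:, 0]
    return normals

def point_to_plane_error(src, dst, normals):
    """Sum of squared point-to-plane distances: the objective minimized by
    the point-to-plane ICP variant (correspondences assumed known here)."""
    residuals = np.einsum("ij,ij->i", src - dst, normals)
    return float(np.sum(residuals ** 2))

# Toy example: target points on the plane z = 0, source shifted along z.
rng = np.random.default_rng(0)
dst = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)])
src = dst + np.array([0.0, 0.0, 0.1])
n = pca_normals(dst)
err = point_to_plane_error(src, dst, n)
print(err)  # 50 points, each 0.1 off the plane -> about 50 * 0.01 = 0.5
```

A full ICP loop would alternate this error evaluation with closest-point correspondence search and a rigid-transform update.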
Nasehi Tehrani, J; Wang, J; Guo, X; Yang, Y
2014-06-01
Purpose: This study evaluated a new probabilistic non-rigid registration method, coherent point drift (CPD), for real-time 3D markerless registration of lung motion during radiotherapy. Methods: The DIR-Lab 4DCT image datasets (www.dir-lab.com) were used to create 3D boundary element models of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex having three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices forming the lung mesh should share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, we implemented coherent point drift to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to images of ten patients in the DIR-Lab dataset. The normal distribution of vertices relative to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating the displacement vectors and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for a distributed set of vertices inside the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and analyzing possible physiological and anatomical changes during treatment.
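The core of coherent point drift is a Gaussian-mixture E-step assigning soft correspondences between point sets. A minimal sketch of that step, following the standard CPD formulation rather than the authors' code:

```python
import numpy as np

def cpd_posteriors(X, Y, sigma2, w=0.0):
    """E-step of coherent point drift: P[m, n] is the posterior probability
    that target point X[n] was generated by GMM centroid Y[m].
    w is the outlier weight (0 = no outlier model)."""
    N, D = X.shape
    M = Y.shape[0]
    diff2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(axis=2)   # (M, N)
    num = np.exp(-diff2 / (2.0 * sigma2))
    # Constant outlier term from the standard CPD formulation.
    c = (2 * np.pi * sigma2) ** (D / 2) * w / (1 - w) * M / N
    return num / (num.sum(axis=0, keepdims=True) + c)

# Two 2D points per set: each centroid should claim its nearby target.
X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.1, 0.0], [0.9, 0.0]])
P = cpd_posteriors(X, Y, sigma2=0.05)
print(P.round(3))
```

The M-step (not shown) would then update the centroid displacements under the motion-coherence regularization the abstract describes.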
NASA Astrophysics Data System (ADS)
Aghaei, O.; Nedimovic, M. R.; Canales, J.; Carton, H. D.; Carbotte, S. M.; Mutter, J. C.
2010-12-01
We present stack and migrated stack volumes of a fast-spreading center produced from high-resolution 3D multichannel seismic (MCS) data collected in the summer of 2008 over the East Pacific Rise (EPR) at 9°50'N during cruise MGL0812. These volumes give us new insight into the 3D structure of the lower crust and Moho transition zone (MTZ) along and across the ridge axis, and into how this structure relates to ridge segmentation at the spreading axis. The area of 3D coverage is between 9°38'N and 9°58'N (~1000 km2), where the documented eruptions of 1990-91 and 2005-06 occurred. This high-resolution survey has a nominal bin size of 6.25 m in the cross-axis direction and 37.5 m in the along-axis direction. The prestack processing sequence applied to the data includes 1D and 2D filtering to remove low-frequency cable noise, offset-dependent spherical divergence correction to compensate for geometrical spreading, surface-consistent amplitude correction to balance abnormally high/low shot and channel amplitudes, trace editing, velocity analysis, normal moveout (NMO) correction, and CMP mute of stretched far-offset arrivals. The poststack processing includes a seafloor multiple mute to reduce migration noise and poststack time migration. We will also apply primary multiple removal and prestack time migration to the data and compare the results to the migrated stack volume. The poststack and prestack migrated volumes will then be used to detail Moho seismic signature variations and their relationship to ridge segmentation, crustal age, bathymetry, and magmatism. We anticipate that the results will also provide insight into the mantle upwelling pattern, which is actively debated for the study area.
Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun
2015-01-01
Objectives: To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models in centric relation, and to quantitatively evaluate its accuracy. Methods: Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8 m arm was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, from which the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model into centric relation. An observation coordinate system was interactively established. The straight-line distances in X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) between the remaining 7 pairs of center points derived from the contact method and the fitting method were measured and analyzed using a paired t-test. Results: The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test gave p > 0.05 for X and Z and p < 0.05 for Y. Conclusion: By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error in the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133
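The paired t-test used to compare the two measurement methods can be reproduced with a few lines; the distances below are invented for illustration, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for two equal-length samples; significance is then
    read off a t table with len(a) - 1 degrees of freedom."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Hypothetical Y-direction (anterior/posterior) distances (mm) for the 7
# remaining point pairs, contact method vs. fitting method.
contact = [10.12, 9.87, 11.05, 10.44, 9.95, 10.71, 10.30]
fitting = [9.95, 9.70, 10.86, 10.28, 9.80, 10.52, 10.15]
t = paired_t(contact, fitting)
print(round(t, 2))
```

With 6 degrees of freedom, |t| above about 2.45 corresponds to p < 0.05 (two-sided), matching the paper's finding of a significant Y difference.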
Dynamic topology and flux rope evolution during non-linear tearing of 3D null point current sheets
Wyper, P. F.; Pontin, D. I.
2014-10-15
In this work, the dynamic magnetic field within a tearing-unstable three-dimensional current sheet about a magnetic null point is described in detail. We focus on the evolution of the magnetic null points and flux ropes that are formed during the tearing process. Generally, we find that both magnetic structures are created prolifically within the layer and are non-trivially related. We examine how nulls are created and annihilated during bifurcation processes, and describe how they evolve within the current layer. The type of null bifurcation first observed is associated with the formation of pairs of flux ropes within the current layer. We also find that new nulls form within these flux ropes, both following internal reconnection and as adjacent flux ropes interact. The flux ropes exhibit a complex evolution, driven by a combination of ideal kinking and their interaction with the outflow jets from the main layer. The finite size of the unstable layer also allows us to consider the wider effects of flux rope generation. We find that the unstable current layer acts as a source of torsional magnetohydrodynamic waves and dynamic braiding of magnetic fields. The implications of these results to several areas of heliophysics are discussed.
Measurement of Phased Array Point Spread Functions for Use with Beamforming
NASA Technical Reports Server (NTRS)
Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis
2011-01-01
Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently, these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup, and it still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented here investigates direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed under different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that the experimental measurements trend with theory with regard to source offset. The source shows the expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
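For a concrete sense of what a theoretical array PSF looks like, here is a minimal delay-and-sum sketch for a uniform line array under a far-field plane-wave model (an assumption of this example, not the paper's array geometry):

```python
import numpy as np

def line_array_psf(mic_x, freq, look_angles_deg, source_angle_deg=0.0, c=343.0):
    """Theoretical delay-and-sum PSF of a line array: the normalized power
    response, steered over look angles, to a unit far-field source."""
    k = 2 * np.pi * freq / c
    th = np.deg2rad(look_angles_deg)
    th0 = np.deg2rad(source_angle_deg)
    # Residual steering phase for each (look angle, microphone) pair.
    phase = k * mic_x[None, :] * (np.sin(th)[:, None] - np.sin(th0))
    response = np.exp(1j * phase).mean(axis=1)
    return np.abs(response) ** 2

mic_x = np.linspace(-0.5, 0.5, 16)            # 16 mics over a 1 m aperture
angles = np.linspace(-30, 30, 121)
psf = line_array_psf(mic_x, freq=2000.0, look_angles_deg=angles)
print(round(float(psf.max()), 3), angles[int(np.argmax(psf))])
```

The mainlobe peaks at the true source angle with unit response; the sidelobe pattern is exactly what deconvolution methods try to remove from the beam map.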
NASA Astrophysics Data System (ADS)
Vianna Baptista, M. L.
2013-07-01
Integrating different technologies and areas of expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce time in the field, the two-person survey team had to work, over three days, around the continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned in high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks on the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and damage-mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefit for the subsequent restoration project.
NASA Astrophysics Data System (ADS)
Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert
2015-04-01
Shell beds are key features in sedimentary records throughout the Phanerozoic. The interplay between burial rates and population productivity is reflected in distinct degrees of shelliness. Consequently, shell beds may provide information on the various physical processes that led to the accumulation and preservation of hard parts. Many shell beds pass through a complex history of formation, being shaped by more than one factor. In shallow marine settings, the composition of shell beds is often strongly influenced by winnowing, reworking and transport. These processes may cause considerable time averaging and the accumulation of specimens that lived thousands of years apart. In the best case, the environment remained stable during that time span and the mixing does not mask the overall composition. A major obstacle for the interpretation of shell beds, however, is the amalgamation of several depositional units into a single concentration, as is typical for tempestites and tsunamites. Disentangling such mixed assemblages requires a deep understanding of the ecological requirements of the taxa involved - which is achievable for geologically young shell beds with living relatives - and a statistical approach to quantify the contributions of the various death assemblages. Furthermore, it requires an understanding of the sedimentary processes potentially involved in their formation. Here we present the first attempt to describe and decipher such a multi-phase shell bed based on a high-resolution digital surface model (1 mm) combined with ortho-photos at a resolution of 0.5 mm per pixel. Documenting the oyster reef requires precisely georeferenced data; owing to the high redundancy of the point cloud, an accuracy of a few mm was achieved. The shell accumulation covers an area of 400 m2 with thousands of specimens, which were excavated during a three-month campaign at Stetten in Lower Austria. Formed in an Early Miocene estuary of the Paratethys Sea, it is mainly composed
NASA Astrophysics Data System (ADS)
Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran
2016-03-01
We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.
Fu, Hongjun; Hussaini, S. Abid; Wegmann, Susanne; Profaci, Caterina; Daniels, Jacob D.; Herman, Mathieu; Emrani, Sheina; Figueroa, Helen Y.; Hyman, Bradley T.; Davies, Peter; Duff, Karen E.
2016-01-01
3D volume imaging using iDISCO+ was applied to observe the spatial and temporal progression of tau pathology in deep structures of the brain of a mouse model that recapitulates the earliest stages of Alzheimer's disease (AD). Tau pathology was compared at four timepoints, up to 34 months, as it spread through the hippocampal formation and out into the neocortex along an anatomically connected route. Tau pathology was associated with significant gliosis. No evidence for uptake and accumulation of tau by glia was observed. Neuronal cells did appear to have internalized tau, including in extrahippocampal areas, as a small proportion of cells that had accumulated human tau protein did not express detectable levels of human tau mRNA. At the oldest timepoint, mature tau pathology in the entorhinal cortex (EC) was associated with significant cell loss. As in human AD, mature tau pathology in the EC and the presence of tau pathology in the neocortex correlated with cognitive impairment. 3D volume imaging is an ideal technique for easily monitoring the spread of pathology over time in models of disease progression. PMID:27466814
NASA Technical Reports Server (NTRS)
Gelfand, J.; Cochran, W. D.; Smith, W. H.
1977-01-01
We present the results of an analysis of the effects of atmospheric seeing and of instrumental spectral and spatial resolution on the observed variation of absorption-line profiles across the disk of Jupiter. The technique described may be applied equally well to the analysis of observations of any extended astronomical source. These results show the necessity of obtaining accurate point-spread-function information during the course of observations of this nature. We also point out that in order to avoid the uncertainties and ambiguities inherent in attempts at deconvolution of observational data, one must properly convolve the appropriate spatial and spectral resolution functions with the models being tested and then compare the results with the observational data.
Effects of point-spread function on calibration and radiometric accuracy of CCD camera.
Du, Hong; Voss, Kenneth J
2004-01-20
The point-spread function (PSF) of a camera can seriously affect the accuracy of radiometric calibration and measurement. We found that the PSF can produce a 3.7% difference between the apparent measured radiances of two plaques of different sizes under the same illumination. This difference can be removed by deconvolution with the measured PSF. To determine the PSF, many images of a collimated beam from a He-Ne laser are averaged. Since our optical system is focused at infinity, it should focus this source to a single pixel. Although the measured PSF is very sharp, dropping 4 and 6 orders of magnitude at 8 and 100 pixels from the point source, respectively, we show that the effect of the PSF as far as 100 pixels away cannot be ignored without introducing an appreciable error into the calibration. We believe that the PSF should be taken into account in all optical systems to obtain accurate radiometric measurements. PMID:14765928
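The plaque-size effect can be illustrated numerically: convolving uniform plaques with a sharp-cored, long-tailed PSF makes the smaller plaque read darker at its center. The kernel below is a crude stand-in, not the measured PSF:

```python
import numpy as np

def psf_kernel(size, core_sigma=1.0, tail=1e-5):
    """Sharp Gaussian core plus a faint flat tail, normalized to unit sum --
    a crude stand-in for a measured camera PSF with long tails."""
    y, x = np.mgrid[-size:size + 1, -size:size + 1]
    k = np.exp(-(x**2 + y**2) / (2 * core_sigma**2)) + tail
    return k / k.sum()

def blurred_center_value(plaque_half_width, psf):
    """Apparent radiance at the center of a uniform unit-radiance square
    plaque after convolution with the PSF: simply the fraction of PSF
    energy that falls on the plaque."""
    size = psf.shape[0] // 2
    y, x = np.mgrid[-size:size + 1, -size:size + 1]
    on_plaque = (np.abs(x) <= plaque_half_width) & (np.abs(y) <= plaque_half_width)
    return float(psf[on_plaque].sum())

psf = psf_kernel(size=100)
small = blurred_center_value(10, psf)
large = blurred_center_value(90, psf)
print(round(100 * (large - small) / large, 2), "% apparent difference")
```

Even a tail six orders of magnitude below the peak produces a few-percent apparent radiance difference between the plaques, the same order as the 3.7% reported above.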
Point spread function modeling method for x-ray flat panel detector imaging
NASA Astrophysics Data System (ADS)
Zhang, Hua; Shi, Yikai; Huang, Kuidong; Yu, Qingchao
2012-10-01
The flat panel detector (FPD) is widely used as the imaging unit in current X-ray digital radiography (DR) and computed tomography (CT) systems. The point spread function (PSF) is an important indicator of an FPD imaging system and also the basis for image restoration. To address the poor accuracy of FPD PSF measurement with conventional pinhole imaging in DR systems, a new pinhole-based PSF measurement method built on image restoration is proposed in this paper. First, several images collected by pinhole imaging are averaged into one image to reduce noise. Then, the true pinhole image is calculated according to the energy-conservation principle of the point spread. Finally, the PSF of the FPD is obtained through image restoration. On this basis, by fitting the characteristic parameters of the PSF under different scan conditions, a computational model of the PSF is established for arbitrary scan conditions. Experimental results show that the method obtains a more accurate PSF of the FPD, and that the PSF of the same system under any scan conditions can be calculated directly with the PSF model.
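The averaging and normalization steps can be sketched as follows; the background handling and the toy single-pixel pinhole are assumptions for the example, not the paper's restoration procedure:

```python
import numpy as np

def psf_from_pinhole_frames(frames):
    """Average repeated pinhole exposures to suppress noise, subtract the
    background estimated from the image border, and normalize to unit sum
    (energy conservation of the point spread)."""
    avg = np.mean(frames, axis=0)
    # Background level estimated from the border pixels.
    bg = np.median(np.concatenate([avg[0], avg[-1], avg[:, 0], avg[:, -1]]))
    psf = np.clip(avg - bg, 0.0, None)
    return psf / psf.sum()

rng = np.random.default_rng(1)
truth = np.zeros((33, 33))
truth[16, 16] = 1.0        # ideal pinhole: a single bright pixel
# 50 noisy exposures: signal + constant background + Gaussian noise.
frames = [truth * 1000 + 10 + rng.normal(0, 2, truth.shape) for _ in range(50)]
psf = psf_from_pinhole_frames(np.array(frames))
print(round(float(psf[16, 16]), 3))
```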
Comparison and validation of point spread models for imaging in natural waters.
Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A
2008-06-23
It is known that scattering by particulates within natural waters is the main cause of blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. The presented effort first reviews several PSF models, including the introduction of a semi-analytical PSF given the optical properties of the medium: the scattering albedo, the mean scattering angle and the optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., and the numerical integration of analytical forms from Wells as a benchmark of theoretical results. On the experimental side, in addition to the results of Duntley, we validate the above models against measured point spread functions by applying field-measured scattering properties in Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both necessary and sufficient to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images. PMID:18575566
The Effects of Instrumental Elliptical Polarization on Stellar Point Spread Function Fine Structure
NASA Technical Reports Server (NTRS)
Carson, Joseph C.; Kern, Brian D.; Breckinridge, James B.; Trauger, John T.
2005-01-01
We present procedures and preliminary results from a study of the effects of instrumental polarization on the fine structure of the stellar point spread function (PSF). These effects are important to understand because the aberration caused by instrumental polarization in an otherwise diffraction-limited system will likely have severe consequences for extreme high-contrast imaging systems such as NASA's planned Terrestrial Planet Finder (TPF) mission and the proposed NASA Eclipse mission. The report here, describing our efforts to examine these effects, includes two parts: 1) a numerical analysis of the effect of metallic reflection, with some polarization-specific retardation, on a spherical wavefront; 2) an experimental approach for observing this effect, along with some preliminary laboratory results. While the experimental phase of this study requires more fine-tuning to produce meaningful results, the numerical analysis indicates that the inclusion of polarization-specific phase effects (retardation) results in a PSF aberration more severe than the amplitude (reflectivity) effects previously recorded in the literature.
A point pattern model of the spread of foot-and-mouth disease.
Gerbier, G; Bacro, J N; Pouillot, R; Durand, B; Moutou, F; Chadoeuf, J
2002-11-29
The spatial spread of foot-and-mouth disease (FMD) is influenced by several sources of spatial heterogeneity: heterogeneity of exposure to the virus, heterogeneity of the animal density and heterogeneity of the networks formed by contacts between farms. A discrete-space model assuming that farms can be reduced to points is proposed to handle these different factors. The farm-to-farm process of transmission of the infection is studied using point-pattern methodology. Farm management, commercial exchanges, possible airborne transmission, etc. cannot be explicitly taken into account because of lack of data; these factors are instead introduced via surrogate variables such as herd size and distance between farms. The model is built on the calculation of an infectious potential for each farm. This method was applied to the study of the 1967-1968 FMD epidemic in the UK and allowed us to evaluate the spatial variation of the probability of infection during this epidemic. Maximum likelihood estimation was conducted conditional on the absence of data concerning the farms that were not infected during the epidemic. Model parameters were then tested using an approximate conditional-likelihood ratio test. In this case study, the results and their validation are limited by the lack of data, but the model can easily be extended to include other information, such as the effect of wind direction and velocity on airborne spread of the virus or the complex interactions between the locations of farms and herd size. It can also be applied to other diseases where the point approximation is convenient. In the context of increasing animal density in some areas, the model explicitly incorporates the density and known epidemiological characteristics (e.g. incubation period) in the calculation of the probability of FMD infection. Control measures such as vaccination or slaughter can be simply introduced, respectively, as a reduction of the susceptible population or as a
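An infectious-potential model of this general shape can be sketched in a few lines; the kernel form and parameter values are assumptions for illustration, not the fitted model of the paper:

```python
import math

def infectious_potential(farm, infected_farms, beta=1.0, rho=0.4, alpha=2.0):
    """Illustrative infectious potential of a susceptible farm: a sum over
    infected farms of a distance kernel weighted by herd sizes. The kernel
    and parameters are assumptions for this sketch."""
    x, y, size = farm
    pot = 0.0
    for xi, yi, si in infected_farms:
        d = math.hypot(x - xi, y - yi)
        pot += beta * (size * si) ** rho / (1.0 + d) ** alpha
    return pot

def infection_probability(potential):
    # Probability of infection over one time step given the potential.
    return 1.0 - math.exp(-potential)

# Two infected farms (x, y, herd size); compare a near and a far susceptible farm.
infected = [(0.0, 0.0, 200), (5.0, 0.0, 50)]
near = infectious_potential((1.0, 0.0, 100), infected)
far = infectious_potential((20.0, 0.0, 100), infected)
print(round(infection_probability(near), 3), round(infection_probability(far), 3))
```

Vaccination or slaughter would enter simply by removing farms from the susceptible set or down-weighting their herd sizes, as the abstract notes.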
Updated point spread function simulations for JWST with WebbPSF
NASA Astrophysics Data System (ADS)
Perrin, Marshall D.; Sivaramakrishnan, Anand; Lajoie, Charles-Philippe; Elliott, Erin; Pueyo, Laurent; Ravindranath, Swara; Albert, Loïc
2014-08-01
Accurate models of optical performance are an essential tool for astronomers, both for planning scientific observations ahead of time, and for a wide range of data analysis tasks such as point-spread-function (PSF)-fitting photometry and astrometry, deconvolution, and PSF subtraction. For the James Webb Space Telescope, the WebbPSF program provides a PSF simulation tool in a flexible and easy-to-use software package available to the community and implemented in Python. The latest version of WebbPSF adds new support for spectroscopic modes of JWST NIRISS, MIRI, and NIRSpec, including modeling of slit losses and diffractive line spread functions. It also provides additional options for modeling instrument defocus and/or pupil misalignments. The software infrastructure of WebbPSF has received enhancements including improved parallelization, an updated graphical interface, a better configuration system, and improved documentation. We also present several comparisons of WebbPSF simulated PSFs to observed PSFs obtained using JWST's flight science instruments during recent cryovac tests. Excellent agreement to first order is achieved for all imaging modes cross-checked thus far, including tests for NIRCam, FGS, NIRISS, and MIRI. These tests demonstrate that WebbPSF model PSFs have good fidelity to the key properties of JWST's as-built science instruments.
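At its core, PSF simulation of this kind propagates a pupil to the focal plane by Fourier transform. The sketch below shows that core computation for an ideal circular pupil; it does not use the WebbPSF API or JWST's actual segmented aperture:

```python
import numpy as np

def psf_from_pupil(n=256, aperture_frac=0.25):
    """Monochromatic far-field PSF as the squared magnitude of the Fourier
    transform of a circular pupil -- the basic computation behind tools like
    WebbPSF, here with no wavefront error or obscurations."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = (np.hypot(x, y) <= aperture_frac * n / 2).astype(float)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

psf = psf_from_pupil()
peak = np.unravel_index(np.argmax(psf), psf.shape)
print(peak)  # the Airy core lands at the array center
```

Real tools add wavefront-error maps, segment geometry, obscurations and detector sampling on top of this same pupil-to-focal-plane propagation.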
NASA Astrophysics Data System (ADS)
Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich
2000-04-01
Individual region-of-interest atlas extraction consists of two main parts: T1-weighted MRI grayscale images are classified into brain tissue types (gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB), background (BG)), followed by class image analysis to automatically define meaningful ROIs (e.g., cerebellum, cerebral lobes, etc.). The purpose of this algorithm is the automatic detection of training points for neural-network-based classification of brain tissue types. One transaxial slice of the patient data set is analyzed. Background separation is done by simple region growing. A random generator extracts spatially uniformly distributed training points of class BG from that region. For WM training point extraction (TPE), the homogeneity operator is the most important feature. The most homogeneous voxels define the region for WM TPE. They are extracted by analyzing the cumulative histogram of the homogeneity operator response. Assuming a Gaussian gray-value distribution in WM, a random number is used as a probabilistic threshold for TPE. Similarly, non-white-matter and non-background regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is an additional feature. Simulated and real 3D MRI images are analyzed, and error rates for TPE and classification are calculated.
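The background sampling and cumulative-histogram thresholding steps can be sketched as follows; the toy mask and the quantile-based threshold are illustrative assumptions, with segmentation and the homogeneity operator assumed given:

```python
import numpy as np

def sample_training_points(mask, n_points, rng):
    """Spatially uniform random training points drawn from a class region
    (e.g. the background found by region growing)."""
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=n_points, replace=False)
    return list(zip(ys[idx].tolist(), xs[idx].tolist()))

def homogeneity_threshold(response, keep_frac=0.1):
    """Response value below which the `keep_frac` most homogeneous voxels
    lie, read off the cumulative histogram of the operator response."""
    return float(np.quantile(response, keep_frac))

rng = np.random.default_rng(7)
mask = np.zeros((64, 64), dtype=bool)
mask[:16, :] = True                      # toy 'background' region
pts = sample_training_points(mask, 20, rng)
response = rng.uniform(0, 1, size=(64, 64))  # toy homogeneity response
thr = homogeneity_threshold(response, 0.1)
print(len(pts), round(thr, 2))
```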
NASA Astrophysics Data System (ADS)
Nedimovic, M. R.; Aghaei, O.; Carbotte, S. M.; Carton, H. D.; Canales, J. P.
2014-12-01
We measured crustal thickness and mapped Moho transition zone (MTZ) character over an 880 km2 section of the fast-spreading East Pacific Rise (EPR) using the first full 3D multichannel seismic (MCS) dataset collected across a mid-ocean ridge (MOR). The 9°42'-9°57'N area was initially investigated using 3D poststack time migration, which was followed by application of 3D prestack time migration (PSTM) to the whole dataset. This first attempt at applying 3D PSTM to MCS data from a MOR environment resulted in the most detailed reflection images of a spreading center to date. MTZ reflections are for the first time imaged below the ridge axis away from axial discontinuities indicating that Moho is formed at zero age at least at some sections of the MOR system. The average crustal thickness and crustal velocity derived from PSTM are 5920±320 m and 6320±290 m/s, respectively. The average crustal thickness varies little from Pacific to Cocos plate suggesting mostly uniform crustal production in the last ~180 Ka. However, the crust thins by ~400 m from south to north. The MTZ reflections were imaged within ~92% of the study area, with ~66% of the total characterized by impulsive reflections interpreted to originate from a thin MTZ and 26% characterized by diffusive reflections interpreted to originate from a thick MTZ. The MTZ is dominantly diffusive at the southern (9°37.5'-9°40'N) and northern (9°51'-9°57'N) ends of the study area, and it is impulsive in the central region (9°42'-9°51'N). No data were collected between 9°40'N and 9°42'N. More efficient mantle melt extraction is inferred within the central region with greater proportion of the lower crust accreted from the axial magma lens than within the northern and southern sections. This along-axis variation in the crustal accretion style may be caused by interaction between the melt sources for the ridge and the local seamounts, which are present within the northern and southern survey sections. Third
NASA Astrophysics Data System (ADS)
Udphuay, S.; Everett, M. E.; Guenther, T.; Warden, R. R.
2007-12-01
The D-Day invasion site at Pointe du Hoc in Normandy, France, is one of the most important World War II battlefields, and it remains a valuable historic cultural resource today. However, the site is vulnerable to cliff collapses that could endanger the observation post building and U.S. Ranger memorial located just landward of the sea stack, and an anti-aircraft gun emplacement, Col. Rudder's command post, located on the cliff edge about 200 m east of the observation post. A 3-D resistivity tomography incorporating extreme topography is used in this study to provide a detailed site stability assessment, with special attention to these two buildings. Multi-electrode resistivity measurements were made across the cliff face and along the top of the cliff around the two at-risk buildings to map major subsurface fracture zones and void spaces that could indicate possible accumulations and pathways of groundwater. The ingress of acidic groundwater through the underlying carbonate formations enlarges pre-existing tectonic fractures via limestone dissolution and weakens the overall structural integrity of the cliff. The resulting 3-D resistivity tomograms provide diagnostic subsurface resistivity distributions. Resistive zones associated with subsurface void spaces have been located. These void spaces constitute a stability geohazard, as they become significant drainage routes during and after periods of heavy rainfall.
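The abstract does not specify the electrode geometry, but multi-electrode resistivity surveys are commonly built from four-electrode readings. A minimal sketch of the apparent-resistivity calculation, assuming a standard Wenner array (spacing a, measured voltage ΔV, injected current I) purely for illustration:

```python
import math

def wenner_apparent_resistivity(a_m, delta_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array: rho_a = 2*pi*a*dV/I."""
    return 2.0 * math.pi * a_m * delta_v / current_a
```

Inversion codes then fit a 3-D resistivity model whose predicted apparent resistivities match thousands of such readings; resistive anomalies in the model are what get interpreted as voids.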
Weddell, Stephen J; Lambert, Andrew J
2014-12-10
Precise measurement of aberrations within an optical system is essential to mitigate the combined effects of user-generated aberrations for the study of anisoplanatic imaging on optical test benches. The optical system point spread function (PSF) is first defined, and methods to minimize the effects of the optical system are discussed. User-derived aberrations, in the form of low-order Zernike ensembles, are introduced using a liquid crystal spatial light modulator (LC-SLM), and dynamic phase maps are used to study the spatiotemporal PSF. A versatile optical test bench is described, in which Shack-Hartmann and curvature wavefront sensors are used to emulate the effects of wavefront propagation over time from two independent sources. PMID:25608061
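Low-order Zernike ensembles of the kind loaded onto an LC-SLM can be generated as phase maps over a unit pupil. A minimal sketch for the defocus term Z2^0 = sqrt(3)*(2r^2 - 1) on a square grid (the grid size and Noll normalization here are assumptions, not taken from the paper):

```python
import numpy as np

def zernike_defocus(n=64):
    """Phase map of the defocus mode Z2^0 = sqrt(3)*(2r^2 - 1) on a unit pupil."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    phase = np.sqrt(3) * (2 * r2 - 1)
    phase[r2 > 1] = 0.0  # zero the phase outside the unit pupil
    return phase
```

Summing several such modes with time-varying coefficients gives the dynamic phase maps used to emulate evolving turbulence.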
Determination of caustic surfaces using point spread function and ray Jacobian and Hessian matrices.
Lin, Psang Dain
2014-09-10
Existing methods for determining caustic surfaces involve computing either the flux density singularity or the center of curvature of the wavefront. However, such methods rely rather heavily on ray tracing and finite difference methods for estimating the first- and second-order derivative matrices (i.e., the Jacobian and Hessian matrices) of a ray. The main reason is that, previously, analytical expressions for these two matrices were tedious or even impossible to obtain. Accordingly, the present study proposes a robust numerical method for determining caustic surfaces based on a point spread function and the analytical Jacobian and Hessian matrices of a ray previously established by our group. It is shown that the proposed method provides a convenient and computationally straightforward means of determining the caustic surfaces of both simple and complex optical systems without the need for analytical equations, and is substantially different from the two existing methods. PMID:25321667
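For contrast with the analytical matrices the paper advocates, the finite-difference approach it moves away from can be sketched as follows: a caustic lies where the determinant of the ray-mapping Jacobian vanishes (flux density diverges). The toy 2D "fold" mapping below, whose caustic is the line x = 0, is purely illustrative and not from the paper:

```python
import numpy as np

def jacobian(f, p, h=1e-6):
    """Central-difference Jacobian of a ray mapping f: R^2 -> R^2 at point p."""
    p = np.asarray(p, dtype=float)
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (np.asarray(f(p + dp)) - np.asarray(f(p - dp))) / (2 * h)
    return J

# Toy fold mapping: det J = 2x vanishes on the caustic line x = 0.
fold = lambda q: (q[0] ** 2, q[1])
```

The step size h trades truncation error against round-off, which is exactly the robustness issue that analytical Jacobians avoid.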
Point spread function reconstruction from Woofer-Tweeter adaptive optics bench
NASA Astrophysics Data System (ADS)
Keskin, Onur; Conan, Rodolphe; Bradley, Colin
2006-06-01
This paper describes a model-based and experimental evaluation of a point spread function (PSF) reconstruction technique for a Dual Deformable Mirror (DM) Woofer-Tweeter (W/T) Adaptive Optics (AO) system. In the W/T architecture, the woofer is a low-order, high-stroke DM used to compensate for the low-frequency, high-amplitude effects introduced by atmospheric turbulence, while the tweeter is a high-order, low-stroke DM used to compensate for the high-frequency, low-amplitude effects. The dual-DM concept allows the W/T AO system to achieve a high degree of correction of large-amplitude wavefront distortion. The role of the UVic AO bench is to demonstrate the feasibility of closed-loop wavefront control for a W/T AO concept to be used on the science instruments of the Thirty Meter Telescope (TMT).
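One common way to realize the woofer/tweeter split (the paper's exact control law is not given in the abstract) is a modal projection: the component of the wavefront representable in a low-order basis goes to the woofer, and the residual goes to the tweeter. A minimal least-squares sketch, with the choice of basis left as an assumption:

```python
import numpy as np

def split_wavefront(w, low_order_basis):
    """Least-squares split of wavefront w into woofer and tweeter commands.

    low_order_basis: matrix whose columns are low-order modes (e.g., tip,
    tilt, defocus) sampled at the same points as w.
    """
    coeffs, *_ = np.linalg.lstsq(low_order_basis, w, rcond=None)
    woofer = low_order_basis @ coeffs   # low-order, high-stroke part
    tweeter = w - woofer                # high-order residual
    return woofer, tweeter
```

Because the two parts are complementary by construction, the DMs do not fight each other over shared spatial frequencies.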
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip
2015-01-01
Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. Restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on correct estimation of the point-spread function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs into Zernike coefficients to identify valid PSFs. This significantly improves the quality of the image restoration, as revealed by the increased visibility of small details such as small blood vessels and by the absence of restoration artifacts.
Digal, Sanatan; Ray, Rajarshi; Saumia, P S; Srivastava, Ajit M
2013-10-01
We analyze the dynamics of dark brushes connecting point vortices of strength ±1 formed in the isotropic-nematic phase transition of a thin layer of nematic liquid crystal, using a crossed-polarizer setup. The evolution of the brushes is remarkably similar to the evolution of line defects in a three-dimensional nematic liquid crystal system. Even phenomena like the intercommutation of strings are routinely observed in the dynamics of brushes. We test the hypothesis of a duality between the two systems by determining exponents for the coarsening of total brush length with time, as well as for the shrinking of an isolated loop. Our results show scaling behavior for both the brush length and the loop size, with exponents in good agreement with the 3D case of string defects. PMID:24026004
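Scaling exponents of this kind are typically extracted from measured length-versus-time data by a log-log least-squares fit. The following sketch assumes clean power-law data and is not the paper's analysis pipeline:

```python
import numpy as np

def coarsening_exponent(t, length):
    """Estimate the exponent a in length(t) ~ t**a via a log-log linear fit."""
    slope, _intercept = np.polyfit(np.log(t), np.log(length), 1)
    return slope
```

Agreement between the fitted brush-length exponent and the known 3D string-defect exponent is what supports the claimed duality.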