Sample records for high dimensional accuracy

  1. Positional and Dimensional Accuracy Assessment of Drone Images Geo-referenced with Three Different GPSs

    NASA Astrophysics Data System (ADS)

    Cao, C.; Lee, X.; Xu, J.

    2017-12-01

    Unmanned Aerial Vehicles (UAVs), or drones, have been widely used in environmental, ecological and engineering applications in recent years. These applications require assessment of positional and dimensional accuracy. In this study, positional accuracy refers to the accuracy of the latitudinal and longitudinal coordinates of locations on the mosaicked image in reference to the coordinates of the same locations measured by a Global Positioning System (GPS) in a ground survey, and dimensional accuracy refers to the length and height of a ground target. Here, we investigate how the number of Ground Control Points (GCPs) and the accuracy of the GPS used to measure the GCPs affect the positional and dimensional accuracy of a drone 3D model. Results show that using the on-board GPS or a hand-held GPS produces a positional accuracy on the order of 2-9 meters. In comparison, using a differential GPS with high accuracy (30 cm) improves the positional accuracy of the drone model by about 40%. Increasing the number of GCPs can compensate for the uncertainty introduced by low-accuracy GPS equipment. In terms of the dimensional accuracy of the drone model, even with the use of a low-resolution GPS onboard the vehicle, the mean absolute errors are only 0.04 m for height and 0.10 m for length, which is well suited for some applications in precision agriculture and in land survey studies.
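
    A positional-accuracy check of this kind reduces to comparing model coordinates against surveyed GPS coordinates. A minimal sketch (the check-point coordinates below are made up for illustration), using an equirectangular approximation to convert degree offsets to metres:

```python
import numpy as np

# Hypothetical check-point coordinates (deg): drone-model vs. GPS ground survey.
model_pts = np.array([[30.0001, 120.0002], [30.0005, 120.0008]])
gps_pts   = np.array([[30.0000, 120.0000], [30.0004, 120.0006]])

R = 6_371_000.0  # mean Earth radius, m
lat0 = np.deg2rad(gps_pts[:, 0].mean())

# Equirectangular approximation: convert degree offsets to metres.
dlat = np.deg2rad(model_pts[:, 0] - gps_pts[:, 0]) * R
dlon = np.deg2rad(model_pts[:, 1] - gps_pts[:, 1]) * R * np.cos(lat0)

errors = np.hypot(dlat, dlon)         # horizontal error per check point, m
rmse = np.sqrt(np.mean(errors ** 2))  # positional RMSE, m
print(f"RMSE = {rmse:.2f} m")
```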

  2. Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri

    2018-07-01

    This paper deals with the implementation of high order semi-Lagrangian particle methods to handle high dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features for handling large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high dimensional kinetic equations by particle methods.

  3. Individual Patient Diagnosis of AD and FTD via High-Dimensional Pattern Classification of MRI

    PubMed Central

    Davatzikos, C.; Resnick, S. M.; Wu, X.; Parmpi, P.; Clark, C. M.

    2008-01-01

    The purpose of this study is to determine the diagnostic accuracy of MRI-based high-dimensional pattern classification in differentiating between patients with Alzheimer’s Disease (AD), Frontotemporal Dementia (FTD), and healthy controls, on an individual patient basis. MRI scans of 37 patients with AD and 37 age-matched cognitively normal elderly individuals, as well as 12 patients with FTD and 12 age-matched cognitively normal elderly individuals, were analyzed using voxel-based analysis and high-dimensional pattern classification. Diagnostic sensitivity and specificity of spatial patterns of regional brain atrophy found to be characteristic of AD and FTD were determined via cross-validation and via split-sample methods. Complex spatial patterns of relatively reduced brain volumes were identified, including temporal, orbitofrontal, parietal and cingulate regions, which were predominantly characteristic of either AD or FTD. These patterns provided 100% diagnostic accuracy when used to separate AD or FTD from healthy controls. The ability to correctly distinguish AD from FTD averaged 84.3%. All estimates of diagnostic accuracy were determined via cross-validation. In conclusion, AD- and FTD-specific patterns of brain atrophy can be detected with high accuracy using high-dimensional pattern classification of MRI scans obtained in a typical clinical setting. PMID:18474436

  4. Investigation on Selective Laser Melting AlSi10Mg Cellular Lattice Strut: Molten Pool Morphology, Surface Roughness and Dimensional Accuracy

    PubMed Central

    Han, Xuesong; Zhu, Haihong; Nie, Xiaojia; Wang, Guoqing; Zeng, Xiaoyan

    2018-01-01

    AlSi10Mg inclined struts with an angle of 45° were fabricated by selective laser melting (SLM) using different scanning speeds and hatch spacings to gain insight into the evolution of the molten pool morphology, surface roughness, and dimensional accuracy. The results show that the average width and depth of the molten pool, the lower-surface roughness and the dimensional deviation decrease as scanning speed and hatch spacing increase. The upper-surface roughness is found to be almost constant under different processing parameters. The width and depth of the molten pool in the powder-supported zone are larger than those in the solid-supported zone, and the width changes more significantly than the depth. However, if the scanning speed is high enough, the width and depth of the molten pool and the lower-surface roughness remain almost constant while the density is still high. Therefore, high dimensional accuracy and density as well as good surface quality can be achieved simultaneously by using a high scanning speed when building SLM cellular lattice struts. PMID:29518900

  5. High-speed high-accuracy three-dimensional shape measurement using digital binary defocusing method versus sinusoidal method

    NASA Astrophysics Data System (ADS)

    Hyun, Jae-Sang; Li, Beiwen; Zhang, Song

    2017-07-01

    This paper presents our research findings on high-speed high-accuracy three-dimensional shape measurement using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using the DLP projection devices: direct projection of computer-generated 8-bit sinusoidal patterns (a.k.a., the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a., the binary defocusing method). This paper mainly examines their performance on high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and the DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.
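
    The contrast between the two projection techniques can be illustrated numerically: projector defocus acts roughly as a Gaussian low-pass filter, which suppresses the harmonics of a 1-bit square wave and leaves a quasi-sinusoid free of 8-bit quantization error. A sketch under that assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

width, period = 512, 32
x = np.arange(width)

# 8-bit sinusoidal fringe (the "sinusoidal method"); rounding to integer
# levels introduces quantization error.
sinusoid_8bit = np.round(127.5 + 127.5 * np.cos(2 * np.pi * x / period))

# 1-bit binary pattern; projector defocus is modeled here as a Gaussian
# low-pass filter, turning the square wave into a quasi-sinusoid
# (the "binary defocusing method").
binary = np.where(np.cos(2 * np.pi * x / period) >= 0, 255.0, 0.0)
defocused = gaussian_filter(binary, sigma=period / 6, mode='wrap')

# The defocused binary pattern correlates almost perfectly with an
# ideal continuous sinusoid of the same period.
ideal = 127.5 + 127.5 * np.cos(2 * np.pi * x / period)
corr = np.corrcoef(defocused, ideal)[0, 1]
print(f"correlation with ideal sinusoid: {corr:.4f}")
```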

  6. Good Practices for Learning to Recognize Actions Using FV and VLAD.

    PubMed

    Wu, Jianxin; Zhang, Yu; Lin, Weiyao

    2016-12-01

    High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.

  7. The Accuracy of Shock Capturing in Two Spatial Dimensions

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Casper, Jay H.

    1997-01-01

    An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
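
    The observed order of accuracy in such studies is estimated from error norms on successively refined grids. A small sketch with hypothetical error values illustrating the first-order behaviour the paper reports:

```python
import numpy as np

# Observed order of accuracy p from errors on successively refined grids:
# e(h) ~ C h^p  =>  p = log(e_coarse / e_fine) / log(r) with refinement ratio r.
def observed_order(e_coarse, e_fine, r=2.0):
    return np.log(e_coarse / e_fine) / np.log(r)

# Hypothetical error norms downstream of a captured shock: halving h
# only halves the error, i.e. first order regardless of design order.
errors = [4.0e-2, 2.0e-2, 1.0e-2]
for ec, ef in zip(errors, errors[1:]):
    print(f"observed order: {observed_order(ec, ef):.2f}")
```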

  8. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor

    PubMed Central

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-01-01

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, a mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194

  9. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. In tests on UCI datasets, FSS-t-SNE effectively improves the classification accuracy. An experiment was performed with a large-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.

  10. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. In tests on UCI datasets, FSS-t-SNE effectively improves the classification accuracy. An experiment was performed with a large-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
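
    The overall pipeline — score features, keep a subset, embed to 2-D with t-SNE — can be sketched as follows. The paper's feature subset score is its own criterion, so mutual information is used here purely as a stand-in, on a generic benchmark dataset rather than diesel engine signals:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.feature_selection import mutual_info_classif
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X, y = X[:500], y[:500]   # small sample to keep the embedding fast

# Score features and keep a subset before embedding (mutual information
# is a stand-in for the paper's feature subset score criterion).
scores = mutual_info_classif(X, y, random_state=0)
subset = np.argsort(scores)[::-1][:20]

# Visualize the selected-feature data in two-dimensional space.
X2 = TSNE(n_components=2, random_state=0,
          perplexity=30).fit_transform(X[:, subset])
print(X2.shape)
```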

  11. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    PubMed

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    A high dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
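
    A binary-encoded genetic algorithm of the kind described can be sketched generically: each chromosome is a 0/1 mask over features, and fitness is cross-validated accuracy of a classifier trained on the unmasked features. The dataset, classifier, and GA settings below are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat, pop_size, n_gen = X.shape[1], 16, 8

def fitness(mask):
    """Cross-validated accuracy on the unmasked feature subset."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier()
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))
for _ in range(n_gen):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
    cut = rng.integers(1, n_feat, size=pop_size // 2)
    children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])          # one-point crossover
    flip = rng.random(children.shape) < 0.05                   # bit-flip mutation
    children[flip] = 1 - children[flip]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.sum(), "features kept")
```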

  12. Significant improvement in one-dimensional cursor control using Laplacian electroencephalography over electroencephalography

    NASA Astrophysics Data System (ADS)

    Boudria, Yacine; Feltane, Amal; Besio, Walter

    2014-06-01

    Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control requires extensive user training. Furthermore, EEG has a low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either ‘left’ or ‘right’ hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The results show a significant improvement in accuracy using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.

  13. A Multicriteria Approach to Find Predictive and Sparse Models with Stable Feature Selection for High-Dimensional Data.

    PubMed

    Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel

    2017-01-01

    Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is important not only to find a model with high predictive accuracy, but also that the model uses only few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. We also find that for assessing stability it is most important that a measure corrects for chance agreement and for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
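
    The recommended stability measure can be sketched directly: represent each resample's feature selection as a binary indicator vector and average the pairwise Pearson correlations (the toy selections below are made up for illustration):

```python
import numpy as np

def selection_stability(masks):
    """Mean pairwise Pearson correlation of binary selection vectors."""
    masks = np.asarray(masks, dtype=float)
    corrs = []
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            corrs.append(np.corrcoef(masks[i], masks[j])[0, 1])
    return float(np.mean(corrs))

# Identical selections across 3 resamples -> perfectly stable.
stable   = [[1, 1, 0, 0, 0]] * 3
# Largely disjoint selections -> unstable (correlation near or below 0).
shifting = [[1, 1, 0, 0, 0], [0, 0, 1, 1, 0], [0, 1, 0, 0, 1]]
print(selection_stability(stable))
print(selection_stability(shifting))
```

Because the Pearson correlation is computed on centered indicator vectors, a selection that merely picks many features by chance does not score as stable, which is the correction-for-chance property the paper highlights.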

  14. Dimensional changes of acrylic resin denture bases: conventional versus injection-molding technique.

    PubMed

    Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam

    2014-07-01

    Acrylic resin denture bases undergo dimensional changes during polymerization. Injection-molding techniques are reported to reduce these changes and thereby improve the physical properties of denture bases. The aim of this study was to compare dimensional changes of specimens processed by conventional and injection-molding techniques. SR-Ivocap Triplex Hot resin was used for the conventional pressure-packed technique and SR-Ivocap High Impact for the injection-molding technique. After processing, all specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using the t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. After each water storage period, the acrylic specimens produced by injection exhibited smaller dimensional changes than those produced by the conventional technique. Curing shrinkage was compensated by water sorption, with longer water storage decreasing dimensional changes. Within the limitations of this study, dimensional changes of acrylic resin specimens were influenced by the molding technique used, and the SR-Ivocap injection procedure exhibited higher dimensional accuracy than conventional molding.

  15. Dimensional Changes of Acrylic Resin Denture Bases: Conventional Versus Injection-Molding Technique

    PubMed Central

    Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam

    2014-01-01

    Objective: Acrylic resin denture bases undergo dimensional changes during polymerization. Injection-molding techniques are reported to reduce these changes and thereby improve the physical properties of denture bases. The aim of this study was to compare dimensional changes of specimens processed by conventional and injection-molding techniques. Materials and Methods: SR-Ivocap Triplex Hot resin was used for the conventional pressure-packed technique and SR-Ivocap High Impact for the injection-molding technique. After processing, all specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using the t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. Results: After each water storage period, the acrylic specimens produced by injection exhibited smaller dimensional changes than those produced by the conventional technique. Curing shrinkage was compensated by water sorption, with longer water storage decreasing dimensional changes. Conclusion: Within the limitations of this study, dimensional changes of acrylic resin specimens were influenced by the molding technique used, and the SR-Ivocap injection procedure exhibited higher dimensional accuracy than conventional molding. PMID:25584050

  16. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    PubMed

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

    In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term representing an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables that are coarser than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  17. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  18. A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers

    NASA Astrophysics Data System (ADS)

    Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang

    1990-02-01

    In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed, which includes a new type of three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.

  19. NLO renormalization in the Hamiltonian truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as the "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as the result of exactly integrating out a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
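
    The truncation idea itself is easy to illustrate in its quantum-mechanics analogue (not the paper's 2-D field theory): diagonalize an anharmonic Hamiltonian in a truncated harmonic-oscillator basis and watch the ground-state energy converge as the cutoff grows. A minimal sketch:

```python
import numpy as np

# Quantum-mechanics analogue of Hamiltonian truncation: diagonalize
# H = p^2/2 + x^2/2 + g x^4 in a truncated harmonic-oscillator basis.
def ground_energy(g, nmax):
    n = np.arange(nmax)
    # annihilation operator a|n> = sqrt(n)|n-1> in the occupation basis
    a = np.diag(np.sqrt(n[1:]), k=1)
    x = (a + a.T) / np.sqrt(2.0)          # position operator
    h0 = np.diag(n + 0.5)                 # free Hamiltonian
    H = h0 + g * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H)[0]

# Accuracy is limited only by the truncation level nmax.
for nmax in (10, 20, 40):
    print(nmax, ground_energy(1.0, nmax))
```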

  20. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
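
    The kernel-PCA stage of such a pipeline can be sketched with scikit-learn, whose KernelPCA supports pre-image recovery via fit_inverse_transform (a kernel-ridge pre-image, rather than the paper's fixed-point iteration). The toy periodic trajectory and the naive linear extrapolation below stand in for the paper's surface data and predictor:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
# Stand-in "high-dimensional states": a nonlinear embedding of a 1-D cycle.
X = np.column_stack([np.sin(t + d) for d in np.linspace(0, 1, 50)])
X += 0.01 * rng.normal(size=X.shape)

# Learn a low-dimensional feature manifold; fit_inverse_transform enables
# pre-image recovery back to the original state space.
kpca = KernelPCA(n_components=2, kernel='rbf',
                 fit_inverse_transform=True, alpha=1e-3)
Z = kpca.fit_transform(X)

# One-step-ahead prediction on the manifold (naive linear extrapolation
# stands in for the paper's predictor), then pre-image estimation.
z_pred = Z[-1] + (Z[-1] - Z[-2])
x_pred = kpca.inverse_transform(z_pred.reshape(1, -1))

rmse = np.sqrt(np.mean((x_pred[0] - X[-1]) ** 2))
print(f"pre-image RMSE vs last observed state: {rmse:.3f}")
```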

  1. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which gives RFs poor accuracy when working with high-dimensional data. In addition, RFs are biased in the feature selection process, favoring multivalued features. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forests in both accuracy and AUC.
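
    The first step of this idea — screening out uninformative features by p-value before growing the forest — can be sketched with a univariate F-test as a stand-in for the paper's assessment, on a generic dataset:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Drop uninformative features by p-value (univariate F-test here is a
# stand-in for the paper's p-value assessment).
_, pvals = f_classif(X, y)
keep = pvals < 0.05
print(f"kept {keep.sum()} of {keep.size} features")

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_all = cross_val_score(rf, X, y, cv=5).mean()
acc_kept = cross_val_score(rf, X[:, keep], y, cv=5).mean()
print(f"all features: {acc_all:.3f}, filtered: {acc_kept:.3f}")
```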

  2. A contrastive study on the influences of radial and three-dimensional satellite gravity gradiometry on the accuracy of the Earth's gravitational field recovery

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan

    2012-10-01

    The accuracy of the Earth's gravitational field measured from the gravity field and steady-state ocean circulation explorer (GOCE), up to degree 250, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is compared based on an analytical error model and on numerical simulation, respectively. Firstly, new analytical error models of the cumulative geoid height, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, are established. Up to degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2½ times higher than that measured by the three-dimensional gravity gradient Vij. Secondly, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij by numerical simulation. The results show that when the measurement error of the gravity gradient is 3 × 10^-12/s^2, the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the accuracies of the Earth's gravitational field recovery show no substantial difference in order of magnitude between the radial and three-dimensional gravity gradients. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10^-13/s^2-10^-15/s^2 for precisely producing the next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.

  3. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  4. Investigation of Portevin-Le Chatelier effect in 5456 Al-based alloy using digital image correlation

    NASA Astrophysics Data System (ADS)

    Cheng, Teng; Xu, Xiaohai; Cai, Yulong; Fu, Shihua; Gao, Yue; Su, Yong; Zhang, Yong; Zhang, Qingchuan

    2015-02-01

    A variety of experimental methods have been proposed for studying the Portevin-Le Chatelier (PLC) effect, but they have mainly focused on in-plane deformation. To achieve high-accuracy measurement, three-dimensional digital image correlation (3D-DIC) was employed in this work to investigate the PLC effect in a 5456 Al-based alloy. The temporal and spatial evolution of deformation over the full field of the specimen surface was observed, the large deformation associated with localized necking was determined experimentally, and the distribution of out-of-plane displacement over the loading procedure was also obtained. Furthermore, the measurement accuracy of two-dimensional digital image correlation (2D-DIC) was compared with that of 3D-DIC. Owing to a theoretical restriction, the measurement accuracy of 2D-DIC decreases as deformation increases; a maximum discrepancy of about 20% relative to 3D-DIC was observed in this work. 3D-DIC is therefore essential for high-accuracy investigation of the PLC effect.

  5. Derivation of an artificial gene to improve classification accuracy upon gene selection.

    PubMed

    Seo, Minseok; Oh, Sejong

    2012-02-01

    Classification analysis has developed continuously since 1936. The field has advanced through the development of classifiers such as KNN, ANN, and SVM, as well as through data preprocessing. Feature (gene) selection is required before classification for very high dimensional data such as microarrays. The goal of feature selection is to choose a subset of informative features that reduces processing time and yields higher classification accuracy. In this study, we devised an artificial gene making (AGM) method for microarray data to improve classification accuracy. The artificial gene is derived from the whole microarray dataset and combined with the result of gene selection for classification analysis. We experimentally confirmed a clear improvement in classification accuracy after inserting the artificial gene. The artificial gene worked well with popular feature (gene) selection algorithms and classifiers, and the proposed approach can be applied to any type of high dimensional dataset. Copyright © 2011 Elsevier Ltd. All rights reserved.
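    The abstract does not specify how the artificial gene is computed, so the sketch below uses one plausible stand-in: the first principal-component score of the whole dataset, appended after a simple variance-based gene selection, with a nearest-centroid classifier. The data, selection rule, and classifier are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "microarray": 100 samples x 200 genes, two classes,
# first 10 genes informative (illustrative data only).
n, g = 100, 200
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, g))
X[y == 1, :10] += 2.0

def select_genes(X, k):
    """Filter-style gene selection: keep the k highest-variance genes."""
    order = np.argsort(X.var(axis=0))[::-1]
    return order[:k]

def artificial_gene(X):
    """One artificial gene derived from the WHOLE dataset: here its first
    principal-component score (a stand-in for the paper's AGM step)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

def nearest_centroid_accuracy(F, y):
    """Leave-one-out nearest-centroid accuracy on feature matrix F."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = F[mask & (y == 0)].mean(axis=0)
        c1 = F[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(F[i] - c1) < np.linalg.norm(F[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

genes = select_genes(X, 20)
F = np.column_stack([X[:, genes], artificial_gene(X)])  # selected + artificial
acc = nearest_centroid_accuracy(F, y)
```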

  6. Uniform high order spectral methods for one and two dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Shu, Chi-Wang

    1991-01-01

    Uniform high order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of the Essentially Non-Oscillatory (ENO) polynomial interpolations into the spectral methods. The authors present numerical results for the inviscid Burgers' equation, and for the one dimensional Euler equations including the interactions between a shock wave and density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two dimensional shock wave and a rotating vortex is simulated.

  7. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    PubMed

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, and we thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
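    A minimal sketch of an IPCW concordance estimator of the kind discussed above, assuming Uno-type weights 1/G(T)^2 with G the Kaplan-Meier estimate of the censoring distribution. Tie handling, truncation times, and the paper's asymptotic refinements are omitted.

```python
import numpy as np

def censoring_survival(times, events):
    """Kaplan-Meier estimate G(t) of the censoring distribution
    (censoring, i.e. event indicator 0, treated as the event)."""
    order = np.argsort(times)
    t, d = times[order], 1 - events[order]
    surv, prob = [], 1.0
    for u in np.unique(t):
        at_risk = np.sum(t >= u)
        censored = np.sum((t == u) & (d == 1))
        prob *= 1.0 - censored / at_risk
        surv.append((u, prob))
    def G(x):
        p = 1.0
        for u, s in surv:
            if u < x:          # left-continuous: use G(t-) in the weights
                p = s
        return p
    return G

def ipcw_cindex(times, events, risk):
    """IPCW (Uno-type) concordance: usable pairs weighted by 1/G(T_i)^2."""
    G = censoring_survival(times, events)
    num = den = 0.0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue            # only observed events can anchor a pair
        w = 1.0 / G(times[i]) ** 2
        for j in range(n):
            if times[j] > times[i]:
                den += w
                num += w * (risk[i] > risk[j])
    return num / den

# With no censoring, G = 1 everywhere and this reduces to Harrell's c.
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 1])
risk = np.array([4.0, 3.0, 2.0, 1.0])   # higher risk, shorter survival
c = ipcw_cindex(times, events, risk)    # perfect ordering: c = 1
```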

  8. The use of a 3D laser scanner using superimpositional software to assess the accuracy of impression techniques.

    PubMed

    Shah, Sinal; Sundaram, Geeta; Bartlett, David; Sherriff, Martyn

    2004-11-01

    Several studies have compared the dimensional accuracy of different elastomeric impression materials. Most have used two-dimensional measuring devices, which fail to account for the dimensional changes that occur along a three-dimensional surface. The aim of this study was to compare the dimensional accuracy of an impression technique using a polyether material (Impregum) and a vinyl polysiloxane material (President) using a laser scanner with three-dimensional superimposition software. Twenty impressions, 10 with polyether and 10 with addition silicone, of a stone master model resembling a dental arch containing three acrylic posterior teeth were cast in orthodontic stone. One plastic tooth was prepared for a metal crown. The master model and the casts were digitised with the non-contacting laser scanner to produce a 3D image. 3D surface-viewer software superimposed the master model on each stone replica and the differences between the images were analysed. The mean difference between the model and the stone replica was 0.072 mm (SD 0.006) for Impregum and 0.097 mm (SD 0.005) for the silicone; this difference was statistically significant (p=0.001). Both impression materials provided an accurate replica of the prepared teeth, supporting the view that these materials are highly accurate.

  9. Dimensional Accuracy of Hydrophilic and Hydrophobic VPS Impression Materials Using Different Impression Techniques - An Invitro Study

    PubMed Central

    Pilla, Ajai; Pathipaka, Suman

    2016-01-01

    Introduction The dimensional stability of an impression material can influence the accuracy of the final restoration. Vinyl polysiloxane (VPS) impression materials are the most frequently used impression materials in fixed prosthodontics. Because VPS is hydrophobic when poured with gypsum products, manufacturers have added intrinsic surfactants and marketed the result as hydrophilic VPS; these hydrophilic VPS materials show increased wettability with gypsum slurries. VPS is available in viscosities ranging from very low to very high for use with different impression techniques. Aim To compare the dimensional accuracy of hydrophilic VPS and hydrophobic VPS using monophase, one-step and two-step putty wash impression techniques. Materials and Methods To test the dimensional accuracy of the impression materials, a stainless steel die was fabricated as prescribed by ADA specification no. 19 for elastomeric impression materials. A total of 60 impressions were made. The materials were divided into two groups, Group 1 hydrophilic VPS (Aquasil) and Group 2 hydrophobic VPS (Variotime), each further divided into three subgroups A, B and C for the monophase, one-step and two-step putty wash techniques, with 10 samples in each subgroup. The dimensional accuracy of the impressions was evaluated after 24 hours using a vertical profile projector with a lens magnification range of 20X-125X. The data were analysed using one-way ANOVA, the post-hoc Tukey HSD test and unpaired t-tests for mean comparisons between groups. Results The three impression techniques (monophase, 1-step and 2-step putty wash) produced significant differences in dimensional accuracy between the hydrophilic VPS and hydrophobic VPS materials. One-way ANOVA showed that the mean dimensional change and SD for hydrophilic VPS varied between 0.16% and 0.56%, a low range suggesting that hydrophilic VPS was satisfactory with all three impression techniques. 
The mean dimensional change and SD for hydrophobic VPS relative to the standard steel die were much higher with the monophase technique, and smaller though still elevated with the 1-step and 2-step techniques (p<0.05). Unpaired t-tests showed that hydrophilic VPS was more accurate than hydrophobic VPS with the 1-step and 2-step impression techniques. Conclusion Within the limitations of this study, it can be concluded that hydrophilic vinyl polysiloxane was more dimensionally accurate than hydrophobic vinyl polysiloxane using the monophase, one-step and two-step putty wash impression techniques under moist conditions. PMID:27042587

  10. Interior radiances in optically deep absorbing media. I - Exact solutions for one-dimensional model.

    NASA Technical Reports Server (NTRS)

    Kattawar, G. W.; Plass, G. N.

    1973-01-01

    An exact analytic solution to the one-dimensional scattering problem with arbitrary single scattering albedo and arbitrary surface albedo is presented. Expressions are given for the emergent flux from a homogeneous layer, the internal flux within the layer, and the radiative heating. A comparison of these results with the values calculated from the matrix operator theory indicates an exceedingly high accuracy. A detailed study is made of the error in the matrix operator results and its dependence on the accuracy of the starting value.

  11. Using learning automata to determine proper subset size in high-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz

    2017-03-01

    In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Given the difficulty of dimension reduction in high-dimensional spaces, FSLA's multi-objective goal is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and its efficiency. First, the feature list is sorted using an existing weighting function, and subsets of the sorted list of different sizes are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification; the results confirm its promising performance in attaining the identified goal.
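    The abstract does not give FSLA's exact update rule or fitness function, so the sketch below shows a generic linear reward-inaction (L_RI) automaton choosing among candidate subset sizes, with a mock binary fitness standing in for the accuracy/efficiency trade-off evaluation. Everything here is illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Candidate subset sizes drawn from a pre-ranked (filter-weighted) feature list.
sizes = [50, 100, 200, 400]

def fitness(action):
    """Mock environment response: reward 1 when the classifier trained on
    this subset size meets the accuracy/efficiency trade-off, else 0.
    Here action 1 (size 100) is best purely by construction."""
    return 1 if action == 1 else 0

# Linear reward-inaction (L_RI) automaton over the candidate sizes.
p = np.full(len(sizes), 1.0 / len(sizes))   # action probabilities
lam = 0.05                                  # learning rate
for _ in range(3000):
    a = rng.choice(len(sizes), p=p)
    if fitness(a) == 1:                     # reward: reinforce action a
        p = (1.0 - lam) * p
        p[a] += lam
    # on penalty, L_RI leaves the probabilities unchanged
    p /= p.sum()                            # guard against rounding drift

best_size = sizes[int(np.argmax(p))]        # the automaton's chosen subset size
```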

  12. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    PubMed

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data are also high-dimensional. Skewed data appear often in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of Accuracy, F-measure, G-mean and AUC evaluation criteria; it can therefore be regarded as an effective and efficient tool for high-dimensional and imbalanced biomedical data.
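    A minimal sketch of asymmetric bagging with feature subspaces: every bag keeps all minority samples, bootstraps an equal number of majority samples, and trains a base classifier on a random feature subset. The paper's FSS strategy is not specified in the abstract, so a plain random subspace stands in for it, and a nearest-centroid classifier replaces the SVM to keep the sketch self-contained. Evaluation is on the training data (resubstitution), for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Imbalanced synthetic data: 200 majority (class 0), 20 minority (class 1),
# 60 features, first 10 informative (illustrative data only).
n_maj, n_min, d = 200, 20, 60
X = np.vstack([rng.normal(size=(n_maj, d)),
               rng.normal(size=(n_min, d))
               + np.r_[np.full(10, 2.0), np.zeros(d - 10)]])
y = np.r_[np.zeros(n_maj, int), np.ones(n_min, int)]

def centroid_classifier(Xtr, ytr):
    """Nearest-centroid base learner (stand-in for the paper's SVM)."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return lambda Z: (np.linalg.norm(Z - c1, axis=1)
                      < np.linalg.norm(Z - c0, axis=1)).astype(int)

def as_bagging_predict(X, y, Xtest, n_bags=25, n_feats=20):
    """Asymmetric bagging with random feature subspaces and majority vote."""
    minority = np.where(y == 1)[0]
    majority = np.where(y == 0)[0]
    votes = np.zeros(len(Xtest))
    for _ in range(n_bags):
        maj = rng.choice(majority, size=len(minority), replace=True)
        idx = np.r_[minority, maj]            # balanced bag
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        clf = centroid_classifier(X[idx][:, feats], y[idx])
        votes += clf(Xtest[:, feats])
    return (votes > n_bags / 2).astype(int)

pred = as_bagging_predict(X, y, X)
# G-mean: geometric mean of per-class recalls, a standard imbalance metric.
rec1 = np.mean(pred[y == 1] == 1)
rec0 = np.mean(pred[y == 0] == 0)
gmean = np.sqrt(rec0 * rec1)
```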

  13. Surface detail reproduction and dimensional accuracy of molds: influence of disinfectant solutions and elastomeric impression materials.

    PubMed

    Guiraldo, Ricardo D; Berger, Sandrine B; Siqueira, Ronaldo Mt; Grandi, Victor H; Lopes, Murilo B; Gonini-Júnior, Alcides; Caixeta, Rodrigo V; de Carvalho, Rodrigo V; Sinhoreti, Mário Ac

    2017-04-01

    This study compared the surface detail reproduction and dimensional accuracy of molds after disinfection using 2% sodium hypochlorite, 2% chlorhexidine digluconate or 0.2% peracetic acid to those of molds that were not disinfected, for four elastomeric impression materials: polysulfide (Light Bodied Permlastic), polyether (Impregum Soft), polydimethylsiloxane (Oranwash L) and polyvinylsiloxane (Aquasil Ultra LV). The molds were prepared on a matrix by applying pressure, using a perforated metal tray. The molds were removed following polymerization and either disinfected (by soaking in one of the solutions for 15 minutes) or not disinfected. The samples were thus divided into 16 groups (n=5). Surface detail reproduction and dimensional accuracy were evaluated using optical microscopy to assess the 20-μm line over its entire 25 mm length. The dimensional accuracy results (%) were subjected to analysis of variance (ANOVA) and the means were compared by Tukey's test (α=5%). The 20-μm line was completely reproduced by all elastomeric impression materials, regardless of disinfection procedure. There was no significant difference between the control group and molds disinfected with peracetic acid for the elastomeric materials Impregum Soft (polyether) and Aquasil Ultra LV (polyvinylsiloxane). The high-level disinfectant peracetic acid would therefore be the disinfectant of choice. Sociedad Argentina de Investigación Odontológica.

  14. High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.

  15. A novel quantitative analysis method of three-dimensional fluorescence spectra for vegetable oils contents in edible blend oil

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei

    2015-04-01

    Edible blend oil is a mixture of vegetable oils. A qualifying blend oil can meet the daily human need for the two essential fatty acids and thus achieve balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents in an edible blend oil determine its nutritional components; a high-precision quantitative method to detect these contents is therefore necessary to ensure balanced nutrition. Three-dimensional fluorescence spectroscopy offers high selectivity, high sensitivity and high efficiency, and efficient extraction and full use of the information in three-dimensional fluorescence spectra improve measurement accuracy. A novel quantitative analysis method based on Quasi-Monte-Carlo integration is proposed to improve measurement sensitivity and reduce random error. The partial least squares method is used to solve the resulting nonlinear equations and avoid the effects of multicollinearity. The recovery rates of a blend oil mixed from peanut oil, soybean oil and sunflower oil were calculated to verify the accuracy of the method, and are higher than those of the linear method commonly used for component concentration measurement.
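    Quasi-Monte-Carlo integration replaces pseudo-random sample points with a low-discrepancy sequence, reducing the random error of the estimate. The sketch below integrates a simple smooth function over the unit square using a 2D Halton sequence, standing in for integration over two spectral axes; the integrand and point count are illustrative, not the paper's.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def halton(n, bases=(2, 3)):
    """First n points of the 2D Halton low-discrepancy sequence."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n + 1)])

# Quasi-Monte-Carlo estimate of the integral of f(x, y) = x * y over
# [0, 1]^2; the exact value is 1/4.
pts = halton(4096)
estimate = np.mean(pts[:, 0] * pts[:, 1])
```

For smooth integrands the quasi-Monte-Carlo error decays roughly like (log N)^d / N, compared with N^(-1/2) for plain Monte Carlo, which is the sensitivity gain the abstract appeals to.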

  16. Accuracy of patient-specific guided glenoid baseplate positioning for reverse shoulder arthroplasty.

    PubMed

    Levy, Jonathan C; Everding, Nathan G; Frankle, Mark A; Keppler, Louis J

    2014-10-01

    The accuracy of reproducing a surgical plan during shoulder arthroplasty is improved by computer assistance. Intraoperative navigation, however, is challenged by increased surgical time and additional technically difficult steps. Patient-matched instrumentation has the potential to reproduce a similar degree of accuracy without the need for additional surgical steps. The purpose of this study was to examine the accuracy of patient-specific planning and a patient-specific drill guide for glenoid baseplate placement in reverse shoulder arthroplasty. A patient-specific glenoid baseplate drill guide for reverse shoulder arthroplasty was produced for 14 cadaveric shoulders based on a plan developed by a virtual preoperative 3-dimensional planning system using thin-cut computed tomography images. Using this patient-specific guide, high-volume shoulder surgeons exposed the glenoid through a deltopectoral approach and drilled the bicortical pathway defined by the guide. The trajectory of the drill path was compared with the virtual preoperative planned position using similar thin-cut computed tomography images to define accuracy. The drill pathway defined by the patient-matched guide was found to be highly accurate when compared with the preoperative surgical plan. The translational accuracy was 1.2 ± 0.7 mm. The accuracy of inferior tilt was 1.2° ± 1.2°. The accuracy of glenoid version was 2.6° ± 1.7°. The use of patient-specific glenoid baseplate guides is highly accurate in reproducing a virtual 3-dimensional preoperative plan. This technique delivers the accuracy observed using computerized navigation without any additional surgical steps or technical challenges. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  17. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  18. Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System.

    PubMed

    Zhang, Jiarui; Zhang, Yingjie; Chen, Bo

    2017-12-20

    The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields, and its measurement accuracy is mainly determined by the accuracy of the out-of-focus projector calibration. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. The paper first demonstrates experimentally that the projector exhibits noticeable distortions outside its focus plane. Based on this observation, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. The experimental results demonstrate that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.
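    A radial-plus-tangential lens distortion representation of the kind mentioned above is commonly written in Brown-Conrady form. The sketch below applies such a model on normalized plane coordinates and inverts it by fixed-point iteration; the coefficient values are illustrative assumptions, not the paper's calibration results.

```python
import numpy as np

def distort(xy, k, p):
    """Brown-Conrady radial (k1, k2, k3) + tangential (p1, p2) distortion
    applied to normalized projection-plane coordinates."""
    x, y = xy[..., 0], xy[..., 1]
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return np.stack([xd, yd], axis=-1)

def undistort(xy_d, k, p, iters=20):
    """Invert the model by fixed-point iteration: start at the distorted
    point and repeatedly subtract the estimated distortion offset."""
    xy = xy_d.copy()
    for _ in range(iters):
        xy = xy_d - (distort(xy, k, p) - xy)
    return xy

pts = np.array([[0.1, -0.2], [0.3, 0.25], [0.0, 0.0]])
k, p = (0.08, -0.01, 0.001), (0.002, -0.001)   # illustrative coefficients
restored = undistort(distort(pts, k, p), k, p)  # round-trip recovers pts
```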

  19. Accuracy assessment of surgical planning and three-dimensional-printed patient-specific guides for orthopaedic osteotomies.

    PubMed

    Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart

    2017-06-01

    This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones in two phases. In the first phase, the effect of guide design and of oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and the average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that for one model osteotomy, the use of guides resulted in a more accurate cut than the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operating room, with patient-specific guides accurately guiding the surgical tools to perform the desired cuts.

  20. Simulation of springback and microstructural analysis of dual phase steels

    NASA Astrophysics Data System (ADS)

    Kalyan, T. Sri.; Wei, Xing; Mendiguren, Joseba; Rolfe, Bernard

    2013-12-01

    With increasing demand for weight reduction and better crashworthiness in car development, advanced high strength Dual Phase (DP) steels have been progressively used for automotive parts. Higher strength steels exhibit more springback and lower dimensional accuracy after stamping, which has necessitated simulating each stamped component prior to production to estimate the part's dimensional accuracy. Understanding the micro-mechanical behaviour of AHSS sheet may add accuracy to stamping simulations. This work divides into two parts: first, modelling a standard channel forming process; second, modelling the microstructure of the process. The standard top-hat channel forming process, benchmark NUMISHEET'93, is used to investigate the springback of WISCO Dual Phase steels. The second part comprises finite element analysis of microstructures to understand the behaviour of the multi-phase steel at a more fundamental level. The outcomes of this work will help in the dimensional control of steels during the manufacturing stage based on the material's microstructure.

  1. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features than its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200 ms and 600 ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracies of 0.86 mm and 0.89 mm for the 200 ms and 600 ms lookahead lengths respectively, compared to 0.95 mm and 1.04 mm for the PCA-based method. Paired t-tests further demonstrated the statistical significance of this improvement, with p-values of 6.33e-3 and 5.78e-5, respectively. 
Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the reliability of prediction in such a low dimensional manifold. The fixed-point iterative approach works well in practice for the pre-image recovery. Our approach is particularly suitable for managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
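    The kernel-PCA-plus-pre-image pipeline can be sketched compactly. The data below are a toy stand-in for the level-set surfaces, kernel centring is omitted for brevity, and the fixed-point pre-image iteration follows the standard Gaussian-kernel form (Mika et al.); none of the parameter values are from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, gamma=0.5):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy high-dimensional "states" standing in for level-set surfaces
# (illustrative data, NOT the VisionRT point clouds of the abstract).
t = rng.uniform(0.0, 2.0 * np.pi, 60)
X = np.stack([np.sin(t), np.cos(t), np.sin(2 * t), 0.3 * t], axis=1)
n = len(X)

# Kernel PCA on the training states (centring omitted for brevity).
K = rbf(X, X)
evals, evecs = np.linalg.eigh(K)
evals, evecs = evals[::-1], evecs[:, ::-1]          # descending order

def project_coeffs(x, q):
    """Coefficients beta such that the projection of phi(x) onto the top-q
    kernel principal components equals sum_i beta_i * phi(x_i)."""
    kx = rbf(x[None, :], X).ravel()
    Vq, Lq = evecs[:, :q], evals[:q]
    return Vq @ ((Vq.T @ kx) / Lq)

def preimage(beta, z0, iters=100):
    """Fixed-point pre-image iteration for the Gaussian kernel:
    z <- sum_i w_i * x_i with w_i proportional to beta_i * k(z, x_i)."""
    z = z0.copy()
    for _ in range(iters):
        w = beta * rbf(z[None, :], X).ravel()
        s = w.sum()
        if abs(s) < 1e-12:       # degenerate weights: stop early
            break
        z = (w @ X) / s
    return z

# Project a state onto a low-dimensional (q = 2) feature submanifold,
# then recover the corresponding point in the original state space.
x = X[0]
z_low = preimage(project_coeffs(x, q=2), z0=x)
```

In the study's setting, the prediction step would act on the low-dimensional feature coordinates before the pre-image recovery; here the projection itself stands in for that step.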

  2. New method of 2-dimensional metrology using mask contouring

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka

    2008-10-01

    We have developed a new method of accurately profiling and measuring a mask shape using a mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with conventional image-processing methods for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable to CD-SEM measurement of semiconductor devices. By utilizing high-precision contour profiles, the method realizes two-dimensional metrology for refined patterns that had been difficult to measure conventionally. In this report, we introduce the algorithm in general, experimental results, and applications in practice. As the design rule for semiconductor devices has shrunk further, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data-processing cost for advanced MDP (Mask Data Preparation) and the surge in mask-making cost have become major concerns for device manufacturers. Demands on quality are becoming strenuous because of the enormous growth of data with increasingly refined patterns in photomask manufacture. As a result, a massive number of simulated errors occur during mask inspection, lengthening the mask production and inspection period and increasing cost and delivery time. There is thus a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered on the mask business. To cope with this problem, we propose a DFM solution using two-dimensional metrology for refined patterns.

  3. Physical properties and microstructure study of stainless steel 316L alloy fabricated by selective laser melting

    NASA Astrophysics Data System (ADS)

    Islam, Nurul Kamariah Md Saiful; Harun, Wan Sharuzi Wan; Ghani, Saiful Anwar Che; Omar, Mohd Asnawi; Ramli, Mohd Hazlen; Ismail, Muhammad Hussain

    2017-12-01

    Selective Laser Melting (SLM) exemplifies 21st-century manufacturing: powdered raw material is melted by a high-energy focused laser and built up layer by layer to form three-dimensional metal parts. The SLM process involves a variety of process parameters that affect the final material properties. 316L stainless steel compacts were manufactured by SLM while varying the building orientation and powder layer thickness. The effects of these parameters on the relative density and dimensional accuracy of the as-built 316L stainless steel compacts were measured and analysed, and the relationship between the microstructures and the physical properties of the fabricated compacts was investigated. The results revealed that the 90° building orientation gives higher relative density and dimensional accuracy than the 0° building orientation, and that building orientation has a more significant effect on the dimensional accuracy and relative density of SLM compacts than build layer thickness. Nevertheless, the large number and size of pores strongly contributes to the low density observed.

  4. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extremely high strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This research work presents the effect of the process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness, a material used extensively in manufacturing industries, was taken as the workpiece. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels for each process parameter; the clockwise cut direction was followed in all experiments. The measurements were then analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter, with ANOVA revealing the effect of each input process parameter on the linear dimension along the X axis. The results give the optimal process parameter settings for the linear dimension along the X axis and clearly show that a specific range of the input process parameters achieves improved machinability.

  5. Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM) algorithm for high-dimensional breast cancer database classification, with feature selection based on Laplacian Score

    NASA Astrophysics Data System (ADS)

    Lestari, A. W.; Rustam, Z.

    2017-07-01

    In the last decade, breast cancer has become a focus of world attention, as this disease is one of the primary leading causes of death for women; correct precautions and treatment are therefore necessary. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify high-dimensional breast cancer data using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis, Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective is to obtain the best accuracy in the classification of breast cancer data. To improve the accuracy of the two methods, the candidate features are evaluated using feature selection based on the Laplacian Score. The results compare the accuracy and running time of FPCM and NKFPCM with and without feature selection.
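
    The Laplacian Score criterion used for feature selection is compact enough to sketch. The implementation below follows the standard definition with a dense heat-kernel affinity graph; it is not the paper's code, and the toy two-cluster data are invented:

```python
import numpy as np

def laplacian_score(X, t=1.0):
    """Laplacian Score of each feature; lower scores indicate features
    that better preserve the local (cluster) structure of the data."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / t)              # heat-kernel affinity graph
    D = S.sum(1)                     # node degrees
    L = np.diag(D) - S               # graph Laplacian
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D) / D.sum()    # remove the trivial constant component
        scores.append((f @ L @ f) / (f @ (D * f)))
    return np.array(scores)

# Two clusters separated along feature 0; feature 1 is pure noise
X = np.array([[0.0, 0.3], [0.1, 0.9], [0.05, 0.1],
              [5.0, 0.8], [5.1, 0.2], [4.95, 0.6]])
s = laplacian_score(X)               # s[0] << s[1]
```

Features are then ranked by ascending score and the top candidates kept before clustering.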

  6. Build Angle: Does It Influence the Accuracy of 3D-Printed Dental Restorations Using Digital Light-Processing Technology?

    PubMed

    Osman, Reham B; Alharbi, Nawal; Wismeijer, Daniel

    The aim of this study was to evaluate the effect of the build orientation/build angle on the dimensional accuracy of full-coverage dental restorations manufactured using digital light-processing technology (DLP-AM). A full dental crown was digitally designed and 3D-printed using DLP-AM. Nine build angles were used: 90, 120, 135, 150, 180, 210, 225, 240, and 270 degrees. The specimens were digitally scanned using a high-resolution optical surface scanner (IScan D104i, Imetric). Dimensional accuracy was evaluated using the digital subtraction technique. The 3D digital files of the scanned printed crowns (test model) were exported in standard tessellation language (STL) format and superimposed on the STL file of the designed crown [reference model] using Geomagic Studio 2014 (3D Systems). The root mean square estimate (RMSE) values were evaluated, and the deviation patterns on the color maps were further assessed. The build angle influenced the dimensional accuracy of 3D-printed restorations. The lowest RMSE was recorded for the 135-degree and 210-degree build angles. However, the overall deviation pattern on the color map was more favorable with the 135-degree build angle in contrast with the 210-degree build angle where the deviation was observed around the critical marginal area. Within the limitations of this study, the recommended build angle using the current DLP system was 135 degrees. Among the selected build angles, it offers the highest dimensional accuracy and the most favorable deviation pattern. It also offers a self-supporting crown geometry throughout the building process.
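
    Once the scanned crown is superimposed on the reference STL, the RMSE figure reduces to the computation below. Point correspondence is assumed here for simplicity, whereas the study derived deviations from surface superimposition in Geomagic:

```python
import numpy as np

def rmse(test_pts, ref_pts):
    """Root-mean-square deviation between corresponding points of the
    registered test scan and the reference model."""
    d = np.linalg.norm(test_pts - ref_pts, axis=1)
    return float(np.sqrt((d ** 2).mean()))

# Toy example: four reference points and a deviated test scan
ref = np.zeros((4, 3))
test = np.array([[1.0, 0, 0], [0, 1, 0], [3, 0, 0], [0, 0, 3]])
err = rmse(test, ref)   # sqrt((1 + 1 + 9 + 9) / 4) = sqrt(5)
```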

  7. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  8. On-line analysis of algae in water by discrete three-dimensional fluorescence spectroscopy.

    PubMed

    Zhao, Nanjing; Zhang, Xiaoling; Yin, Gaofang; Yang, Ruifang; Hu, Li; Chen, Shuang; Liu, Jianguo; Liu, Wenqing

    2018-03-19

    To address the problem of the on-line measurement and classification of algae, a method of algae classification and concentration determination based on discrete three-dimensional fluorescence spectra was studied in this work. The discrete three-dimensional fluorescence spectra of twelve common species of algae belonging to five categories were analyzed, discrete three-dimensional standard spectra of the five categories were built, and the recognition, classification and concentration prediction of algae categories were realized by coupling the discrete three-dimensional fluorescence spectra with non-negative weighted least squares linear regression analysis. The results show that similarities between the discrete three-dimensional standard spectra of different categories were reduced and the accuracies of recognition, classification and concentration prediction of the algae categories were significantly improved. Compared with the chlorophyll a fluorescence excitation spectra method, the recognition accuracy rate in pure samples by discrete three-dimensional fluorescence spectra is improved by 1.38%, and the recovery rate and classification accuracy in pure diatom samples by 34.1% and 46.8%, respectively; the recognition accuracy rate of mixed samples is enhanced by 26.1%, the recovery rate of mixed samples with Chlorophyta by 37.8%, and the classification accuracy of mixed samples with diatoms by 54.6%.
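
    The unmixing step, expressing a measured spectrum as a non-negative combination of category standard spectra, can be sketched with a simple projected-gradient non-negative least squares solver. This is a generic stand-in for the paper's weighted regression, and the spectra below are toy numbers:

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    """Non-negative least squares by projected gradient descent:
    minimize ||A x - b||^2 subject to x >= 0."""
    lr = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size from largest eigenvalue
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - lr * (A.T @ (A @ x - b)))
    return x

# Toy "standard spectra" of two algae categories (columns of A) and a
# measured mixture containing 70% of category 1 and 30% of category 2
A = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [0.1, 0.6]])
b = A @ np.array([0.7, 0.3])
x = nnls_pg(A, b)        # recovers the non-negative abundances
```

The fitted coefficients serve both as category indicators (which entries are non-zero) and as concentration estimates.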

  9. Virtual Assessment of Sex: Linear and Angular Traits of the Mandibular Ramus Using Three-Dimensional Computed Tomography.

    PubMed

    Inci, Ercan; Ekizoglu, Oguzhan; Turkay, Rustu; Aksoy, Sema; Can, Ismail Ozgur; Solmaz, Dilek; Sayin, Ibrahim

    2016-10-01

    Morphometric analysis of the mandibular ramus (MR) provides highly accurate data to discriminate sex. The objective of this study was to demonstrate the utility and accuracy of MR morphometric analysis for sex identification in a Turkish population. Four hundred fifteen Turkish patients (18-60 y; 201 male and 214 female) who had previously had multidetector computed tomography scans of the cranium were included in the study. Multidetector computed tomography images were obtained using three-dimensional reconstructions and a volume-rendering technique, and 8 linear and 3 angular values were measured. Univariate, bivariate, and multivariate discriminant analyses were performed, and the accuracy rates for determining sex were calculated. Mandibular ramus values produced high accuracy rates of 51% to 95.6%. Upper ramus vertical height had the highest rate at 95.6%, and bivariate analysis showed 89.7% to 98.6% accuracy rates with the highest ratios of mandibular flexure upper border and maximum ramus breadth. Stepwise discrimination analysis gave a 99% accuracy rate for all MR variables. Our study showed that the MR, in particular morphometric measures of the upper part of the ramus, can provide valuable data to determine sex in a Turkish population. The method combines both anthropological and radiologic studies.
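
    The discriminant step can be sketched with Fisher's linear discriminant on two of the linear measurements. The means and spreads below are invented for illustration and are not the study's population statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical [upper ramus height, maximum ramus breadth] in mm
male = rng.normal([63.0, 42.0], [3.0, 2.5], (200, 2))
female = rng.normal([55.0, 38.0], [3.0, 2.5], (200, 2))

# Fisher's linear discriminant: w is proportional to Sw^{-1} (m1 - m0)
m1, m0 = male.mean(0), female.mean(0)
Sw = np.cov(male.T) * (len(male) - 1) + np.cov(female.T) * (len(female) - 1)
w = np.linalg.solve(Sw, m1 - m0)
thresh = w @ (m1 + m0) / 2           # midpoint cut for equal priors

X = np.vstack([male, female])
y = np.r_[np.ones(200), np.zeros(200)]
acc = ((X @ w > thresh) == y).mean() # resubstitution accuracy
```

Adding further measurements (as in the study's stepwise multivariate analysis) raises the separability, which is why the combined model outperformed any single trait.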

  10. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.

    2015-12-01

    Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.

  11. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.

    PubMed

    Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k -nearest neighbor graphs- ( K -NNG-) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
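
    The ensemble k-NN half of the model can be sketched as follows; the deep-autoencoder compression stage is omitted here, and the data are synthetic, so this is only an illustration of the subset-ensemble idea:

```python
import numpy as np

def knn_anomaly_scores(X_train, X_test, k=5, n_subsets=10, frac=0.5, seed=0):
    """Ensemble k-NN anomaly score: average distance to the k-th nearest
    neighbour over random subsets of the nominal training data."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_test))
    for _ in range(n_subsets):
        idx = rng.choice(len(X_train), size=int(frac * len(X_train)),
                         replace=False)
        d = np.linalg.norm(X_test[:, None, :] - X_train[idx][None, :, :],
                           axis=-1)
        scores += np.sort(d, axis=1)[:, k - 1]   # distance to k-th neighbour
    return scores / n_subsets

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, (300, 10))            # nominal sample
queries = np.vstack([rng.normal(0.0, 1.0, (5, 10)),  # in-distribution
                     rng.normal(6.0, 1.0, (5, 10))]) # anomalies
s = knn_anomaly_scores(nominal, queries)             # anomalies score higher
```

Averaging over random subsets both reduces the cost of each neighbour search and stabilizes the score, which is the role the K-NNG ensemble plays in the paper.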

  12. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    PubMed Central

    Jiang, Zhuqing; Men, Aidong; Yang, Bo

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs- (K-NNG-) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity. PMID:29270197

  13. Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities

    NASA Astrophysics Data System (ADS)

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    Performance of a brain machine interface (BMI) critically depends on the selection of input data, because the information embedded in neural activities is highly redundant. In addition, properly selected input data with a reduced dimension lead to improved decoding generalization ability and decreased computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally while preserving the decoding accuracy as far as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
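
    The greedy spirit of SDR can be sketched with backward elimination. A leave-one-out nearest-centroid decoder stands in for the paper's SVM, and the "recording" is synthetic, so this only illustrates the drop-while-accuracy-holds loop:

```python
import numpy as np

def loo_centroid_acc(X, y):
    """Leave-one-out accuracy of a nearest-centroid decoder (a light
    stand-in for the SVM decoder used in the paper)."""
    classes = np.unique(y)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = np.array([X[mask][y[mask] == c].mean(0) for c in classes])
        correct += classes[np.argmin(np.linalg.norm(cents - X[i], axis=1))] == y[i]
    return correct / len(y)

def sequential_reduction(X, y, tol=0.0):
    """Greedily drop the input dimension whose removal hurts accuracy least,
    as long as accuracy stays within `tol` of the current value."""
    keep = list(range(X.shape[1]))
    acc = loo_centroid_acc(X, y)
    while len(keep) > 1:
        trials = [(loo_centroid_acc(X[:, [d for d in keep if d != f]], y), f)
                  for f in keep]
        best_acc, drop = max(trials)
        if best_acc < acc - tol:
            break
        keep.remove(drop)
        acc = best_acc
    return keep, acc

# Synthetic data: dimension 0 codes the stimulus, dimensions 1-4 are noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
signal = np.where(y[:, None] == 1, 2.0, -2.0) + 0.3 * rng.normal(size=(40, 1))
X = np.hstack([signal, rng.normal(size=(40, 4))])
keep, acc = sequential_reduction(X, y)   # the noise dimensions are dropped
```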

  14. Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study

    PubMed Central

    Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi

    2013-01-01

    The aim of this pilot study was to evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710

  15. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.

    PubMed

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion: the features at this level are raw biometric data, which contain rich information compared to the decision and matching-score levels, so information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to diminish the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches; among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
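
    The fusion-then-reduce-then-classify pipeline can be sketched as follows. The feature vectors are synthetic stand-ins for real palmprint and iris features, and the PCA/kNN components are generic implementations, not the paper's code:

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal components via SVD of the centered data."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def knn_predict(Xtr, ytr, Xte, k=3):
    """Plain k-nearest-neighbour majority-vote classification."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(ytr[row]).argmax() for row in nn])

# Synthetic "palmprint" and "iris" features: 4 subjects x 10 samples each
rng = np.random.default_rng(0)
y = np.repeat(np.arange(4), 10)
palm = rng.normal(size=(4, 30))[y] + 0.3 * rng.normal(size=(40, 30))
iris = rng.normal(size=(4, 30))[y] + 0.3 * rng.normal(size=(40, 30))

fused = np.hstack([palm, iris])          # feature-level fusion
mu, comps = pca_fit(fused, 8)            # tame the fused dimensionality
Z = (fused - mu) @ comps.T

tr = np.arange(40) % 2 == 0              # alternate samples for train/test
pred = knn_predict(Z[tr], y[tr], Z[~tr])
acc = (pred == y[~tr]).mean()
```

Concatenating before PCA is what distinguishes feature-level fusion from score- or decision-level fusion, where each modality would be classified separately first.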

  16. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    PubMed Central

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion: the features at this level are raw biometric data, which contain rich information compared to the decision and matching-score levels, so information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to diminish the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches; among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813

  17. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
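
    As a concrete illustration of the WENO idea discussed above (not the paper's solver), the classic fifth-order Jiang-Shu reconstruction of an interface value from five cell averages can be written in a few lines:

```python
import numpy as np

def weno5(u, eps=1e-6):
    """Fifth-order WENO reconstruction of the value at the right interface
    of the middle cell, from five cell averages u[0..4]."""
    # Three third-order candidate reconstructions
    p0 = (2*u[0] - 7*u[1] + 11*u[2]) / 6
    p1 = (-u[1] + 5*u[2] + 2*u[3]) / 6
    p2 = (2*u[2] + 5*u[3] - u[4]) / 6
    # Smoothness indicators: large where a stencil crosses a discontinuity
    b0 = 13/12*(u[0] - 2*u[1] + u[2])**2 + 0.25*(u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12*(u[1] - 2*u[2] + u[3])**2 + 0.25*(u[1] - u[3])**2
    b2 = 13/12*(u[2] - 2*u[3] + u[4])**2 + 0.25*(3*u[2] - 4*u[3] + u[4])**2
    # Nonlinear weights biased toward the smooth stencils
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

smooth = weno5(np.array([0.0, 1.0, 2.0, 3.0, 4.0]))  # linear data: exact 2.5
shock = weno5(np.array([0.0, 0.0, 0.0, 1.0, 1.0]))   # stays near 0
```

The nonlinear weights are what keep the scheme non-oscillatory at shocks while retaining high-order accuracy in smooth regions; the residual smearing they introduce is the numerical viscosity the paper quantifies.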

  18. Silicon Micromachined Dimensional Calibration Artifact for M

    ScienceCinema

    Hy Tran

    2017-12-09

    Improves measurement accuracy for producing miniaturized devices such as fuel injectors, watch components, and inkjet printer parts as these high-volume parts are being manufactured. 2008 R&D 100 winner (SAND2008-2227P)

  19. Cryo-EM image alignment based on nonuniform fast Fourier transform.

    PubMed

    Yang, Zhengfan; Penczek, Pawel A

    2008-08-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
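
    The translational core of such 2-D alignment can be sketched with the FFT cross-correlation theorem; the rotational search and the gridding-based (NUFFT) interpolation that give the paper its accuracy gains are omitted in this sketch:

```python
import numpy as np

def shift_between(ref, img):
    """Integer 2-D shift of `img` relative to `ref`, from the peak of the
    FFT-based circular cross-correlation."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Map wrapped peak indices to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, cc.shape))

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
img = np.roll(ref, (5, -3), axis=(0, 1))
sh = shift_between(ref, img)   # recovers (5, -3)
```

Subpixel and rotational parameters are what require interpolation in Fourier space, and the quality of that interpolation (quadratic vs. gridding) is exactly what the paper compares.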

  20. Cryo-EM Image Alignment Based on Nonuniform Fast Fourier Transform

    PubMed Central

    Yang, Zhengfan; Penczek, Pawel A.

    2008-01-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform Fast Fourier Transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis. PMID:18499351

  1. Improved method for predicting protein fold patterns with ensemble classifiers.

    PubMed

    Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C

    2012-01-27

    Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
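
    The generic combination rule behind such an ensemble is majority voting over the base classifiers' predictions, which can be sketched as:

```python
import numpy as np

def majority_vote(predictions):
    """Combine label predictions from several base classifiers by
    majority vote."""
    preds = np.asarray(predictions)   # shape: (n_classifiers, n_samples)
    return np.array([np.bincount(col).argmax() for col in preds.T])

# Three base classifiers voting on four samples
votes = [[0, 1, 1, 2],
         [0, 1, 2, 2],
         [1, 1, 2, 2]]
combined = majority_vote(votes)       # -> [0, 1, 2, 2]
```

In practice each base classifier would be trained on the 188- or 20-dimensional feature vectors described above; the voting step itself is independent of the feature representation.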

  2. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    PubMed

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other applications. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery, and validated the precision of the extraction, based on Barista software. It was shown that the extraction of three-dimensional building information from high-resolution satellite imagery based on Barista software had the advantages of requiring little specialized expertise, broad applicability, simple operation, and high precision. One-pixel point positioning and height determination accuracy could be achieved if the digital elevation model (DEM) and sensor orientation model had sufficiently high precision and the off-nadir view angle was favourable.

  3. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" scanner (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), they all met the requirements for oral clinic use.
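
    After ICP registration, a "3D error" of this kind reduces to averaging unsigned point-to-nearest-point distances between the two scans. The sketch below is a simplified stand-in for the surface-deviation maps computed in Geomagic, with a toy point cloud:

```python
import numpy as np

def mean_3d_error(test_pts, ref_pts):
    """Mean unsigned point-to-nearest-point distance between a registered
    test scan and the reference scan."""
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Toy example: a flat reference grid and a test scan offset by 0.5 mm in z
ref = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
test = ref + np.array([0.0, 0.0, 0.5])
err = mean_3d_error(test, ref)   # 0.5 mm
```

Restricting the test points to a facial partition before averaging yields the per-region (upper/middle/lower face) figures reported in the study.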

  4. Silicon Micromachined Dimensional Calibration Artifact for M

    ScienceCinema

    Sandia

    2017-12-09

    Improves measurement accuracy for producing miniaturized devices such as fuel injectors, watch components, and inkjet printer parts as these high-volume parts are being manufactured. 2008 R&D 100 winner (SAND2008-2227P)

  5. Optimizing the Use of LiDAR for Hydraulic and Sediment Transport Model Development: Case Studies from Marin and Sonoma Counties, CA

    NASA Astrophysics Data System (ADS)

    Kobor, J. S.; O'Connor, M. D.; Sherwood, M. N.

    2013-12-01

    Effective floodplain management and restoration requires a detailed understanding of floodplain processes not readily achieved using standard one-dimensional hydraulic modeling approaches. The application of more advanced numerical models is, however, often limited by the relatively high costs of acquiring the high-resolution topographic data needed for model development using traditional surveying methods. The increasing availability of LiDAR data has the potential to significantly reduce these costs and thus facilitate application of multi-dimensional hydraulic models where budget constraints would have otherwise prohibited their use. The accuracy and suitability of LiDAR data for supporting model development can vary widely depending on the resolution of channel and floodplain features, the data collection density, and the degree of vegetation canopy interference among other factors. More work is needed to develop guidelines for evaluating LiDAR accuracy and determining when and how best the data can be used to support numerical modeling activities. Here we present two recent case studies where LiDAR datasets were used to support floodplain and sediment transport modeling efforts. One LiDAR dataset was collected with a relatively low point density and used to study a small stream channel in coastal Marin County and a second dataset was collected with a higher point density and applied to a larger stream channel in western Sonoma County. Traditional topographic surveying was performed at both sites which provided a quantitative means of evaluating the LiDAR accuracy. We found that with the lower point density dataset, the accuracy of the LiDAR varied significantly between the active stream channel and floodplain whereas the accuracy across the channel/floodplain interface was more uniform with the higher density dataset. Accuracy also varied widely as a function of the density of the riparian vegetation canopy. 
We found that coupled 1- and 2-dimensional hydraulic models whereby the active channel is simulated in 1-dimension and the floodplain in 2-dimensions provided the best means of utilizing the LiDAR data to evaluate existing conditions and develop alternative flood hazard mitigation and habitat restoration strategies. Such an approach recognizes the limitations of the LiDAR data within active channel areas with dense riparian cover and is cost-effective in that it allows field survey efforts to focus primarily on characterizing active stream channel areas. The multi-dimensional modeling approach also conforms well to the physical realities of the stream system whereby in-channel flows can generally be well-described as a one-dimensional flow problem and floodplain flows are often characterized by multiple and often poorly understood flowpaths. The multi-dimensional modeling approach has the additional advantages of allowing for accurate simulation of the effects of hydraulic structures using well-tested one-dimensional formulae and minimizing the computational burden of the models by not requiring the small spatial resolutions necessary to resolve the geometries of small stream channels in two dimensions.

  6. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
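The invariance claim above, that a non-singular affine transformation leaves Gaussian maximum likelihood class assignments unchanged, is easy to check numerically. A minimal sketch with synthetic 4-band data (illustrative only, not the AVIRIS experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "radiance" classes in 4 spectral bands.
X0 = rng.normal(0.0, 1.0, size=(200, 4))
X1 = rng.normal(1.5, 1.0, size=(200, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def gaussian_ml_fit(X, y):
    """Per-class mean and covariance for maximum likelihood classification."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def gaussian_ml_predict(X, params):
    scores = []
    for c, (mu, cov) in sorted(params.items()):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = X - mu
        # Gaussian log-likelihood up to a class-independent constant
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.argmax(np.vstack(scores), axis=0)

# A non-singular affine "reflectance" transform: X -> X A^T + b.
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # well conditioned, non-singular
b = rng.normal(size=4)
Xt = X @ A.T + b

pred_radiance = gaussian_ml_predict(X, gaussian_ml_fit(X, y))
pred_reflectance = gaussian_ml_predict(Xt, gaussian_ml_fit(Xt, y))
agreement = np.mean(pred_radiance == pred_reflectance)
```

The affine map shifts every class log-likelihood by the same log|det A| term, so the argmax (and hence each label) is unchanged up to floating-point noise.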

  7. Compound activity prediction using models of binding pockets or ligand properties in 3D

    PubMed Central

    Kufareva, Irina; Chen, Yu-Chen; Ilatovskiy, Andrey V.; Abagyan, Ruben

    2014-01-01

Transient interactions of endogenous and exogenous small molecules with flexible binding sites in proteins or macromolecular assemblies play a critical role in all biological processes. Current advances in high-resolution protein structure determination, database development, and docking methodology make it possible to design three-dimensional models for prediction of such interactions with increasing accuracy and specificity. Using the data collected in the Pocketome encyclopedia, we here provide an overview of two types of three-dimensional ligand activity models, pocket-based and ligand property-based, for two important classes of proteins, nuclear and G-protein coupled receptors. For half the targets, the pocket models discriminate actives from property-matched decoys with acceptable accuracy (area under the ROC curve, AUC, exceeding 84%), and for about one fifth of the targets with high accuracy (AUC > 95%). The 3D ligand property field models achieved AUC above 95% in half of the cases. The high-performance models can already serve as a basis for activity prediction for new chemicals. Family-wide benchmarking of the models highlights strengths of both approaches and helps identify their inherent bottlenecks and challenges. PMID:23116466

  8. Micro-assembly of three-dimensional rotary MEMS mirrors

    NASA Astrophysics Data System (ADS)

    Wang, Lidai; Mills, James K.; Cleghorn, William L.

    2009-02-01

We present a novel approach to construct three-dimensional rotary micro-mirrors, which are fundamental components for building 1×N or N×M optical switching systems. A rotary micro-mirror consists of two microparts: a rotary micro-motor and a micro-mirror. Both microparts are fabricated with PolyMUMPs, a surface micromachining process. A sequential robotic microassembly process is developed to join the two microparts together to construct a three-dimensional device. In order to achieve high positioning accuracy and a strong mechanical connection, the micro-mirror is joined to the micro-motor using an adhesive mechanical fastener. The mechanical fastener has self-alignment ability and provides a temporary joint between the two microparts. The adhesive bonding creates a strong permanent connection, which does not require extra supporting plates for the micro-mirror. A hybrid manipulation strategy, which includes pick-and-place and pushing-based manipulations, is utilized to manipulate the micro-mirror. The pick-and-place manipulation can globally position the micro-mirror in six degrees of freedom, while the pushing-based manipulation achieves high positioning accuracy. This microassembly approach has great flexibility and high accuracy; furthermore, it does not require extra supporting plates, which greatly simplifies the assembly process.

  9. Accuracy versus convergence rates for a three dimensional multistage Euler code

    NASA Technical Reports Server (NTRS)

    Turkel, Eli

    1988-01-01

    Using a central difference scheme, it is necessary to add an artificial viscosity in order to reach a steady state. This viscosity usually consists of a linear fourth difference to eliminate odd-even oscillations and a nonlinear second difference to suppress oscillations in the neighborhood of steep gradients. There are free constants in these differences. As one increases the artificial viscosity, the high modes are dissipated more and the scheme converges more rapidly. However, this higher level of viscosity smooths the shocks and eliminates other features of the flow. Thus, there is a conflict between the requirements of accuracy and efficiency. Examples are presented for a variety of three-dimensional inviscid solutions over isolated wings.
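The dissipation model described, a nonlinear second difference switched on by a pressure sensor plus a linear fourth difference for odd-even modes, can be sketched in one dimension as follows. This is a JST-style sketch with illustrative values for the free constants, not the paper's exact formulation:

```python
import numpy as np

def jst_dissipation(u, p, k2=0.5, k4=1.0 / 32.0):
    """Blended 2nd/4th-difference artificial dissipation on a periodic 1-D grid.

    k2, k4 are the free constants discussed above: raising them speeds
    convergence but smears shocks and other flow features.
    """
    # Pressure-based shock sensor: large near steep gradients, ~0 in smooth flow.
    nu = np.abs(np.roll(p, -1) - 2 * p + np.roll(p, 1)) / (
        np.roll(p, -1) + 2 * p + np.roll(p, 1))
    eps2 = k2 * np.maximum(np.roll(nu, -1), nu)   # 2nd difference: on near shocks
    eps4 = np.maximum(0.0, k4 - eps2)             # 4th difference: off near shocks
    d1 = np.roll(u, -1) - u                                           # Δu at i+1/2
    d3 = np.roll(u, -2) - 3 * np.roll(u, -1) + 3 * u - np.roll(u, 1)  # Δ³u at i+1/2
    flux = eps2 * d1 - eps4 * d3
    return flux - np.roll(flux, 1)    # divergence of the dissipative flux

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
alt = (-1.0) ** np.arange(n)            # odd-even (Nyquist) mode
u = np.sin(x) + 0.1 * alt               # smooth field plus odd-even noise
p = np.ones(n)                          # uniform pressure: sensor stays off
d = jst_dissipation(u, p)

amp_before = abs(np.mean(u * alt))      # odd-even amplitude before
amp_after = abs(np.mean((u + d) * alt)) # ... and after one application
```

With uniform pressure the sensor is zero, so only the linear fourth difference acts; one application with k4 = 1/32 halves the odd-even amplitude while leaving the smooth sine component essentially untouched.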

  10. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
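The core 1-D TDRW step, drawing the time a particle needs to cross a homogeneous segment and then chaining segment times across interface checkpoints, can be sketched as follows. The Gaussian travel-time density used here is a common high-Péclet approximation, not necessarily the exact analytical solutions used in the paper:

```python
import numpy as np

def tdrw_travel_time(L, v, D, rng, n=1):
    """Sample particle travel times across a homogeneous segment of length L.

    Gaussian approximation of the 1-D advection-dispersion travel-time
    density: mean L/v, variance 2*D*L/v**3. Valid for advection-dominated
    transport (high Peclet number Pe = v*L/D), consistent with the
    accuracy limit noted in the abstract.
    """
    mean = L / v
    std = np.sqrt(2.0 * D * L / v**3)
    return rng.normal(mean, std, size=n)

rng = np.random.default_rng(1)
L, v, D = 10.0, 1.0, 0.05            # Pe = v*L/D = 200: advection-dominated
t = tdrw_travel_time(L, v, D, rng, n=100_000)

# Heterogeneous medium as a piecewise collection of homogeneous segments:
# total time is the sum of per-segment times across interface checkpoints.
segments = [(4.0, 1.0, 0.05), (6.0, 0.5, 0.02)]   # (length, velocity, dispersion)
t_total = sum(tdrw_travel_time(Ls, vs, Ds, rng, n=100_000)
              for Ls, vs, Ds in segments)
```

The sample mean of `t` should approach L/v = 10 and its spread sqrt(2DL/v³) = 1, while the chained two-segment times should average 4/1 + 6/0.5 = 16.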

  11. Airborne and ground based lidar measurements of the atmospheric pressure profile

    NASA Technical Reports Server (NTRS)

    Korb, C. Laurence; Schwemmer, Geary K.; Dombrowski, Mark; Weng, Chi Y.

    1989-01-01

    The first high accuracy remote measurements of the atmospheric pressure profile have been made. The measurements were made with a differential absorption lidar system that utilizes tunable alexandrite lasers. The absorption in the trough between two lines in the oxygen A-band near 760 nm was used for probing the atmosphere. Measurements of the two-dimensional structure of the pressure field were made in the troposphere from an aircraft looking down. Also, measurements of the one-dimensional structure were made from the ground looking up. Typical pressure accuracies for the aircraft measurements were 1.5-2 mbar with a 30-m vertical resolution and a 100-shot average (20 s), which corresponds to a 2-km horizontal resolution. Typical accuracies for the upward viewing ground based measurements were 2.0 mbar for a 30-m resolution and a 100-shot average.

  12. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
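The MMS procedure itself is generic and easy to illustrate on a scalar problem: pick a manufactured solution, derive the matching source term by substitution, and confirm the observed order of accuracy under grid refinement. A minimal 1-D steady advection-diffusion sketch (not the combustion codes discussed above):

```python
import numpy as np

def mms_order(a=1.0, nu=0.1, ns=(32, 64, 128)):
    """Verify the order of accuracy of a central-difference solver for
    a*u' - nu*u'' = S using the method of manufactured solutions.

    Manufactured solution u_m(x) = sin(x) on [0, 2*pi]; substituting u_m
    into the PDE gives the source term S = a*cos(x) + nu*sin(x).
    """
    errors = []
    for n in ns:
        h = 2 * np.pi / n
        x = np.linspace(0, 2 * np.pi, n + 1)
        S = a * np.cos(x) + nu * np.sin(x)
        # Central-difference operator with Dirichlet BCs taken from u_m.
        A = np.zeros((n + 1, n + 1))
        rhs = S.copy()
        A[0, 0] = A[n, n] = 1.0
        rhs[0], rhs[n] = np.sin(x[0]), np.sin(x[n])
        for i in range(1, n):
            A[i, i - 1] = -a / (2 * h) - nu / h**2
            A[i, i] = 2 * nu / h**2
            A[i, i + 1] = a / (2 * h) - nu / h**2
        u = np.linalg.solve(A, rhs)
        errors.append(np.max(np.abs(u - np.sin(x))))
    # Observed order from successive grid halvings
    orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
    return errors, orders

errors, orders = mms_order()
```

For this second-order scheme the observed orders should sit near 2; a deviation would flag exactly the kind of implementation or robustness issue MMS is designed to expose.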

  13. Decimated Input Ensembles for Improved Generalization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)

    1999-01-01

Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this reduces the number of training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high-dimensional data domains) for each individual classifier. Generally, this drop in individual classifier accuracy more than offsets any potential gains from combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
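The input-decimation idea, giving each ensemble member a different random feature subset so that members decorrelate while each sees a lower-dimensional problem, can be sketched as follows. The data and the nearest-centroid base classifier are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: 2 classes, 50 features, 10 informative.
n, d = 300, 50
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :10] += y[:, None] * 1.5        # only the first 10 features carry signal

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(X, centroids):
    dists = np.stack([np.linalg.norm(X - mu, axis=1)
                      for _, mu in sorted(centroids.items())])
    return np.argmin(dists, axis=0)

def decimated_ensemble(Xtr, ytr, Xte, n_members=25, n_feats=10, rng=rng):
    """Input decimation: each member trains on a random feature subset,
    which de-correlates the members and lowers each one's input dimension."""
    votes = np.zeros((n_members, len(Xte)), dtype=int)
    for m in range(n_members):
        cols = rng.choice(Xtr.shape[1], size=n_feats, replace=False)
        cents = fit_centroids(Xtr[:, cols], ytr)
        votes[m] = predict_centroids(Xte[:, cols], cents)
    return (votes.mean(axis=0) > 0.5).astype(int)   # average the member outputs

split = 200
pred = decimated_ensemble(X[:split], y[:split], X[split:])
acc = np.mean(pred == y[split:])
```

Each member sees only 10 of the 50 inputs, so members trained on different subsets make fairly independent errors, and the averaged vote recovers accuracy that any single decimated member would struggle to reach.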

  14. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

Building recognition is a nontrivial task in computer vision that can be utilized in robot localization, mobile navigation, and similar applications. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) the systems usually involve high-dimensional data, which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme that integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.

  15. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    PubMed

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to gain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
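A rough sketch of the PROMISE idea follows: score markers by how often they survive the lasso on half-subsamples (stability selection), while using held-out prediction error to pick the penalty. The lasso solver, subsample counts, and frequency threshold are illustrative stand-ins, not the published procedure:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=50):
    """Plain coordinate-descent lasso; columns of X assumed (approximately)
    standardized so the per-coordinate denominator is ~1."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ beta + X[:, j] * beta[j]   # residual excluding feature j
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)  # soft threshold
    return beta

def promise_like(X, y, alphas, rng, n_sub=20, freq=0.7):
    """PROMISE-flavoured sketch: stability selection combined with a
    held-out prediction-error criterion for choosing the penalty.
    The thresholds (n_sub, freq) are illustrative, not the paper's."""
    n = len(y)
    half = n // 2
    results = []
    for alpha in alphas:
        counts = np.zeros(X.shape[1])
        for _ in range(n_sub):
            idx = rng.choice(n, size=half, replace=False)
            counts += lasso_cd(X[idx], y[idx], alpha) != 0
        stable = counts / n_sub >= freq            # stability-selected markers
        tr, te = np.arange(half), np.arange(half, n)
        if stable.any():                           # held-out error of stable set
            cols = np.flatnonzero(stable)
            b = np.linalg.lstsq(X[np.ix_(tr, cols)], y[tr], rcond=None)[0]
            err = np.mean((y[te] - X[np.ix_(te, cols)] @ b) ** 2)
        else:
            err = np.mean((y[te] - y[tr].mean()) ** 2)
        results.append((err, alpha, stable))
    _, alpha, stable = min(results, key=lambda r: r[0])
    return alpha, stable

rng = np.random.default_rng(0)
n, d = 120, 30
X = rng.normal(size=(n, d))
X = (X - X.mean(0)) / X.std(0)
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]                   # three true markers
y = X @ beta_true + rng.normal(scale=0.5, size=n)
alpha, stable = promise_like(X, y, alphas=[0.1, 0.2, 0.4], rng=rng)
```

Strong true markers survive the lasso on essentially every subsample, so they clear the frequency threshold at any penalty in the grid, while noise features rarely do; the held-out error then arbitrates among the surviving penalty values.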

  16. Application of Linear Discriminant Analysis in Dimensionality Reduction for Hand Motion Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.

    2012-01-01

The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimension with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed that are competitive with classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, by using a linear discriminant classifier.
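Classical (Fisher) LDA projection followed by a linear classifier, as in the baseline comparison above, can be sketched as follows. The features are a synthetic stand-in, not EMG recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a multi-channel EMG feature set:
# 3 motion classes, 14-dimensional feature vectors.
n_per, d, C = 100, 14, 3
means = rng.normal(scale=2.0, size=(C, d))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per, d)) for c in range(C)])
y = np.repeat(np.arange(C), n_per)

def lda_fit(X, y, n_components):
    """Fisher LDA projection: maximize between-class scatter relative to
    within-class scatter; at most C-1 useful components."""
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1],) * 2)   # within-class scatter
    Sb = np.zeros_like(Sw)             # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        Sw += (Xc - Xc.mean(0)).T @ (Xc - Xc.mean(0))
        diff = (Xc.mean(0) - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

W = lda_fit(X, y, n_components=C - 1)       # 14-D features -> 2-D subspace
Z = X @ W

# Nearest-class-mean (linear discriminant) classifier in the reduced space
centroids = np.stack([Z[y == c].mean(0) for c in range(C)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = np.mean(pred == y)
```

The projection compresses 14 features to C-1 = 2 dimensions before classification, which is exactly the cost-saving step the abstract motivates.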

  17. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    PubMed

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, however, first generate a sinogram, which is transformed mathematically into pixel-based images; the dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries obtained from edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits to the proposed approach. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by boundary parameters than by image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved.
Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from those modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental industrial CT system data.

  18. Assessment of three-dimensional inviscid codes and loss calculations for turbine aerodynamic computations

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1984-01-01

An assessment of several three-dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations are examined, for which both experimental data and computational results are available. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three-dimensional viscous analysis technique.

  19. Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010

    DTIC Science & Technology

    2010-12-14

SLAM technique since this setup, having a LIDAR with long-range high-accuracy measurement capability, allows accurate localization and mapping more...achieve the accuracy of 25 cm due to the use of multi-dimensional information. OGM is, similarly to SLAM, carried out by using LIDAR data. The OGM...a result of the development and implementation of the hybrid feature-based/scan-matching Simultaneous Localization and Mapping (SLAM) technique, the

  20. Two- and three-dimensional accuracy of dental impression materials: effects of storage time and moisture contamination.

    PubMed

    Chandran, Deepa T; Jagger, Daryll C; Jagger, Robert G; Barbour, Michele E

    2010-01-01

    Dental impression materials are used to create an inverse replica of the dental hard and soft tissues, and are used in processes such as the fabrication of crowns and bridges. The accuracy and dimensional stability of impression materials are of paramount importance to the accuracy of fit of the resultant prosthesis. Conventional methods for assessing the dimensional stability of impression materials are two-dimensional (2D), and assess shrinkage or expansion between selected fixed points on the impression. In this study, dimensional changes in four impression materials were assessed using an established 2D and an experimental three-dimensional (3D) technique. The former involved measurement of the distance between reference points on the impression; the latter a contact scanning method for producing a computer map of the impression surface showing localised expansion, contraction and warpage. Dimensional changes were assessed as a function of storage times and moisture contamination comparable to that found in clinical situations. It was evident that dimensional changes observed using the 3D technique were not always apparent using the 2D technique, and that the former offers certain advantages in terms of assessing dimensional accuracy and predictability of impression methods. There are, however, drawbacks associated with 3D techniques such as the more time-consuming nature of the data acquisition and difficulty in statistically analysing the data.

  1. High Accuracy, Two-Dimensional Read-Out in Multiwire Proportional Chambers

    DOE R&D Accomplishments Database

    Charpak, G.; Sauli, F.

    1973-02-14

In most applications of proportional chambers, especially in high-energy physics, separate chambers are used for measuring different coordinates. In general, one coordinate is obtained by recording the pulses from the anode wires around which avalanches have grown. Several methods have been devised for obtaining the position of an avalanche along a wire. In this article a method is proposed which leads to the same range of accuracies and may be preferred in some cases. The problem of accurate measurements in large-size chambers is also discussed.

  2. Toward On-Demand Deep Brain Stimulation Using Online Parkinson's Disease Prediction Driven by Dynamic Detection.

    PubMed

    Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas

    2017-12-01

In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection, taking into account the demands for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM), is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination of discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
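The overall pipeline, wavelet feature extraction, then dimensionality reduction, then k-nearest-neighbor classification, can be sketched as below. The paper's MRM and dynamic kNN are not reproduced: a Haar transform, plain band-energy features, and standard kNN stand in, on synthetic signals:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (even-length input)."""
    x = np.asarray(x, dtype=float)
    approx = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)
    detail = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(signals, levels=3):
    """Low-dimensional features: detail-band energies + final approx energy."""
    feats = []
    a = signals
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append((d ** 2).mean(axis=-1))
    feats.append((a ** 2).mean(axis=-1))
    return np.stack(feats, axis=-1)

def knn_predict(Ztr, ytr, Zte, k=5):
    d2 = ((Zte[:, None, :] - Ztr[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return np.array([np.bincount(ytr[idx]).argmax() for idx in nearest])

rng = np.random.default_rng(3)
n, length = 200, 64
y = rng.integers(0, 2, size=n)
t = np.arange(length)
# Two synthetic signal classes differing only in frequency content.
signals = np.where(y[:, None] == 0,
                   np.sin(2 * np.pi * 2 * t / length),
                   np.sin(2 * np.pi * 8 * t / length))
signals = signals + 0.3 * rng.normal(size=(n, length))

Z = dwt_features(signals)                    # 64 samples -> 4 features each
pred = knn_predict(Z[:150], y[:150], Z[150:])
acc = np.mean(pred == y[150:])
```

The wavelet stage collapses each 64-sample window into 4 band energies, so the classifier operates on a drastically reduced input, mirroring the low-computation constraint the abstract emphasizes.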

  3. Evaluation of shoulder pathology: three-dimensional enhanced T1 high-resolution isotropic volume excitation MR vs two-dimensional fast spin echo T2 fat saturation MR.

    PubMed

    Park, H J; Lee, S Y; Kim, M S; Choi, S H; Chung, E C; Kook, S H; Kim, E

    2015-03-01

To evaluate the diagnostic accuracy of three-dimensional (3D) enhanced T1 high-resolution isotropic volume excitation (eTHRIVE) shoulder MR for the detection of rotator cuff tears, labral lesions and calcific tendonitis of the rotator cuff in comparison with two-dimensional (2D) fast spin echo T2 fat saturation (FS) MR. This retrospective study included 73 patients who underwent shoulder MRI using the eTHRIVE technique. Shoulder MR images were interpreted separately by two radiologists. They evaluated anatomic identification and image quality of the shoulder joint on routine MRI sequences (axial and oblique coronal T2 FS images) and compared them with the reformatted eTHRIVE images. The images were scored on a four-point scale (0, poor; 1, questionable; 2, adequate; 3, excellent) according to the degree of homogeneous and sufficient fat saturation to penetrate bone and soft tissue, visualization of the glenoid labrum and distinction of the supraspinatus tendon (SST). The diagnostic accuracy of eTHRIVE images compared with routine MRI sequences was evaluated in the setting of rotator cuff tears, glenoid labral injuries and calcific tendonitis of the SST. Fat saturation scores for eTHRIVE were significantly higher than those of the T2 FS for both radiologists. The sensitivity and accuracy of the T2 FS in diagnosing rotator cuff tears were >90%, whereas the sensitivity and accuracy of the eTHRIVE method were significantly lower. The sensitivity, specificity and accuracy of both images in diagnosing labral injuries and calcific tendonitis were similar and showed no significant differences. The specificity of both images for the diagnosis of labral injuries and calcific tendonitis was higher than the sensitivities. The accuracy of 3D eTHRIVE imaging was comparable to that of 2D FSE T2 FS for the diagnosis of glenoid labral injury and calcific tendonitis of SST. The 3D eTHRIVE technique was superior to 2D FSE T2 FS in terms of fat saturation.
Overall, 3D eTHRIVE was inferior to T2 FS in the evaluation of rotator cuff tears because of poor contrast between joint fluid and tendons.

  4. Two-dimensional measures of accuracy in navigational systems

    DOT National Transportation Integrated Search

    1987-03-31

    Two-dimensional measures generally used to depict the accuracy of radiolocation and navigation systems are described in the report. Application to the NAVSTAR Global Positioning System (GPS) is considered, with a number of geometric illustrations.

  5. OPTOTRAK: at last a system with resolution of 10 μm (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Crouch, David G.; Kehl, L.; Krist, J. R.

    1990-08-01

Northern Digital's first active-marker point measurement system, the WATSMART, was begun in 1983. Development ended in 1985 with the manufacture of a highly accurate system, which achieved 0.15 to 0.25 mm accuracies in three dimensions within a 0.75-meter cube. Further improvements in accuracy were rendered meaningless, however, by a surplus-light problem, somewhat incorrectly known as "the reflection problem", which also presented a great obstacle to usability. In 1985, development of a new system to overcome this problem was begun. The advantages and disadvantages of active versus passive markers were considered. The implications of using a CCD device as the imaging element in a precision measurement device were analyzed, as were device characteristics such as dynamic range, peak readout noise and charge transfer efficiency. A new type of lens was also designed. The end result, in 1988, was the first OPTOTRAK system. This system produces three-dimensional data in real time and is not at all affected by reflections. Accuracies of 30 microns have been achieved in a 1-meter volume. Each two-dimensional camera actually has two separate, one-dimensional CCD elements and two separate anamorphic lenses. It can locate a point from 1-8 meters away with a resolution of 1 part in 64,000 and an accuracy of 1 part in 20,000 over the field of view.

  6. CFRP composite mirrors for space telescopes and their micro-dimensional stability

    NASA Astrophysics Data System (ADS)

    Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo

    2010-07-01

Ultra-lightweight, high-accuracy CFRP (carbon fiber reinforced plastics) mirrors for space telescopes were fabricated to demonstrate their feasibility for optical-wavelength applications. The CTE (coefficient of thermal expansion) of the all-CFRP sandwich panels was tailored to be smaller than 1×10⁻⁷/K. The surface accuracy of mirrors of 150 mm in diameter was 1.8 μm RMS as fabricated, and the surface smoothness was improved to 20 nm RMS by using a replica technique. Moisture expansion was considered the largest source of unpredictable surface-figure error; it affected not only homologous shape change but also out-of-plane distortion, especially in unsymmetrical compositions. Dimensional stability under moisture expansion was compared with a structural mathematical model.

  7. Figure correction of a metallic ellipsoidal neutron focusing mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya

    2015-06-15

An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor-axis and the major-axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.

  8. High-dimensional quantum cloning and applications to quantum hacking

    PubMed Central

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W.; Karimi, Ebrahim

    2017-01-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography. PMID:28168219

  10. Mapping High Dimensional Sparse Customer Requirements into Product Configurations

    NASA Astrophysics Data System (ADS)

    Jiao, Yao; Yang, Yu; Zhang, Hongshan

    2017-10-01

    Mapping customer requirements into product configurations is a crucial step in product design, yet customers, lacking domain knowledge, express their needs ambiguously and locally. The data mining process of customer requirements can therefore yield fragmentary information with high-dimensional sparsity, making the mapping procedure uncertain and complex. Expert Judgment is widely applied in this setting because it imposes no formal requirements for systematic or structured data; however, there are concerns about its repeatability and bias. In this study, an integrated method combining an adjusted Local Linear Embedding (LLE) and a Naïve Bayes (NB) classifier is proposed to map high-dimensional sparse customer requirements to product configurations. The integrated method adjusts classical LLE to preprocess the high-dimensional sparse dataset so that it satisfies the prerequisites of NB for classifying customer requirements into corresponding product configurations. Compared with Expert Judgment, the adjusted LLE with NB performs much better in a real-world Tablet PC design case, in both accuracy and robustness.
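The two-stage pipeline described in this record, dimensionality reduction followed by a Bayesian classifier, can be sketched with off-the-shelf components. This is a minimal illustration using classical LLE on synthetic data; the paper's adjusted LLE and its real customer-requirement set are not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 200 "customers", 50 requirement dimensions,
# 3 candidate product configurations.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           n_classes=3, random_state=0)

# Classical LLE embeds the high-dimensional requirements in 3 dimensions
# (the paper adjusts this step to cope with sparsity).
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3, random_state=0)
Z = lle.fit_transform(X)

# Naive Bayes then maps embedded requirements to configurations.
clf = GaussianNB().fit(Z, y)
acc = clf.score(Z, y)
```

The embedding step is what makes NB viable: NB's conditional-independence assumption is far less strained in a low-dimensional, non-sparse representation than in the raw requirement space.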

  11. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection.

    PubMed

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-09-01

    Impression materials often become contaminated during the impression procedure, so disinfection of impressions is advised to protect the dental team. Disinfection, however, can alter the dimensional accuracy of impression materials. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials treated with three disinfection methods: autoclave, chemical, and microwave. The impression materials used in the study were Dentsply Aquasil (addition silicone polyvinylsiloxane, syringe and putty), Zetaplus (condensation silicone, putty and light body), and Impregum Penta Soft (polyether). All impressions were made according to the manufacturers' instructions, and dimensional changes were measured before and after the disinfection procedures. Dentsply Aquasil showed the smallest dimensional change (-0.0046%) and Impregum Penta Soft the largest (-0.026%). All tested elastomeric impression materials showed some degree of dimensional change, but all disinfection procedures produced only minor changes, within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization.

  12. A High-Order Method Using Unstructured Grids for the Aeroacoustic Analysis of Realistic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Lockard, David P.

    1999-01-01

    A method for the prediction of acoustic scatter from complex geometries is presented. The discontinuous Galerkin method provides a framework for the development of a high-order method using unstructured grids. The method's compact form contributes to its accuracy and efficiency, and makes the method well suited for distributed-memory parallel computing platforms. Mesh refinement studies are presented to validate the expected convergence properties of the method, and to establish the absolute levels of error one can expect at a given level of resolution. For a two-dimensional shear layer instability wave and for three-dimensional wave propagation, the method is demonstrated to be insensitive to mesh smoothness. Simulations of scatter from a two-dimensional slat configuration and a three-dimensional blended-wing-body demonstrate the capability of the method to efficiently treat realistic geometries.

  13. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    NASA Astrophysics Data System (ADS)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios are widely used in fields including the aerospace and defense industries, but the dimensional measurement of these micro parts remains a challenge in precision measurement and instrumentation. To address this challenge, several probes for precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes based on spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation), and fiber Bragg gratings (FBG) are described in detail. After introducing the sensing principles, the advantages and disadvantages of each probe are analyzed. Several approaches are proposed to improve the performance of these probes: a two-dimensional orthogonal path arrangement enhances the dimensional measurement capability of MFL-collimation probes, while a high-resolution, high-response-speed interrogation method based on differentiation improves the accuracy and dynamic characteristics of the FBG probes. Experiments focusing on the characteristics of these special structural fiber probes are presented, and engineering applications demonstrate their practicality. To improve the accuracy and real-time performance of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes was thus verified through both analysis and experiments.

  14. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  15. A highly accurate symmetric optical flow based high-dimensional nonlinear spatial normalization of brain images.

    PubMed

    Wen, Ying; Hou, Lili; He, Lianghua; Peterson, Bradley S; Xu, Dongrong

    2015-05-01

    Spatial normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional spatial normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model under the assumptions of intensity constancy and gradient-of-intensity constancy, with a discontinuity-preserving spatio-temporal smoothness constraint. Then, an efficient inverse-consistent optical flow is proposed with the aim of higher registration accuracy, where the flow is naturally symmetric. By employing a coarse-to-fine hierarchical strategy and Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Accuracy and predictability in use of AO three-dimensionally preformed titanium mesh plates for posttraumatic orbital reconstruction: a pilot study.

    PubMed

    Scolozzi, Paolo; Momjian, Armen; Heuberger, Joris; Andersen, Elene; Broome, Martin; Terzic, Andrej; Jaques, Bertrand

    2009-07-01

    The aim of this study was to prospectively evaluate the accuracy and predictability of new three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction. We analyzed the preoperative and postoperative clinical and radiologic data of 10 patients with isolated blow-out orbital fractures. Fracture locations were as follows: floor (N = 7; 70%), medial wall (N = 1; 10%), and floor/medial wall (N = 2; 20%). The floor fractures were exposed by a standard transconjunctival approach, whereas a combined transcaruncular-transconjunctival approach was used in patients with medial wall fractures. A three-dimensionally preformed AO titanium mesh plate (0.4 mm in thickness) was selected according to the size of the defect, measured on the preoperative computed tomographic (CT) scan, and fixed at the inferior orbital rim with 1 or 2 screws. The accuracy of plate positioning in the reconstructed orbit was assessed on the postoperative CT scan. Coronal CT slices were used to measure bony orbital volume with OsiriX Medical Image software, and reconstructed versus uninjured orbital volumes were statistically compared. Nine patients (90%) had a successful treatment outcome without complications. One patient (10%) developed a mechanical limitation of upward gaze with resulting handicapping diplopia requiring hardware removal. Postoperative orbital CT showed anatomic three-dimensional placement of the orbital mesh plates in all patients. The volume of the reconstructed orbit matched that of the contralateral uninjured orbit to within 2.5 cm³, with no significant difference in volume between the reconstructed and uninjured orbits. This preliminary study demonstrated that three-dimensionally preformed AO titanium mesh plates for posttraumatic orbital wall reconstruction result in (1) a high rate of success with an acceptable rate of major clinical complications (10%) and (2) anatomic restoration of the bony orbital contour and a volume that closely approximates that of the contralateral uninjured orbit.

  17. On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy

    NASA Astrophysics Data System (ADS)

    Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.

    2017-10-01

    High dose rate brachytherapy requires frequent verification of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions, which are well represented by α-stable distributions. The newly proposed data analysis provides dwell-position deviations with high accuracy and offers a convenient visualization of the actual shapes of the catheters that guide the radiation source during treatment.
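Classical multi-dimensional scaling, the transformation this record builds on, recovers point coordinates (up to rotation and translation) from pairwise distances alone, which is what allows dwell positions to be estimated without an external reference. A minimal numpy sketch on a toy straight "catheter" with 5 mm dwell spacing; the paper's electromagnetic-tracking data and α-stable modeling are not reproduced here:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: double-center the squared distance matrix and take
    the top-k eigenpairs to recover a k-dimensional embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical dwell positions along a straight catheter, 5 mm apart.
pts = np.array([[0.0], [5.0], [10.0], [15.0]])
D = np.abs(pts - pts.T)                          # pairwise distances only
X = classical_mds(D, k=1)
gaps = np.abs(np.diff(X[:, 0]))                  # recovered spacings, ~5 mm
```

Because only distances enter, the recovered coordinates match the true geometry up to sign and a global shift, which is exactly the "no external reference" property.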

  18. Phase-measuring laser holographic interferometer for use in high speed flows

    NASA Astrophysics Data System (ADS)

    Yanta, William J.; Spring, W. Charles, III; Gross, Kimberly Uhrich; McArthur, J. Craig

    Phase-measurement techniques have been applied to a dual-plate laser holographic interferometer (LHI). This interferometer has been used to determine the flowfield densities in a variety of two-dimensional and axisymmetric flows. In particular, LHI has been applied in three different experiments: flowfield measurements inside a two-dimensional scramjet inlet, flow over a blunt cone, and flow over an indented nose shape. Comparisons of experimentally determined densities with computational results indicate that, when phase-measurement techniques are used in conjunction with state-of-the-art image-processing instrumentation, holographic interferometry can be a diagnostic tool with high resolution, high accuracy, and rapid data retrieval.

  19. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrarily high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
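The LED idea, that limited slopes vanish at local extrema so the reconstruction can create no new maxima or minima, can be illustrated with the classic minmod limiter. This is a generic one-dimensional sketch of the principle, not Jameson's SLIP or CUSP formulation:

```python
import numpy as np

def minmod(a, b):
    """Limited slope: returns 0 when a and b disagree in sign (a local
    extremum), otherwise the smaller-magnitude slope -- so the limited
    reconstruction raises no maxima and lowers no minima."""
    return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

u = np.array([0.0, 0.2, 1.0, 1.4, 1.5])   # cell averages in a steep gradient
d = np.diff(u)                             # one-sided differences
slopes = minmod(d[:-1], d[1:])             # limited slopes for interior cells
```

Near a discontinuity the smaller one-sided difference wins, and at an extremum (where the two differences have opposite signs) the slope is clipped to zero, which is exactly the TVD/LED property in 1-D.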

  20. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    PubMed

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions consider only the dimensions close to the query when computing the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering over high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED, we observe improvements in kNN classification accuracy over traditional distance functions.
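Equi-depth (equal-frequency) quantization, the static building block behind QED, can be sketched in a few lines of numpy. The query-dependent variant and the bit-sliced index itself are not reproduced here:

```python
import numpy as np

def equi_depth_edges(values, n_bins):
    """Equal-frequency bin edges: every bin receives roughly the same
    number of points, unlike equal-width binning on skewed data."""
    return np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)                       # heavily skewed feature
edges = equi_depth_edges(x, 8)
codes = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, 7)
counts = np.bincount(codes, minlength=8)             # ~125 points per bin
```

On skewed data, equal-width bins would leave most codes unused; equi-depth binning keeps every quantization level equally informative, which is why the paper builds its query-dependent scheme on it.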

  1. Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM

    NASA Technical Reports Server (NTRS)

    Martin, Thomas J.; Dulikravich, George S.

    1993-01-01

    A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.

  2. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
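The core ENO ingredient, choosing cell by cell the interpolation stencil on the smoother side of a discontinuity, can be illustrated in one dimension. This is a simplified sketch using undivided differences on an interior cell, not the full two-dimensional finite-volume scheme of the record:

```python
import numpy as np

def eno_stencil(u, i, k):
    """Left index of the k-cell ENO stencil for interior cell i: grow the
    stencil one cell at a time toward the side whose undivided difference
    is smaller in magnitude (the smoother side)."""
    left = i
    for m in range(1, k):
        d_left = np.diff(u[left - 1:left + m], n=m)[0]    # stencil extended left
        d_right = np.diff(u[left:left + m + 1], n=m)[0]   # stencil extended right
        if abs(d_left) < abs(d_right):
            left -= 1
    return left

u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # jump between cells 2 and 3
s2 = eno_stencil(u, 2, 3)                       # stencil {0, 1, 2}
s3 = eno_stencil(u, 3, 3)                       # stencil {3, 4, 5}
```

Both chosen stencils avoid crossing the jump, which is what keeps the high-order reconstruction essentially non-oscillatory.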

  3. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well-equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework.
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
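The decomposition idea, replacing one multidimensional interpolation with independent 1-D passes, can be sketched with plain linear interpolation standing in for the paper's registration-based 1-D control grid interpolator:

```python
import numpy as np

def resize_separable(img, new_h, new_w):
    """Resize a 2-D array by two independent 1-D linear interpolation
    passes (rows, then columns) -- the decomposition idea, with np.interp
    standing in for a more sophisticated 1-D interpolator."""
    h, w = img.shape
    xs = np.linspace(0, w - 1, new_w)
    rows = np.vstack([np.interp(xs, np.arange(w), r) for r in img])
    ys = np.linspace(0, h - 1, new_h)
    return np.vstack([np.interp(ys, np.arange(h), c) for c in rows.T]).T

img = np.arange(16, dtype=float).reshape(4, 4)
big = resize_separable(img, 7, 7)
```

Because each pass is a self-contained 1-D problem, the 1-D interpolator can be swapped for a smarter (e.g., registration-based) one without touching the surrounding structure, which is the flexibility DMCGI exploits.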

  4. Effects of chemical disinfectant solutions on the stability and accuracy of the dental impression complex.

    PubMed

    Rios, M P; Morgano, S M; Stein, R S; Rose, L

    1996-10-01

    Currently available impression materials were not designed for disinfection or sterilization, and it is conceivable that disinfectants may adversely affect impressions. This study evaluated the accuracy and dimensional stability of polyether (Permadyne/Impregum) and polyvinyl siloxane (Express) impression materials retained by their adhesives in two different acrylic resin tray designs (perforated and nonperforated) when the materials were immersed for either 30 or 60 minutes in three high-level disinfectants. Distilled water and no solution served as controls. A stainless steel test analog similar to ADA specification No. 19 was used. A total of 400 impressions were made with all combinations of impression materials, tray designs, disinfectant, and soaking times. Samples were evaluated microscopically before and after immersion and 48 hours after soaking. Results indicated that these two impression materials were dimensionally stable. Because the results emphasized the stability and accuracy of the impression complex under various conditions, dentists can perform disinfection procedures similar to the protocol of this study without concern for clinically significant distortion of the impression.

  5. Information Gain Based Dimensionality Selection for Classifying Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic-algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
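The information gain that drives the mutation probabilities is the standard quantity from decision-tree learning: the reduction in class entropy obtained by conditioning on a feature. A stdlib-only sketch on toy data (the paper's genetic algorithm is not reproduced here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature):
    """IG(class; feature) = H(class) - H(class | feature)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for l, f in zip(labels, feature) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy example: a term that perfectly separates two balanced classes
# carries IG = H(class) = 1 bit.
labels = ["spam", "spam", "ham", "ham"]
term   = [1, 1, 0, 0]                # term present only in the spam docs
ig = information_gain(labels, term)  # → 1.0
```

Dimensions with higher IG are more informative about the class, so in the paper's scheme they are less likely to be mutated away.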

  6. Accuracy of three-dimensional seismic ground response analysis in time domain using nonlinear numerical simulations

    NASA Astrophysics Data System (ADS)

    Liang, Fayun; Chen, Haibing; Huang, Maosong

    2017-07-01

    To provide appropriate use of nonlinear ground response analysis in engineering practice, a three-dimensional soil column with a distributed mass system was implemented for time-domain numerical analysis on the OpenSees simulation platform. A standard mesh for the three-dimensional soil column was suggested to satisfy the specified maximum frequency. Where the soil properties differed significantly, the layered soil column was divided into multiple sub-soils with different viscous damping matrices according to their shear velocities. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil-column finite element method was verified by dynamic centrifuge model testing under different peak earthquake accelerations. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study, and the accuracy and efficiency of the three-dimensional seismic ground response analysis can meet the requirements of engineering practice.

  7. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    NASA Astrophysics Data System (ADS)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not remove irrelevant features, which leads to a major problem in decision tree classification: over-fitting resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is an important issue in classification models; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework simplifies high-dimensional data into low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant, non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. From experiments conducted on the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework in terms of accuracy, specificity, and precision. Experimental results show that the proposed framework enhances classification accuracy, achieving a 90.70% accuracy rate.
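A minimal sketch of the PCA-then-tree pipeline using scikit-learn on synthetic data; the entropy criterion approximates C4.5's information-gain splitting, and the real UCI cervical cancer data set is not bundled here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in shaped like the paper's data: 858 rows, 36 attributes.
X, y = make_classification(n_samples=858, n_features=36, n_informative=10,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# PCA removes correlated/irrelevant directions before the tree is grown,
# which is the over-fitting mitigation the paper describes.
pca = PCA(n_components=10).fit(Xtr)
tree = DecisionTreeClassifier(criterion="entropy",  # entropy ~ C4.5's gain
                              random_state=0)
tree.fit(pca.transform(Xtr), ytr)
acc = tree.score(pca.transform(Xte), yte)
```

Fitting PCA on the training split only, then transforming the test split, keeps the reduction step from leaking test information into the classifier.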

  8. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven

    During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
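The tf-idf weighting that produces the high-dimensional document point clouds can be sketched directly. This is the classic formulation on a toy corpus; the paper's topological projection is not reproduced here:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Classic tf-idf: term frequency times inverse document frequency.
    Each document becomes a sparse vector in a space with one dimension
    per distinct term -- hence the very high dimensionality."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    return [{t: (c / len(d)) * math.log(n / df[t])
             for t, c in Counter(d).items()} for d in docs]

# Hypothetical toy corpus of tokenized documents.
docs = [["neutron", "mirror", "focus"],
        ["neutron", "beam"],
        ["document", "topic", "topic"]]
vecs = tf_idf(docs)
# "topic" is frequent in doc 2 and appears nowhere else, so it gets the
# largest weight; "neutron" appears in two of three docs, so its idf is low.
```

With one dimension per vocabulary term, even a modest corpus yields vectors with thousands of dimensions, which is exactly the regime where the paper shows plain distance-based projections break down.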

  9. Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation

    NASA Astrophysics Data System (ADS)

    Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter

    1996-01-01

    The development of three-dimensional automated positioning devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics, and advanced laser optics depends strongly on the ability of such a device: first, to be positioned automatically, with high accuracy and reliability, by means of user-friendly interface techniques; second, to be compact; and third, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners based on stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact, computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of better than 1 micrometer (the accuracy of the end position is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.

  10. High-Order Moving Overlapping Grid Methodology in a Spectral Element Method

    NASA Astrophysics Data System (ADS)

    Merrill, Brandon E.

    A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for the solution of unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in the aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircraft, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. It employs semi-implicit spectral element discretization of the equations in each subdomain and explicit treatment of subdomain interfaces, with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation; it requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for the prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. The stationary overlapping mesh methodology is further validated to assess the influence of long integration times and inflow-outflow global boundary conditions on performance. In a benchmark of fully-developed turbulent pipe flow, the turbulence statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of a two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data. Scaling tests, with both methodologies, show near-linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is utilized to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angles of attack of 5.6° and 25° with reduced frequency k=0.16. Results are contrasted with a subsequent DNS of the same oscillating airfoil in a turbulent wake generated by a stationary upstream cylinder.

  11. Optimizing classification performance in an object-based very-high-resolution land use-land cover urban application

    NASA Astrophysics Data System (ADS)

    Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore

    2017-10-01

    This study evaluates the impact of three Feature Selection (FS) algorithms in an Object-Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation-Based Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracies of the SVM and KNN classifiers are the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance with each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and interpretation effort.
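    To illustrate the Mean Decrease in Accuracy idea named above, here is a small sketch using a toy 1-nearest-neighbour classifier and a deterministic column roll in place of a random permutation (the data and classifier are purely illustrative, not those of the study):

```python
def nn_predict(train_X, train_y, x):
    # 1-nearest-neighbour prediction with squared Euclidean distance
    d = [(sum((a - b) ** 2 for a, b in zip(row, x)), y)
         for row, y in zip(train_X, train_y)]
    return min(d)[1]

def accuracy(train_X, train_y, test_X, test_y):
    hits = sum(nn_predict(train_X, train_y, x) == y
               for x, y in zip(test_X, test_y))
    return hits / len(test_y)

def mean_decrease_in_accuracy(train_X, train_y, test_X, test_y):
    base = accuracy(train_X, train_y, test_X, test_y)
    scores = []
    for j in range(len(test_X[0])):
        # break the feature/label association by rolling column j
        col = [row[j] for row in test_X]
        col = col[1:] + col[:1]
        permuted = [row[:j] + [c] + row[j + 1:] for row, c in zip(test_X, col)]
        scores.append(base - accuracy(train_X, train_y, permuted, test_y))
    return scores

# feature 0 separates the classes; feature 1 is constant noise
train_X = [[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [0.9, 5.0]]
train_y = [0, 0, 1, 1]
test_X = [[0.05, 5.0], [0.95, 5.0], [0.1, 5.0], [0.9, 5.0]]
test_y = [0, 1, 0, 1]
mda = mean_decrease_in_accuracy(train_X, train_y, test_X, test_y)
# scrambling the informative feature destroys accuracy; the noise feature does not
```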

  12. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection

    PubMed Central

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-01-01

    Background: Impression materials often become contaminated with infectious agents during the impression procedure. Hence, disinfection of impression materials with various disinfectants is advised to protect the dental team. Disinfection can, however, alter the dimensional accuracy of impression materials. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfection methods: autoclave, chemical, and microwave. Materials and Methods: The impression materials used for the study were Dentsply Aquasil (addition silicone polyvinylsiloxane, syringe and putty), Zetaplus (condensation silicone, putty and light body), and Impregum Penta Soft (polyether). All impressions were made according to the manufacturer's instructions. Dimensional changes were measured before and after the different disinfection procedures. Result: Dentsply Aquasil showed the smallest dimensional change (−0.0046%) and Impregum Penta Soft the highest linear dimensional change (−0.026%). All the tested elastomeric impression materials showed some degree of dimensional change. Conclusion: The present study showed that all the disinfection procedures produce minor dimensional changes in the impression materials; however, these were within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization. PMID:26435611
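    The percentage changes quoted above are conventionally computed as the linear change relative to the pre-disinfection reference dimension; a one-line sketch with purely hypothetical readings (not the study's measurements):

```python
def dimensional_change_percent(reference, measured):
    """Linear dimensional change of a measured length vs. its reference."""
    return (measured - reference) / reference * 100.0

# hypothetical die lengths in mm, chosen only to exercise the formula
change = dimensional_change_percent(25.000, 24.9885)  # about -0.046 %
```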

  13. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
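    The control-variate idea behind the second algorithm can be sketched in one dimension: a cheap reduced model g with an analytically known mean absorbs most of the variance of the expensive model f. The toy functions below are of our choosing, not the paper's:

```python
import math
import random

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)
n = 2000
xs = [random.random() for _ in range(n)]        # uniform inputs on [0, 1]

f_vals = [x ** 2 + 0.1 * math.sin(5 * x) for x in xs]  # "expensive" model
g_vals = [x ** 2 for x in xs]                          # reduced model
g_mean = 1.0 / 3.0                                     # E[x^2] on U(0,1), known exactly

mc_estimate = sum(f_vals) / n                          # plain Monte Carlo
cv_estimate = sum(f - g for f, g in zip(f_vals, g_vals)) / n + g_mean
# the residual f - g carries far less variance than f itself,
# so the control-variate estimate converges much faster at the same n
```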

  14. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for the solution of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  15. A VERSATILE SHARP INTERFACE IMMERSED BOUNDARY METHOD FOR INCOMPRESSIBLE FLOWS WITH COMPLEX BOUNDARIES

    PubMed Central

    Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.

    2010-01-01

    A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary, and it is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements, while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flows past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally, two cases inspired by biology with highly complex three-dimensional bodies are simulated in order to demonstrate the versatility of the method. PMID:20216919

  16. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry (PMP) and moiré methods have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence, or 2π-ambiguity, problem. Although a sensing method that combines well-known stereo vision with the PMP technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters related to the measurement performance, and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
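    The 2π-ambiguity arises because a measured phase is only known modulo 2π. The following minimal 1-D sketch shows the ambiguity and the classical neighbour-difference unwrapping fix; the paper itself resolves the ambiguity via stereo matching and dynamic programming, not this simple scheme:

```python
import math

def wrap(phi):
    """Wrap a phase into (-pi, pi] -- what a fringe sensor actually observes."""
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap(wrapped):
    """Classic 1-D unwrapping: assumes adjacent samples differ by less than pi."""
    out = [wrapped[0]]
    for w in wrapped[1:]:
        step = wrap(w - out[-1])   # wrapped phase increment between neighbours
        out.append(out[-1] + step)
    return out

true_phase = [0.1 * k for k in range(100)]   # smooth ramp up to 9.9 rad
observed = [wrap(p) for p in true_phase]     # collapsed into (-pi, pi]
recovered = unwrap(observed)                 # ramp restored from local steps
```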

  17. Assessing the influence of flight parameters, interferometric processing, slope and canopy density on the accuracy of X-band IFSAR-derived forest canopy height models.

    Treesearch

    H.-E. Andersen; R.J. McGaughey; S.E. Reutebuch

    2008-01-01

    High resolution, active remote sensing technologies, such as interferometric synthetic aperture radar (IFSAR) and airborne laser scanning (LIDAR) have the capability to provide forest managers with direct measurements of 3-dimensional forest canopy surface structure. Although LIDAR systems can provide highly accurate measurements of canopy and terrain surfaces, high-...

  18. Chondromalacia of the knee: evaluation with a fat-suppression three-dimensional SPGR imaging after intravenous contrast injection.

    PubMed

    Suh, J S; Cho, J H; Shin, K H; Kim, S J

    1996-01-01

    Twenty-one MRI studies with a fat-suppression three-dimensional spoiled gradient-recalled echo in a steady state (3D SPGR) pulse sequence after intravenous contrast injection were evaluated to assess the accuracy in depicting chondromalacia of the knee. On the basis of MR images, chondromalacia and its grade were determined in each of five articular cartilage regions (total, 105 regions) and then the results were compared to arthroscopic findings. The sensitivity, specificity, and accuracy of MRI were 70%, 99%, and 93%, respectively. MR images depicted 7 of 11 lesions of arthroscopic grade 1 or 2 chondromalacia, and seven of nine lesions of arthroscopic grade 3 or 4 chondromalacia. The cartilage abnormalities in all cases appeared as focal lesions with high signal intensity. Intravenous contrast-injection, fat-suppression 3D SPGR imaging showed high specificity in excluding cartilage abnormalities and may be considered as an alternative to intra-articular MR arthrography when chondromalacia is suspected.

  19. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    PubMed Central

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses precise information about local propagation speeds to avoid excessive numerical diffusion. Second-order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems is carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from more sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067

  20. Dimensional Precision Research of Wax Molding Rapid Prototyping based on Droplet Injection

    NASA Astrophysics Data System (ADS)

    Mingji, Huang; Geng, Wu; yan, Shan

    2017-11-01

    The traditional casting process is complex, and the mold is essential to the product: mold quality directly affects product quality. Rapid prototyping by 3D printing can be used to produce the mold prototype. The wax model has the advantages of high speed, low cost and the ability to form complex structures. Using orthogonal experiments as the main method, each factor affecting dimensional precision is analyzed. The purpose is to obtain the optimal process parameters and to improve the dimensional accuracy of production based on droplet injection molding.

  1. An Improved Treatment of External Boundary for Three-Dimensional Flow Computations

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.; Vatsa, Veer N.

    1997-01-01

    We present an innovative numerical approach for setting highly accurate nonlocal boundary conditions at the external computational boundaries when calculating three-dimensional compressible viscous flows over finite bodies. The approach is based on application of the difference potentials method by V. S. Ryaben'kii and extends our previous technique developed for the two-dimensional case. The new boundary conditions methodology has been successfully combined with the NASA-developed code TLNS3D and used for the analysis of wing-shaped configurations in subsonic and transonic flow regimes. As demonstrated by the computational experiments, the improved external boundary conditions allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable speedup of convergence of the multigrid iterations.

  2. [Advances in the research of application of collagen in three-dimensional bioprinting].

    PubMed

    Li, H H; Luo, P F; Sheng, J J; Liu, G C; Zhu, S H

    2016-10-20

    As a new industrial technology characterized by high precision and accuracy, three-dimensional bioprinting is increasingly widely applied in the field of medical research. Collagen is one of the most common components of tissue, and it has good properties as a biological material. There are many reports of using collagen as the main component of the "ink" in three-dimensional bioprinting. However, the collagen applied is mainly from heterogeneous sources, which may cause some problems in application. Recombinant human collagen can be obtained from microbial fermentation by transgenic technology, but more research should be done to confirm its properties. This article reviews the advances in the research of collagen and its biological application in three-dimensional bioprinting.

  3. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust to the degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
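    The core RELIEF update underlying LHR is compact enough to sketch: each sample rewards features that are close to its nearest same-class neighbour ("hit") and far from its nearest other-class neighbour ("miss"). This is a bare-bones illustration on toy data, not the LHR algorithm itself:

```python
def relief_weights(X, y, n_features):
    """Basic RELIEF: accumulate |x - miss| - |x - hit| per feature."""
    w = [0.0] * n_features
    for i, (xi, yi) in enumerate(zip(X, y)):
        def dist(xj):
            return sum((a - b) ** 2 for a, b in zip(xi, xj))
        hits = [X[j] for j in range(len(X)) if j != i and y[j] == yi]
        misses = [X[j] for j in range(len(X)) if y[j] != yi]
        hit = min(hits, key=dist)      # nearest same-class neighbour
        miss = min(misses, key=dist)   # nearest other-class neighbour
        for k in range(n_features):
            w[k] += abs(xi[k] - miss[k]) - abs(xi[k] - hit[k])
    return w

# feature 0 is informative, feature 1 is irrelevant
X = [[0.0, 0.3], [0.1, 0.9], [0.2, 0.6], [1.0, 0.4], [0.9, 0.8], [1.1, 0.5]]
y = [0, 0, 0, 1, 1, 1]
w = relief_weights(X, y, 2)
# the informative feature accumulates a large positive weight
```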

  4. Neural correlates of learning in an electrocorticographic motor-imagery brain-computer interface

    PubMed Central

    Blakely, Tim M.; Miller, Kai J.; Rao, Rajesh P. N.; Ojemann, Jeffrey G.

    2014-01-01

    Human subjects can learn to control a one-dimensional electrocorticographic (ECoG) brain-computer interface (BCI) using modulation of primary motor (M1) high-gamma activity (signal power in the 75–200 Hz range). However, the stability and dynamics of the signals over the course of new BCI skill acquisition have not been investigated. In this study, we report three characteristic periods in the evolution of the high-gamma control signal during BCI training: an initial period of low task accuracy with correspondingly low power modulation in the gamma spectrum, followed by a second period of improved task accuracy with increasing average power separation between activity and rest, and a final period of high task accuracy with stable (or decreasing) power separation and decreasing trial-to-trial variance. These findings may have implications for the design and implementation of BCI control algorithms. PMID:25599079

  5. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures is presented, and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.

  6. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.

  7. Development of Moire machine vision

    NASA Technical Reports Server (NTRS)

    Harding, Kevin G.

    1987-01-01

    Three dimensional perception is essential to the development of versatile robotics systems, both to handle complex manufacturing tasks in future factories and to provide the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation technique is developed to be capable of full-field range measurement and three dimensional scene analysis.

  8. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    PubMed

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.

  9. Development of Moire machine vision

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.

    1987-10-01

    Three dimensional perception is essential to the development of versatile robotics systems, both to handle complex manufacturing tasks in future factories and to provide the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation technique is developed to be capable of full-field range measurement and three dimensional scene analysis.

  10. Dimensionality of brain networks linked to life-long individual differences in self-control.

    PubMed

    Berman, Marc G; Yourganov, Grigori; Askren, Mary K; Ayduk, Ozlem; Casey, B J; Gotlib, Ian H; Kross, Ethan; McIntosh, Anthony R; Strother, Stephen; Wilson, Nicole L; Zayas, Vivian; Mischel, Walter; Shoda, Yuichi; Jonides, John

    2013-01-01

    The ability to delay gratification in childhood has been linked to positive outcomes in adolescence and adulthood. Here we examine a subsample of participants from a seminal longitudinal study of self-control throughout a subject's life span. Self-control, first studied in children at age 4 years, is now re-examined 40 years later, on a task that requires control over the contents of working memory. We examine whether patterns of brain activation on this task can reliably distinguish participants with consistently low and high self-control abilities (low versus high delayers). We find that low delayers recruit significantly higher-dimensional neural networks when performing the task compared with high delayers. High delayers are also more homogeneous as a group in their neural patterns compared with low delayers. From these brain patterns, we can predict with 71% accuracy whether a participant is a high or low delayer. The present results suggest that the dimensionality of neural networks is a biological predictor of self-control abilities.

  11. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
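    The one-dimensional ingredient of CTP sampling is the set of zeros of the Chebyshev polynomial T_n, x_k = cos((2k-1)π/(2n)); the tensor-product grid is then the d-fold Cartesian product of these nodes. A minimal sketch:

```python
import math
from itertools import product

def chebyshev_zeros(n):
    """Zeros of the Chebyshev polynomial T_n on [-1, 1]."""
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

def ctp_samples(n, dim):
    """Chebyshev tensor product (CTP) sampling: d-fold grid of the 1-D zeros."""
    nodes = chebyshev_zeros(n)
    return list(product(nodes, repeat=dim))

nodes = chebyshev_zeros(4)   # 4 nodes, clustered toward the interval ends
grid = ctp_samples(4, 2)     # 16 two-dimensional samples
```

    The clustering of the nodes near ±1 is what controls the interpolation error of the resulting polynomial metamodel.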

  12. Complex wave fields in the interacting one-dimensional Bose gas

    NASA Astrophysics Data System (ADS)

    Pietraszewicz, J.; Deuar, P.

    2018-05-01

    We study the temperature regimes of the one-dimensional interacting gas to determine when the matter wave (c-field) theory is, in fact, correct and usable. The judgment is made by investigating the level of discrepancy in many observables at once in comparison to the exact Yang-Yang theory. We also determine what cutoff maximizes the accuracy of such an approach. Results are given in terms of a bound on accuracy, as well as an optimal cutoff prescription. For a wide range of temperatures the optimal cutoff is independent of density or interaction strength and so its temperature-dependent form is suitable for many cloud shapes and, possibly, basis choices. However, this best global choice is higher in energy than most prior determinations. The high value is needed to obtain the correct kinetic energy, but does not detrimentally affect other observables.

  13. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The highly accurate classification ability of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal need to be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor when a short signal, such as a hydraulic impact signal, is concerned. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements of this new EEMD are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme-point center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnosis ability of the improved EEMD+SVM method is compared with that of the EEMD+SVM and EMD+SVM methods; its diagnosis accuracy is distinctly higher than that of the other two methods whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM has high ability for the diagnosis of hydraulic impact faults.
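    The energy eigenvector mentioned above is commonly formed from the per-component energies of the decomposition, normalized to unit sum; a sketch with synthetic stand-ins for the decomposed components (the improved EEMD itself is not reproduced here):

```python
import math

def energy_eigenvector(components):
    """Normalized energies of decomposed components (e.g. IMFs)."""
    energies = [sum(s * s for s in comp) for comp in components]
    total = sum(energies)
    return [e / total for e in energies]

# synthetic stand-ins for two decomposed components of a short impact signal
t = [k / 100.0 for k in range(100)]
imf1 = [math.exp(-5 * x) * math.sin(40 * x) for x in t]  # strong, fast-decaying
imf2 = [0.1 * math.sin(5 * x) for x in t]                # weak, slow
vec = energy_eigenvector([imf1, imf2])
# the dominant component carries most of the normalized energy
```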

  14. Accuracy of Cup Positioning With the Computed Tomography-Based Two-dimensional to Three-Dimensional Matched Navigation System: A Prospective, Randomized Controlled Study.

    PubMed

    Yamada, Kazuki; Endo, Hirosuke; Tetsunaga, Tomonori; Miyake, Takamasa; Sanki, Tomoaki; Ozaki, Toshifumi

    2018-01-01

    The accuracy of various navigation systems used for total hip arthroplasty has been described, but no publications reported the accuracy of cup orientation in computed tomography (CT)-based 2D-3D (two-dimensional to three-dimensional) matched navigation. In a prospective, randomized controlled study, 80 hips including 44 with developmental dysplasia of the hips were divided into a CT-based 2D-3D matched navigation group (2D-3D group) and a paired-point matched navigation group (PPM group). The accuracy of cup orientation (absolute difference between the intraoperative record and the postoperative measurement) was compared between groups. Additionally, multiple logistic regression analysis was performed to evaluate patient factors affecting the accuracy of cup orientation in each navigation. The accuracy of cup inclination was 2.5° ± 2.2° in the 2D-3D group and 4.6° ± 3.3° in the PPM group (P = .0016). The accuracy of cup anteversion was 2.3° ± 1.7° in the 2D-3D group and 4.4° ± 3.3° in the PPM group (P = .0009). In the PPM group, the presence of roof osteophytes decreased the accuracy of cup inclination (odds ratio 8.27, P = .0140) and the absolute value of pelvic tilt had a negative influence on the accuracy of cup anteversion (odds ratio 1.27, P = .0222). In the 2D-3D group, patient factors had no effect on the accuracy of cup orientation. The accuracy of cup positioning in CT-based 2D-3D matched navigation was better than in paired-point matched navigation, and was not affected by patient factors. It is a useful system for even severely deformed pelvises such as developmental dysplasia of the hips. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
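The accuracy metric used above (3D reconstruction error against a known ground truth) reduces to a mean Euclidean distance between paired points; a sketch with invented coordinates:

```python
import numpy as np

def reconstruction_error(reconstructed, ground_truth):
    """Mean Euclidean distance (mm) between paired 3D points."""
    d = np.linalg.norm(reconstructed - ground_truth, axis=1)
    return d.mean()

# Invented ground-truth landmarks and reconstructed counterparts (mm).
truth = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
recon = truth + np.array([[0.5, 0.0, 0.0], [0.0, -0.5, 0.0], [0.0, 0.0, 0.5]])
err = reconstruction_error(recon, truth)
```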

  16. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. 
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single-physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, used, and compared with the KL approach and the brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). 
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
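A minimal sketch of the subspace idea underlying the dissertation: draw snapshots of a model response under parameter perturbations, extract the dominant directions with an SVD (the discrete Karhunen-Loeve/POD construction), and work in the reduced coordinates. The toy "model" and the 99.99% energy threshold are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snap = 500, 40

# Synthetic "high-dimensional" responses that actually live near a
# 3-dimensional subspace plus small noise, mimicking a reducible physics code.
basis_true = rng.standard_normal((n_dof, 3))
snapshots = basis_true @ rng.standard_normal((3, n_snap))
snapshots += 1e-6 * rng.standard_normal((n_dof, n_snap))

# Discrete KL / POD: SVD of the snapshot matrix, keep the dominant modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes for 99.99% of the energy
Ur = U[:, :r]

# Any response is now handled via r coefficients instead of n_dof values.
coeffs = Ur.T @ snapshots                      # reduced coordinates, (r, n_snap)
recon = Ur @ coeffs
rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```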

  17. A heterodyne interferometer for high-performance industrial metrology

    NASA Astrophysics Data System (ADS)

    Schuldt, Thilo; Gohlke, Martin; Weise, Dennis; Johann, Ulrich; Peters, Achim; Braxmaier, Claus

    2008-11-01

    We developed a compact, fiber-coupled heterodyne interferometer for translation and tilt metrology. Noise levels below 5 pm/√Hz in translation and below 10 nrad/√Hz in tilt measurement, both for frequencies above 10^-2 Hz, were demonstrated in lab experiments. While this setup was developed with respect to the LISA (Laser Interferometer Space Antenna) space mission, current activities focus on its adaptation for the dimensional characterization of ultra-stable materials and for industrial metrology. The interferometer is used in high-accuracy dilatometry, measuring the coefficient of thermal expansion (CTE) of dimensionally highly stable materials such as carbon-fiber reinforced plastic (CFRP) and Zerodur. The facility offers the possibility to measure the CTE with an accuracy better than 10^-8/K. We are also developing a very compact and quasi-monolithic sensor head utilizing ultra-low-expansion glass material, which is the basis for a future space-qualifiable interferometer setup and serves as a prototype for a sensor head used in an industrial environment. For high-resolution 3D profilometry and surface property measurements (i.e., roughness, evenness, and roundness), a low-noise (≤1 nm/√Hz) actuator will be implemented which enables a scan of the measurement beam over the surface under investigation.
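The dilatometry relation behind the quoted CTE accuracy is simply alpha = (ΔL/L0)/ΔT; a sketch with invented numbers (not measured values from the paper) for a Zerodur-class sample:

```python
# Coefficient of thermal expansion from an interferometric length change.
# All numbers are illustrative assumptions.
L0 = 0.1           # sample length in metres
dL = 5e-11         # measured length change in metres (50 pm)
dT = 5.0           # temperature step in kelvin

alpha = (dL / L0) / dT   # CTE in 1/K
```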

  18. A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case

    NASA Astrophysics Data System (ADS)

    Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.

    2017-12-01

    In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.

  19. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measurement accuracy of the structured light vision sensor, a novel calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured in a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration-point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method. The hybrid of the pinhole model and the MLPNN therefore represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.
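The hybrid-model idea above (a parametric camera model plus a neural network for its residuals) can be sketched as follows. A plain linear least-squares mapping stands in for the RAC-calibrated pinhole model, and scikit-learn's MLPRegressor stands in for the MLPNN; the calibration data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic calibration points: 3D target coordinates -> 2D image pixels,
# with a mild nonlinearity standing in for lens distortion.
X3d = rng.uniform(-1, 1, size=(200, 3))
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0]])
uv = X3d @ A.T + 5.0 * np.sin(3.0 * X3d[:, :2])   # linear part + "distortion"

# Stage 1: linear (pinhole-like) model fitted by least squares.
H = np.linalg.lstsq(X3d, uv, rcond=None)[0]
resid = uv - X3d @ H

# Stage 2: an MLP learns the residual left over by the parametric model.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X3d, resid)

pred = X3d @ H + mlp.predict(X3d)                 # hybrid model prediction
rmse_linear = np.sqrt(np.mean(resid ** 2))
rmse_hybrid = np.sqrt(np.mean((uv - pred) ** 2))
```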

  20. Effect of the high-pitch mode in dual-source computed tomography on the accuracy of three-dimensional volumetry of solid pulmonary nodules: a phantom study.

    PubMed

    Hwang, Sung Ho; Oh, Yu-Whan; Ham, Soo-Youn; Kang, Eun-Young; Lee, Ki Yeol

    2015-01-01

    To evaluate the influence of high-pitch mode (HPM) in dual-source computed tomography (DSCT) on the accuracy of three-dimensional (3D) volumetry for solid pulmonary nodules, a lung phantom implanted with 45 solid pulmonary nodules (n = 15 for each of 4-mm, 6-mm, and 8-mm diameters) was scanned twice, first in conventional pitch mode (CPM) and then in HPM using DSCT. The relative percentage volume errors (RPEs) of 3D volumetry were compared between the HPM and CPM. In addition, the intermode volume variability (IVV) of 3D volumetry was calculated. In the measurement of the 6-mm and 8-mm nodules, there was no significant difference in RPE between the CPM and HPM (p > 0.05 for both; IVVs of 1.2 ± 0.9% and 1.7 ± 1.5%, respectively). In the measurement of the 4-mm nodules, the mean RPE in the HPM (35.1 ± 7.4%) was significantly greater (p < 0.01) than that in the CPM (18.4 ± 5.3%), with an IVV of 13.1 ± 6.6%. However, the IVVs were in an acceptable range (< 25%) regardless of nodule size. The accuracy of 3D volumetry with HPM for solid pulmonary nodules is comparable to that with CPM. However, the use of HPM may adversely affect the accuracy of 3D volumetry for smaller (< 5 mm in diameter) nodules.
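The two metrics used in this phantom study can be computed as follows; the formulas are the standard definitions (relative percentage error against the known phantom volume, and intermode variability between the paired CPM/HPM readings), and the example volumes are invented.

```python
def rpe(measured, true):
    """Relative percentage volume error (%) against the known phantom volume."""
    return 100.0 * abs(measured - true) / true

def ivv(v_cpm, v_hpm):
    """Intermode volume variability (%) between the two pitch modes."""
    return 100.0 * abs(v_cpm - v_hpm) / ((v_cpm + v_hpm) / 2.0)

true_vol = 268.1                 # e.g. an 8-mm sphere: 4/3*pi*r^3 in mm^3
v_cpm, v_hpm = 280.0, 285.0      # invented 3D-volumetry readings

e_cpm, e_hpm = rpe(v_cpm, true_vol), rpe(v_hpm, true_vol)
variability = ivv(v_cpm, v_hpm)  # < 25% is "acceptable" per the study
```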

  1. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
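The core finding (three dimensions suffice for the six reflective TM bands) rests on the cumulative-eigenvalue criterion of the discrete KL expansion. A sketch on synthetic 6-band pixels; the 99% threshold and the toy covariance structure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 6-band "TM" pixels whose variance is concentrated in 3 directions,
# mimicking the strong inter-band correlation of the reflective bands.
mixing = rng.standard_normal((6, 3))
pixels = rng.standard_normal((10000, 3)) @ mixing.T
pixels += 0.05 * rng.standard_normal((10000, 6))       # sensor noise

# Discrete KL expansion: eigendecomposition of the band covariance matrix.
cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

cum = np.cumsum(eigvals) / np.sum(eigvals)   # cumulative eigenvalue fraction
k = int(np.searchsorted(cum, 0.99)) + 1      # features needed for 99% variance
features = pixels @ eigvecs[:, :k]           # reduced-dimension feature space
```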

  2. Accuracy of neuro-navigated cranial screw placement using optical surface imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.

    2017-02-01

    Cranial neurosurgical procedures are especially delicate considering that the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, as even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted the paradigm to become the current standard of care for cranial neurosurgery. Intra-operative image-guided navigation systems are currently used to allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground-truth landmark to the intra-operative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of each screw was compared to its co-ordinates in the post-operative CT or MR images, thus quantifying cranial neurosurgical error.

  3. Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.

    PubMed

    Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon

    2014-10-01

    In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees (ARTBLA). 
One-way ANOVA found significant differences (P<.05) among all systems for Iy, Ry, Lx, Ly, and twisting. Generally, vertical linear displacements were less likely to reach the threshold of clinical detectability compared with anteroposterior or mediolateral linear displacements. The overall repositioning accuracy of DENSCR was comparable with 4 magnetic mounting systems (DENMAG, ARTBLU, ARTWHI, ARTBLA). DENCON exhibited the worst repositioning accuracy for Iy, Ry, Lx, Ly, and twisting. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
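The accuracy definition used above (standard deviation of repeated repositionings, per point and axis) reduces to a simple computation; the coordinates below are invented, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 repositionings of one cast: measured x/y/z of point I (micrometres),
# scattered around a nominal location -- invented data for illustration.
nominal = np.array([0.0, 12000.0, -3500.0])
trials = nominal + rng.normal(scale=[5.0, 24.0, 2.0], size=(10, 3))

# Repositioning accuracy per axis = standard deviation over the 10 trials
# (e.g. Ix, Iy, Iz in micrometres).
sd = trials.std(axis=0, ddof=1)
```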

  4. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which commonly manifests as overfitting when using high dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to limitations on the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global shifts of peaks due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: sparsity provides the dimensionality reduction that addresses the small-sample-size problem, while grouping mitigates the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusing spectral information with spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
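As a simplified stand-in for the fused lasso (which adds a total-variation term coupling adjacent bands to the plain L1 penalty used here), sparsity-driven band selection in the n &lt; p regime can be sketched with scikit-learn's Lasso; the spectra and biomass values are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_plots, n_bands = 60, 150            # fewer field plots than spectral bands
X = rng.standard_normal((n_plots, n_bands))
true_coef = np.zeros(n_bands)
true_coef[40:45] = 2.0                # biomass driven by one narrow band group
y = X @ true_coef + 0.1 * rng.standard_normal(n_plots)

# L1 penalty zeroes most coefficients -> implicit wavelength selection.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)   # indices of retained wavelengths
```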

  5. Simulation of range imaging-based estimation of respiratory lung motion. Influence of noise, signal dimensionality and sampling patterns.

    PubMed

    Wilms, M; Werner, R; Blendowski, M; Ortmüller, J; Handels, H

    2014-01-01

    A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
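The surrogate-signal idea above (learn a patient-specific regression from a multidimensional breathing signal to the internal motion) can be sketched with a multi-output ridge regression. The paper's framework is diffeomorphic and far more elaborate; this is only the correspondence-model skeleton, on synthetic data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Training phase: surrogate signals (e.g. range-image depths at k surface
# points) paired with internal motion parameters from 4D CT -- synthetic here.
n_phases, k_signal, n_motion = 30, 12, 6
W = rng.standard_normal((k_signal, n_motion))
signals = rng.standard_normal((n_phases, k_signal))
motion = signals @ W + 0.01 * rng.standard_normal((n_phases, n_motion))

model = Ridge(alpha=1e-3).fit(signals, motion)

# Application phase: a new range-image measurement -> estimated motion.
new_signal = rng.standard_normal((1, k_signal))
est_motion = model.predict(new_signal)
```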

  6. Validation of cone beam computed tomography-based tooth printing using different three-dimensional printing technologies.

    PubMed

    Khalil, Wael; EzEldeen, Mostafa; Van De Casteele, Elke; Shaheen, Eman; Sun, Yi; Shahbazian, Maryam; Olszewski, Raphael; Politis, Constantinus; Jacobs, Reinhilde

    2016-03-01

    Our aim was to determine the accuracy of 3-dimensional reconstructed models of teeth compared with the natural teeth by using 4 different 3-dimensional printers. This in vitro study was carried out using 2 intact, dry adult human mandibles, which were scanned with cone beam computed tomography. Premolars were selected for this study. Dimensional differences between natural teeth and the printed models were evaluated directly by using volumetric differences and indirectly through optical scanning. Analysis of variance, Pearson correlation, and Bland-Altman plots were applied for statistical analysis. Volumetric measurements from natural teeth and fabricated models, either by the direct method (the Archimedes principle) or by the indirect method (optical scanning), showed no statistical differences. The mean volume difference ranged between 3.1 mm(3) (0.7%) and 4.4 mm(3) (1.9%) for the direct measurement, and between -1.3 mm(3) (-0.6%) and 11.9 mm(3) (+5.9%) for the optical scan. A surface part-comparison analysis showed that 90% of the values revealed a distance deviation within the interval 0 to 0.25 mm. Current results showed a high accuracy of all printed models of teeth compared with natural teeth. This outcome opens perspectives for clinical use of cost-effective 3-dimensional printed teeth for surgical procedures, such as tooth autotransplantation. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Reliability of system for precise cold forging

    NASA Astrophysics Data System (ADS)

    Krušič, Vid; Rodič, Tomaž

    2017-07-01

    The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on the tool life for a closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled the reliable production of a dimensionally accurate product at optimal tool life. An operating window was created within which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of the adjustment of the process input parameters is shown on the example of making an inner race of a homokinetic joint in mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequences of the forming operations in the multistage forming of a starter barrel during the working stroke, the course of the resultant force is optimized.

  8. Development of new flux splitting schemes. [computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1992-01-01

    Maximizing both accuracy and efficiency has been the primary objective in designing a numerical algorithm for computational fluid dynamics (CFD). This is especially important for solutions of complex three dimensional systems of Navier-Stokes equations which often include turbulence modeling and chemistry effects. Recently, upwind schemes have been well received for their capability in resolving discontinuities. With this in mind, presented are two new flux splitting techniques for upwind differencing. The first method is based on High-Order Polynomial Expansions (HOPE) of the mass flux vector. The second new flux splitting is based on the Advection Upwind Splitting Method (AUSM). The calculation of the hypersonic conical flow demonstrates the accuracy of the splitting in resolving the flow in the presence of strong gradients. A second series of tests involving the two dimensional inviscid flow over a NACA 0012 airfoil demonstrates the ability of the AUSM to resolve the shock discontinuity at transonic speed. A third case calculates a series of supersonic flows over a circular cylinder. Finally, the fourth case deals with tests of a two dimensional shock wave/boundary layer interaction.
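The AUSM idea named above — split the interface Mach number and the pressure separately, then upwind the convected quantities by the sign of the interface Mach number — can be sketched in 1D for the Euler equations. This follows the original Liou-Steffen splitting; for a uniform subsonic state the formula collapses to the exact flux, which is a handy sanity check.

```python
import numpy as np

GAMMA = 1.4

def ausm_flux(rho_l, u_l, p_l, rho_r, u_r, p_r):
    """Liou-Steffen AUSM flux for the 1D Euler equations (rho, rho*u, rho*E)."""
    def sound(rho, p):
        return np.sqrt(GAMMA * p / rho)

    def split(M, p, side):
        # Subsonic polynomial splittings; supersonic reduces to full upwinding.
        s = 1.0 if side == "+" else -1.0
        if abs(M) <= 1.0:
            M_s = s * 0.25 * (M + s) ** 2
            p_s = 0.25 * p * (M + s) ** 2 * (2.0 - s * M)
        else:
            M_s = 0.5 * (M + s * abs(M))
            p_s = 0.5 * p * (M + s * abs(M)) / M
        return M_s, p_s

    a_l, a_r = sound(rho_l, p_l), sound(rho_r, p_r)
    H_l = a_l**2 / (GAMMA - 1.0) + 0.5 * u_l**2      # total enthalpy
    H_r = a_r**2 / (GAMMA - 1.0) + 0.5 * u_r**2

    Mp, pp = split(u_l / a_l, p_l, "+")
    Mm, pm = split(u_r / a_r, p_r, "-")
    M_half, p_half = Mp + Mm, pp + pm

    # Upwind the convected vector by the sign of the interface Mach number;
    # the pressure term enters the momentum component only.
    phi = (np.array([rho_l * a_l, rho_l * a_l * u_l, rho_l * a_l * H_l])
           if M_half >= 0.0 else
           np.array([rho_r * a_r, rho_r * a_r * u_r, rho_r * a_r * H_r]))
    return M_half * phi + np.array([0.0, p_half, 0.0])
```

For identical subsonic left/right states (rho, u, p) the result equals the exact Euler flux (rho*u, rho*u^2 + p, rho*u*H), since the Mach and pressure splittings sum back to M and p.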

  9. Design and manufacture of customized dental implants by using reverse engineering and selective laser melting technology.

    PubMed

    Chen, Jianyu; Zhang, Zhiguang; Chen, Xianshuai; Zhang, Chunyu; Zhang, Gong; Xu, Zhewu

    2014-11-01

    Recently a new therapeutic concept of patient-specific implant dentistry has been advanced based on computer-aided design/computer-aided manufacturing technology. However, a comprehensive study of the design and 3-dimensional (3D) printing of the customized implants, their mechanical properties, and their biomechanical behavior is lacking. The purpose of this study was to evaluate the mechanical and biomechanical performance of a novel custom-made dental implant fabricated by the selective laser melting technique with simulation and in vitro experimental studies. Two types of customized implants were designed by using reverse engineering: a root-analog implant and a root-analog threaded implant. The titanium implants were printed layer by layer with the selective laser melting technique. The relative density, surface roughness, tensile properties, bend strength, and dimensional accuracy of the specimens were evaluated. Nonlinear and linear finite element analysis and experimental studies were used to investigate the stress distribution, micromotion, and primary stability of the implants. Selective laser melting 3D printing technology was able to reproduce the customized implant designs and produce high density and strength and adequate dimensional accuracy. Better stress distribution and lower maximum micromotions were observed for the root-analog threaded implant model than for the root-analog implant model. In the experimental tests, the implant stability quotient and pull-out strength of the 2 types of implants indicated that better primary stability can be obtained with a root-analog threaded implant design. Selective laser melting proved to be an efficient means of printing fully dense customized implants with high strength and sufficient dimensional accuracy. 
Adding the threaded characteristic to the customized root-analog threaded implant design maintained the approximate geometry of the natural root and exhibited better stress distribution and primary stability. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  10. Multi-dimensional scores to predict mortality in patients with idiopathic pulmonary fibrosis undergoing lung transplantation assessment.

    PubMed

    Fisher, Jolene H; Al-Hejaili, Faris; Kandel, Sonja; Hirji, Alim; Shapera, Shane; Mura, Marco

    2017-04-01

    The heterogeneous progression of idiopathic pulmonary fibrosis (IPF) makes prognostication difficult and contributes to high mortality on the waitlist for lung transplantation (LTx). Multi-dimensional scores (Composite Physiologic Index [CPI], Gender-Age-Physiology [GAP], RIsk Stratification scorE [RISE]) have demonstrated enhanced predictive power towards outcome in IPF. The lung allocation score (LAS) is a multi-dimensional tool commonly used to stratify patients assessed for LTx. We sought to investigate whether IPF-specific multi-dimensional scores predict mortality in patients with IPF assessed for LTx. The study included 302 patients with IPF who underwent a LTx assessment (2003-2014). Multi-dimensional scores were calculated. The primary outcome was 12-month mortality after assessment. LTx was considered a competing event in all analyses. At the end of the observation period, there were 134 transplants and 63 deaths, and 105 patients were alive without LTx. Multi-dimensional scores predicted mortality with accuracy similar to the LAS, and superior to that of individual variables: the area under the curve (AUC) for LAS was 0.78 (sensitivity 71%, specificity 86%); CPI, 0.75 (sensitivity 67%, specificity 82%); GAP, 0.67 (sensitivity 59%, specificity 74%); RISE, 0.78 (sensitivity 71%, specificity 84%). A separate analysis conducted only in patients actively listed for LTx (n = 247; 50 deaths) yielded similar results. In patients with IPF assessed for LTx, as well as in those actually listed, multi-dimensional scores predict mortality better than individual variables and with accuracy similar to the LAS. If validated, multi-dimensional scores may serve as inexpensive tools to guide decisions on the timing of referral and listing for LTx. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. 3D evaluation of the effect of disinfectants on dimensional accuracy and stability of two elastomeric impression materials.

    PubMed

    Soganci, Gokce; Cinar, Duygu; Caglar, Alper; Yagiz, Ayberk

    2018-05-31

    The aim of this study was to determine and compare the dimensional changes of polyether and vinyl polyether siloxane impression materials under immersion disinfection with two different disinfectants over three time periods. Impressions were obtained from an edentulous master model. Sodium hypochlorite (5.25%) and glutaraldehyde (2%) were used for disinfection, and measurements were made 30 min after making the impression (before disinfection), after the required disinfection period (10 min), and after 24 h of storage at room temperature. Impressions were scanned using a 3D scanner with 10 μm accuracy, and 3D software was used to evaluate the dimensional changes by superimposition. Positive and negative deviations were calculated and compared with the master model. There was no significant difference between the two elastomeric impression materials (p>0.05). It was concluded that the dimensional accuracy and stability of the two impression materials were excellent and similar.
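
    The positive/negative deviation analysis described above amounts to signed point-to-surface distances after superimposition. A toy sketch with a synthetic flat "master model" and a uniformly offset "scan" (invented data, not the study's impressions) illustrates the computation:

```python
import numpy as np

# stand-in "master model": a flat plane sampled on a 10 x 10 grid (mm units)
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
master = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

# "scanned" copy displaced 20 microns (0.02 mm) along z
scan = master + np.array([0.0, 0.0, 0.02])

# nearest master point for every scan point (brute force is fine at this size)
d2 = ((scan[:, None, :] - master[None, :, :]) ** 2).sum(axis=2)
nearest = d2.argmin(axis=1)

# signed deviation: positive means the scan lies above the master surface
deviation = scan[:, 2] - master[nearest, 2]
print(deviation.min(), deviation.max())
```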

  12. A design of optical modulation system with pixel-level modulation accuracy

    NASA Astrophysics Data System (ADS)

    Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu

    2018-01-01

    Vision measurement has been widely used in the fields of dimensional measurement and surface metrology. However, traditional vision measurement methods have many limits, such as low dynamic range and poor reconfigurability. Optical modulation before image formation offers high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of a digital micromirror device, a CCD camera and a lens. First, we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels using moiré fringes together with image processing based on sampling and interpolation. Then we built three coordinate systems and calculated the mathematical relationship between the coordinates of the digital micromirrors and the CCD pixels using a checkerboard pattern. A verification experiment shows that the correspondence error is less than 0.5 pixel, demonstrating that the modulation accuracy of the system meets the requirements. Furthermore, the highly reflective edge of a circular metal piece was detected using the system, which proves the effectiveness of the optical modulation system.
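
    The checkerboard-based mapping between DMD mirror coordinates and CCD pixels can be modeled, at its simplest, as an affine transform fitted by least squares. The sketch below uses an assumed synthetic transform and synthetic corner correspondences, not the paper's calibration data:

```python
import numpy as np

# assumed ground-truth mapping: DMD mirror (u, v) -> CCD pixel (x, y)
true_A = np.array([[1.02, 0.01], [-0.01, 0.98]])   # rotation/scale part
true_t = np.array([5.0, -3.0])                     # offset in pixels

# checkerboard-corner correspondences on a 10 x 10 grid of mirrors
dmd = np.array([[u, v] for u in range(0, 100, 10) for v in range(0, 100, 10)], float)
ccd = dmd @ true_A.T + true_t

# least-squares fit of the affine map: [x y] = [u v 1] @ M
design = np.hstack([dmd, np.ones((len(dmd), 1))])
M, *_ = np.linalg.lstsq(design, ccd, rcond=None)

# correspondence error on a held-out mirror position
test_pt = np.array([33.0, 77.0])
pred = np.append(test_pt, 1.0) @ M
err = np.linalg.norm(pred - (test_pt @ true_A.T + true_t))
print(err < 0.5)   # sub-half-pixel, the paper's verification criterion
```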

  13. Analysis of the Three-Dimensional Vector FAÇADE Model Created from Photogrammetric Data

    NASA Astrophysics Data System (ADS)

    Kamnev, I. S.; Seredovich, V. A.

    2017-12-01

    The results of an accuracy assessment for the creation of a three-dimensional vector model of a building façade are described. In the framework of the analysis, an analytical comparison of three-dimensional vector façade models created from photogrammetric and terrestrial laser scanning (TLS) data was performed. The three-dimensional model built from TLS point clouds was taken as the reference. In the course of the experiment, the three-dimensional model under analysis was superimposed on the reference one, coordinates were measured and deviations between corresponding model points were determined. The accuracy of the three-dimensional model obtained from non-metric digital camera images was thus estimated, and the façade surface areas with the maximum deviations were identified.

  14. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, the fourth-order CENO scheme required a factor of 24 less wall time than the second-order scheme to reach the same error level.
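
    The hybrid switch at the heart of CENO can be illustrated in one dimension: fit a central high-order polynomial, and fall back to a limited linear reconstruction when the fit deviates too much from the data. This is only a 1-D sketch with an assumed smoothness criterion, not the paper's k-exact 3-D formulation:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter used for the fallback linear reconstruction."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct(u, i, dx, tol=1.0):
    """Hybrid reconstruction at cell i: central quadratic fit on a 5-cell
    stencil; if the fit misses the data (non-smooth), use limited linear."""
    stencil = u[i - 2:i + 3]
    x = np.arange(-2, 3) * dx
    coeffs = np.polyfit(x, stencil, 2)               # high-order candidate
    fit_err = np.abs(np.polyval(coeffs, x) - stencil).max()
    if fit_err < tol * dx:                           # smoothness test (assumed form)
        return coeffs
    slope = minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
    return np.array([0.0, slope, u[i]])              # limited linear fallback

dx = 0.1
x = np.arange(10) * dx
smooth = np.sin(x)                 # smooth data: quadratic kept
step = (x > 0.45).astype(float)    # discontinuity: limiter engages
print(reconstruct(smooth, 4, dx)[0], reconstruct(step, 4, dx)[0])
```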

  15. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Because the semantic content is clearly visualized, ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images are highly dependent on light conditions, and their classification results lack three-dimensional information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. Its advantages are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, combining the advantages of LiDAR and multispectral images yields point cloud data with both three-dimensional coordinates and multispectral information, providing an integrated solution for point cloud classification. This research therefore acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are used for classification.
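
    The final classification step described above, thresholding on height and color, can be sketched with synthetic point attributes. The NDVI-style index, the class names and the threshold values below are illustrative assumptions, not the paper's rule:

```python
import numpy as np

# hypothetical multispectral point cloud attributes: height, red, near-infrared
rng = np.random.default_rng(1)
n = 200
z = rng.uniform(0, 10, n)                    # height above ground (m)
nir = rng.uniform(0, 1, n)
red = rng.uniform(0, 1, n)

ndvi = (nir - red) / (nir + red + 1e-9)      # assumed vegetation index

# simple two-threshold rule: height separates ground from objects,
# the spectral cue separates vegetation from buildings
labels = np.full(n, "ground", dtype=object)
labels[(z > 2.0) & (ndvi > 0.3)] = "tree"
labels[(z > 2.0) & (ndvi <= 0.3)] = "building"
print(sorted(set(labels)))
```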

  16. Dimensional accuracy of aluminium extrusions in mechanical calibration

    NASA Astrophysics Data System (ADS)

    Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode

    2018-05-01

    Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach, also attractive from a cost perspective, is to use extruded profiles with standard tolerances and utilize downstream processes to calibrate the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global, longitudinal stretching in combination with local bending, while the second utilizes transversal stretching and local bending of the cross-section. An extruded U-profile is used to compare the two methods by numerical analysis. The FEA program ABAQUS is used in combination with Design of Experiments (DOE) to provide response surfaces; the DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.
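
    A two-level fractional factorial design of the kind used here confounds higher-order interactions to halve the number of runs. A sketch with a hypothetical linear response (invented factors and coefficients, not the paper's FEA results) shows how main effects are estimated:

```python
import numpy as np
from itertools import product

# 2^(3-1) fractional factorial: factors A, B and the generator C = A*B
half_fraction = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
X = np.array(half_fraction, float)

# hypothetical response: cross-section deviation per run (stand-in model
# with a strong effect of A and weak effects of B and C)
y = 0.5 + 0.20 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2]

# main-effect estimate: difference between the +1 and -1 level means
effects = {name: y[X[:, i] == 1].mean() - y[X[:, i] == -1].mean()
           for i, name in enumerate("ABC")}
print(max(effects, key=lambda k: abs(effects[k])))   # dominant factor
```

    Note the aliasing: with the generator C = A*B, the estimated effect of C is confounded with the A-B interaction, which is the price of halving the run count.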

  17. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems, task 1: Ducted propfan analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1990-01-01

    The time-dependent three-dimensional Euler equations of gas dynamics were solved numerically to study the steady compressible transonic flow about ducted propfan propulsion systems. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. An implicit residual smoothing operator was used to aid convergence. Two calculation grids were employed in this study. The first utilized an H-type mesh network with a branch cut opening to represent the axisymmetric cowl. The second utilized a multiple-block mesh system with a C-type grid about the cowl, with the individual blocks numerically coupled in the Euler solver. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were initially performed for unducted propfans to verify the accuracy of the three-dimensional Euler formulation. The Euler analyses were then applied to the calculation of ducted propfan flows, and predicted results were compared with experimental data for two cases. The three-dimensional Euler analyses displayed exceptional accuracy, although certain parameters were observed to be very sensitive to geometric deflections. Both solution schemes were found to be very robust and demonstrated nearly equal efficiency and accuracy, although the multi-block C-grid formulation provided somewhat better resolution of the cowl leading edge region.

  18. High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".

    PubMed

    Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo

    2004-03-01

    Three-dimensional digital modeling of heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition of the 3-D digital model of the Maddalena by Donatello, a wooden statue representing one of the major masterpieces of the Italian Renaissance, which was swept away by the Florence flood of 1966 and subsequently restored, is described. The paper reports all the steps of the acquisition procedure, from project planning to the solution of the various problems due to range camera calibration and to optically uncooperative material. Since the scientific focus is centered on the overall dimensional accuracy of the 3-D model, a methodology for its quality control is described. This control demonstrated how, in some situations, ICP-based alignment can lead to incorrect results. To circumvent this difficulty we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry and a non-invasive procedure in order to generate a final accurate model. Finally, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.
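
    Each ICP iteration contains a closed-form rigid alignment step (rotation plus translation via SVD, the Procrustes solution). A sketch of that core step on synthetic correspondences, omitting the correspondence search and the photogrammetric fusion the paper proposes, is:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning src onto dst,
    assuming known point correspondences (the inner step of ICP)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

rng = np.random.default_rng(2)
src = rng.normal(size=(100, 3))
theta = 0.3                                       # assumed rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])

R, t = best_rigid_transform(src, dst)
residual = np.abs(src @ R.T + t - dst).max()
print(residual)
```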

  19. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Computationally expensive engineering simulations can often be prohibitive in the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
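
    A linear ROSM of the kind described, principal component reduction of the outputs followed by radial-basis-function interpolation of the reduced coefficients, can be sketched on synthetic snapshot data. The kernel, its width, the rank and the test function below are all illustrative assumptions:

```python
import numpy as np

# hypothetical snapshots: 40 design points, each with a 500-dimensional output
rng = np.random.default_rng(3)
params = rng.uniform(0, 1, size=(40, 2))
grid = np.linspace(0, 1, 500)
snapshots = np.array([np.sin(2 * np.pi * (grid + p[0])) * (1 + p[1]) for p in params])

# linear reduction: project snapshots onto the leading principal directions
mean = snapshots.mean(axis=0)
_, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
k = 4
coeffs = (snapshots - mean) @ Vt[:k].T            # reduced coordinates, 40 x k

# Gaussian RBF interpolation of the reduced coordinates over parameter space
def rbf(a, b, eps=25.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

W, *_ = np.linalg.lstsq(rbf(params, params), coeffs, rcond=None)

def surrogate(p):
    """Predict the full 500-dimensional field at a new design point."""
    return mean + rbf(np.atleast_2d(p), params) @ W @ Vt[:k]

# the surrogate reproduces a training snapshot almost exactly
err = np.abs(surrogate(params[0]) - snapshots[0]).max()
print(err)
```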

  20. High frequency estimation of 2-dimensional cavity scattering

    NASA Astrophysics Data System (ADS)

    Dering, R. S.

    1984-12-01

    This thesis develops a simple ray-tracing approximation for the high-frequency scattering from a two-dimensional cavity. Whereas many other cavity scattering algorithms are very time consuming, this method is very swift. The analytical development of the ray-tracing approach is performed in great detail, and it is shown how the radar cross section (RCS) depends on the cavity's length and width along with the radar wave's angle of incidence; this explains why the cavity's RCS oscillates as a function of incident angle. The RCS of a two-dimensional cavity was measured experimentally, and the results were compared to computer calculations based on the high-frequency ray-tracing theory. The comparison was favorable in the sense that angular RCS minima and maxima were exactly predicted, even though the accuracy of the RCS magnitude decreased for incident angles far off-axis. Overall, once this method is extended to three dimensions, the technique shows promise as a fast first approximation of high-frequency cavity scattering.

  1. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669

  2. Generalized Centroid Estimators in Bioinformatics

    PubMed Central

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  3. Missing value imputation for gene expression data by tailored nearest neighbors.

    PubMed

    Faisal, Shahla; Tutz, Gerhard

    2017-04-25

    High-dimensional data like gene expression and RNA sequences often contain missing values, and subsequent analyses based on these incomplete data can suffer strongly from their presence. Several approaches to the imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed over genes that are apt to contribute to the accuracy of the imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs when local methods such as nearest neighbors are applied in high-dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high-dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
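
    The weighted-neighbour idea can be sketched as follows. This simplified version weights neighbours by inverse distance computed over commonly observed features, whereas the published method additionally selects the genes that enter the distance:

```python
import numpy as np

def wknn_impute(X, k=3):
    """Fill each NaN from the k nearest rows, with distances computed only
    over features observed in both rows and neighbours weighted by
    inverse distance (a simplified sketch of weighted-kNN imputation)."""
    X = X.copy()
    for i in range(X.shape[0]):
        for j in np.where(np.isnan(X[i]))[0]:
            dists, vals = [], []
            for r in range(X.shape[0]):
                if r == i or np.isnan(X[r, j]):
                    continue
                both = ~np.isnan(X[i]) & ~np.isnan(X[r])
                if not both.any():
                    continue
                dists.append(np.sqrt(((X[i, both] - X[r, both]) ** 2).mean()))
                vals.append(X[r, j])
            order = np.argsort(dists)[:k]
            w = 1.0 / (np.array(dists)[order] + 1e-9)   # closer rows weigh more
            X[i, j] = (w * np.array(vals)[order]).sum() / w.sum()
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, np.nan],
              [0.9, 1.9, 2.9],
              [5.0, 6.0, 7.0]])
imputed = wknn_impute(X)
print(imputed[1, 2])   # dominated by the two similar rows, so close to 3
```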

  4. High-accuracy user identification using EEG biometrics.

    PubMed

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of different combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved, to more than 96.7%, by joint classification of multiple epochs.
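
    The evaluated pipeline, dimensionality reduction followed by classification, can be sketched with synthetic "epochs". PCA plus a nearest-centroid classifier are stand-ins here (the paper compares several such combinations), and all dimensions and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical ERP epochs: 25 subjects x 20 epochs, each flattened to one vector
n_sub, n_ep, dim = 25, 20, 14 * 103           # e.g. 14 channels x 103 samples
templates = rng.normal(size=(n_sub, dim))     # subject-specific ERP shape
epochs = templates[:, None, :] + 0.8 * rng.normal(size=(n_sub, n_ep, dim))
X = epochs.reshape(-1, dim)
y = np.repeat(np.arange(n_sub), n_ep)

# dimensionality reduction by PCA (via SVD), keeping 40 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:40].T

# nearest-centroid identification: 15 training / 5 test epochs per subject
train = np.arange(len(y)) % n_ep < 15
centroids = np.array([Z[train & (y == s)].mean(axis=0) for s in range(n_sub)])
pred = ((Z[~train][:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
acc = (pred == y[~train]).mean()
print(acc)
```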

  5. Magnetic resonance imaging-three-dimensional printing technology fabricates customized scaffolds for brain tissue engineering

    PubMed Central

    Fu, Feng; Qin, Zhe; Xu, Chao; Chen, Xu-yi; Li, Rui-xin; Wang, Li-na; Peng, Ding-wei; Sun, Hong-tao; Tu, Yue; Chen, Chong; Zhang, Sai; Zhao, Ming-liang; Li, Xiao-hong

    2017-01-01

    Conventional fabrication methods lack the ability to control both the macro- and micro-structure of generated scaffolds. Three-dimensional printing is a solid free-form fabrication method that provides novel ways to create customized scaffolds with high precision and accuracy. In this study, an electrically controlled cortical impactor was used to induce randomized brain tissue defects. The overall shape of the scaffolds was designed using rat-specific anatomical data obtained from magnetic resonance imaging, and the internal structure was created by computer-aided design. Because of limitations arising from the insufficient resolution of the manufacturing process, we magnified the cavity model prototype five-fold to successfully fabricate customized collagen-chitosan scaffolds using three-dimensional printing. Results demonstrated that the scaffolds have three-dimensional porous structures, high porosity, high specific surface area, pore connectivity and good internal characteristics. Neural stem cells co-cultured with the scaffolds showed good viability, indicating good biocompatibility and biodegradability. This technique may be a promising new strategy for regenerating complex damaged brain tissues, and helps pave the way toward personalized medicine. PMID:28553343

  6. A mixed finite difference/Galerkin method for three-dimensional Rayleigh-Benard convection

    NASA Technical Reports Server (NTRS)

    Buell, Jeffrey C.

    1988-01-01

    A fast and accurate numerical method for nonlinear conservation equation systems whose solutions are periodic in two of the three spatial dimensions is implemented for the case of Rayleigh-Benard convection between two rigid parallel plates, in the parameter region where steady, three-dimensional convection is known to be stable. High-order streamfunctions enable the reduction of the system of five partial differential equations to a system of only three. Numerical experiments are presented which verify both the expected convergence rates and the absolute accuracy of the method.

  7. Finite Differences and Collocation Methods for the Solution of the Two Dimensional Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules

    1999-01-01

    In this paper we combine finite difference approximations (for the spatial derivatives) and collocation techniques (for the time component) to numerically solve the two-dimensional heat equation. We employ second-order and fourth-order schemes for the spatial derivatives, and the discretization method gives rise to a linear system of equations whose matrix we show to be non-singular. Numerical experiments carried out on serial computers show the unconditional stability of the proposed method and the high accuracy achieved by the fourth-order scheme.
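
    For contrast with the paper's fourth-order/collocation scheme, a minimal second-order finite-difference solver for the 2-D heat equation is sketched below, with explicit Euler time stepping as a stand-in for the collocation component; the separable sine initial condition makes the exact decay rate available for checking:

```python
import numpy as np

# u_t = alpha (u_xx + u_yy) on the unit square with zero Dirichlet walls
n, alpha = 21, 1.0
h = 1.0 / (n - 1)
dt = 0.2 * h * h / alpha                 # within the explicit stability limit
x = np.linspace(0, 1, n)
u = np.sin(np.pi * x)[:, None] * np.sin(np.pi * x)[None, :]

for _ in range(200):
    # second-order central difference for the Laplacian
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    u = u + dt * alpha * lap
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # enforce the walls

t = 200 * dt
exact = np.exp(-2 * np.pi**2 * alpha * t) * \
        np.sin(np.pi * x)[:, None] * np.sin(np.pi * x)[None, :]
err = np.abs(u - exact).max()
print(err)
```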

  8. Practical limits on muscle synergy identification by non-negative matrix factorization in systems with mechanical constraints.

    PubMed

    Burkholder, Thomas J; van Antwerp, Keith W

    2013-02-01

    Statistical decomposition, including non-negative matrix factorization (NMF), is a convenient tool for identifying patterns of structured variability within behavioral motor programs, but it is unclear how the resolved factors relate to actual neural structures. Factors can be extracted from a uniformly sampled, low-dimensional command space; in practical application, however, the command space is limited, either to those activations that perform some task(s) successfully or to activations induced in response to specific perturbations. NMF was applied to muscle activation patterns synthesized from low-dimensional, synergy-like control modules mimicking simple task performance or feedback activation from proprioceptive signals. In the task-constrained paradigm, the accuracy of control module recovery was highly dependent on the sampled volume of control space, such that sampling even 50% of control space produced a substantial degradation in factor accuracy. In the feedback paradigm, NMF was not capable of extracting more than four control modules, even in a mechanical model with seven internal degrees of freedom. Reduced access to the low-dimensional control space imposed by physical constraints may result in substantial distortion of an existing low-dimensional controller, such that neither the dimensionality nor the composition of the recovered/extracted factors matches the original controller.
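
    The synthesis-then-decomposition setup can be mimicked in a few lines: generate activations from known non-negative modules, then factorize them with the standard Lee-Seung multiplicative NMF updates (a generic NMF, not the authors' exact protocol; the problem sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# synthesize "muscle activations" from 3 non-negative synergy modules
W_true = rng.uniform(0, 1, size=(8, 3))      # 8 muscles, 3 modules
H_true = rng.uniform(0, 1, size=(3, 200))    # module drive over 200 samples
V = W_true @ H_true

# Lee-Seung multiplicative updates for V ~ W H, with matching rank k = 3
k = 3
W = rng.uniform(0.1, 1, size=(8, k))
H = rng.uniform(0.1, 1, size=(k, 200))
for _ in range(800):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err)   # small: the full-rank data are recovered well
```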

  9. Interrelated Dimensional Chains in Predicting Accuracy of Turbine Wheel Assembly Parameters

    NASA Astrophysics Data System (ADS)

    Yanyukina, M. V.; Bolotov, M. A.; Ruzanov, N. V.

    2018-03-01

    The working capacity of any device primarily depends on the assembly accuracy, which in turn is determined by the quality of each manufactured part, i.e., the degree of conformity between the final geometrical parameters and the specified ones. However, assembly accuracy depends not only on a high-quality manufacturing process but also on the correctness of the assembly process. Preliminary calculations are therefore needed at the assembly stage to check that real geometrical parameters conform to their permissible values; this task is performed by calculating dimensional chains. The calculation of interrelated dimensional chains in the aircraft industry requires particular attention. The article considers dimensional chain calculation modelling using the example of a turbine wheel assembly process. The authors describe the solution algorithm, based on mathematical statistics and implemented in Matlab. The paper presents the results of a dimensional chain calculation for a turbine wheel in relation to the draw of the turbine blades to the shroud ring diameter, and discusses the influence of the geometrical parameter tolerances of the dimensional chain link elements on the closing link.
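
    A statistical (Monte Carlo) dimensional-chain calculation of the kind the authors implement in Matlab can be sketched with hypothetical links; all nominal values, tolerances and the chain itself below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100000

# hypothetical linear chain: closing link = C - (A + B), dimensions in mm,
# with each link normally distributed (st.dev. taken as tolerance / 6)
A = rng.normal(120.0, 0.02, n)       # e.g. disc seat radius
B = rng.normal(85.0, 0.03, n)        # e.g. blade length
C = rng.normal(205.3, 0.04, n)       # e.g. shroud ring inner radius

gap = C - (A + B)                    # closing link of the chain
print(gap.mean(), gap.std())         # compare with root-sum-square stacking
```

    For a linear chain the Monte Carlo result should match the analytic root-sum-square of the link deviations, here sqrt(0.02² + 0.03² + 0.04²) ≈ 0.054 mm about a 0.3 mm nominal gap; the simulation approach pays off when links enter the chain nonlinearly.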

  10. Three-dimensional laser window formation for industrial application

    NASA Technical Reports Server (NTRS)

    Verhoff, Vincent G.; Kowalski, David

    1993-01-01

    The NASA Lewis Research Center has developed and implemented a unique process for forming flawless three-dimensional, compound-curvature laser windows to extreme accuracies. These windows represent an integral component of specialized nonintrusive laser data acquisition systems that are used in a variety of compressor and turbine research testing facilities. These windows are molded to the flow surface profile of turbine and compressor casings and are required to withstand extremely high pressures and temperatures. This method of glass formation could also be used to form compound-curvature mirrors that would require little polishing and for a variety of industrial applications, including research view ports for testing devices and view ports for factory machines with compound-curvature casings. Currently, sodium-alumino-silicate glass is recommended for three-dimensional laser windows because of its high strength due to chemical strengthening and its optical clarity. This paper discusses the main aspects of three-dimensional laser window formation. It focuses on the unique methodology and the peculiarities that are associated with the formation of these windows.

  11. An extended Lagrangian method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1992-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in the numerical solution by avoiding the numerical diffusion that results from the mixing of fluxes in the Eulerian description, while also avoiding the inaccuracy incurred by the geometry and variable interpolations used in previous Lagrangian methods. Unlike the Lagrangian methods proposed previously, which are valid only for supersonic flows, the present method is general and capable of treating subsonic as well as supersonic flows. The method is robust and stable; it automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  12. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged across Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
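
    The first-order functional ANOVA decomposition underlying the algorithm approximates a high-dimensional function by a constant plus one-dimensional main effects. A grid-based sketch on an illustrative two-variable function (not the paper's inverse problem) shows the idea and the residual left by the neglected interaction:

```python
import numpy as np

# first-order functional ANOVA on a grid: f ~ f0 + f1(x1) + f2(x2)
x = np.linspace(0, 1, 50)
X1, X2 = np.meshgrid(x, x, indexing="ij")
f = np.sin(2 * X1) + X2**2 + 0.1 * X1 * X2     # weak interaction term

f0 = f.mean()                                   # constant term
f1 = f.mean(axis=1) - f0                        # main effect of x1
f2 = f.mean(axis=0) - f0                        # main effect of x2
approx = f0 + f1[:, None] + f2[None, :]

# only the (small) x1*x2 interaction is left unexplained
rel_err = np.abs(f - approx).max() / np.abs(f).max()
print(rel_err)
```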

  13. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Basis selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expression remain unchanged across Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm is tested on several examples and demonstrates great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
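The abstract contrasts PCKF with the ensemble Kalman filter it resembles. For context, below is a minimal sketch of the standard stochastic EnKF analysis step that PCKF parallels; this is an illustration of the baseline method, not the authors' PCKF code, and all names are illustrative:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs. noise cov."""
    n_obs, n_ens = y.size, X.shape[1]
    Y = H @ X                                    # predicted observations
    Xp = X - X.mean(axis=1, keepdims=True)       # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)       # observation anomalies
    Cxy = Xp @ Yp.T / (n_ens - 1)                # state-obs covariance
    Cyy = Yp @ Yp.T / (n_ens - 1) + R            # innovation covariance
    K = Cxy @ np.linalg.inv(Cyy)                 # Kalman gain
    # perturb observations so the analysis ensemble has the right spread
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (y_pert - Y)
```

The PCE-based filter replaces the sample covariances above with moments computed from the polynomial chaos coefficients, which is where the basis-truncation trade-off discussed in the abstract enters.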

  14. A novel potential/viscous flow coupling technique for computing helicopter flow fields

    NASA Technical Reports Server (NTRS)

    Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul

    1993-01-01

    The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.

  15. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) a slant optical axis (misalignment of the camera's optical axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. These introduce errors into results measured by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step determines the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track its motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) the three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. Results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method yields good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has also been applied in tensile experiments to obtain high-accuracy results.
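The optical-extensometer principle above reduces to comparing gauge lengths between a pair of tracked points before and after deformation. A minimal sketch of that calculation (function and argument names are illustrative, not the authors' code):

```python
import numpy as np

def extensometer_strain(p0, p1, q0, q1):
    """Engineering strain between two tracked gauge points.
    p0, p1: 3-D coordinates of the points before deformation;
    q0, q1: the same points after deformation."""
    L_before = np.linalg.norm(np.asarray(p1, float) - np.asarray(p0, float))
    L_after = np.linalg.norm(np.asarray(q1, float) - np.asarray(q0, float))
    return (L_after - L_before) / L_before
```

At the 400-pixel extensometer length quoted in the abstract, a strain of 10 με corresponds to a length change of only 0.004 pixels, which is why out-of-plane motion must be compensated.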

  16. JASMINE: Infrared Space Astrometry Mission

    NASA Astrophysics Data System (ADS)

    Gouda, Naoteru; Working Group, Jasmine

JASMINE is an astrometry satellite mission that will measure, in an infrared band, the annual parallaxes, positions on the celestial sphere, and proper motions of stars in the bulge of the Milky Way (the Galaxy) with high accuracy. These measurements give us 3-dimensional positions and 2-dimensional (tangential) velocities of many stars in the Galactic bulge. A completely new “map” of the Galactic bulge provided by JASMINE will bring many exciting scientific results. The target launch date is the first half of the 2020s. Before the launch of JASMINE, we are planning two other missions: Nano-JASMINE and Small-JASMINE. Nano-JASMINE uses a very small nano-satellite and is scheduled for launch in 2011. Small-JASMINE is a downsized version of the JASMINE satellite that observes restricted small regions of the Galactic bulge. These satellite missions require very stable telescope pointing and, furthermore, highly stable telescope structures in order to measure stellar positions with high accuracy. This requirement demands strict control of telescope pointing and of the thermal environment in the payload modules. These control systems are key to the success of space astrometry missions, including the JASMINE series.

  17. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

Range-gated three-dimensional imaging has been a research hotspot in recent years because of its advantages of high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of the 3D point cloud based on the triangle method is analyzed, and a 15 m depth slice of the target 3D point cloud is obtained from two frames of images, with a distance precision better than 0.5 m. The influences of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy are analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  18. Improvement of Dimensional Accuracy of 3-D Printed Parts using an Additive/Subtractive Based Hybrid Prototyping Approach

    NASA Astrophysics Data System (ADS)

    Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.

    2017-11-01

At present, two important processes, namely CNC machining and rapid prototyping (RP), are used to create prototypes and functional products. Combining additive and subtractive processes on a single platform would be advantageous. However, two important aspects need to be taken into consideration for this process hybridization: first, the integration of two different control systems for the two processes, and second, maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and a CNC grinding operation as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, which can be improved by a considerable margin, about 66% to 80%, if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in the dimensional accuracy, and the best accuracy is achieved with the minimum layer thickness (0.1 mm).
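The 66% to 80% figure above is a relative reduction in dimensional error after grinding. The paper reports only the percentage range, so the error values below are hypothetical; the calculation itself is just:

```python
def accuracy_improvement(err_fdm, err_hybrid):
    """Relative reduction (%) in dimensional error when a grinding
    finishing pass follows FDM. Inputs are absolute errors (e.g. mm)."""
    return (err_fdm - err_hybrid) / err_fdm * 100.0
```

For instance, a hypothetical part whose dimensional error drops from 0.30 mm (pure FDM) to 0.06 mm after grinding shows an 80% improvement.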

  19. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Treesearch

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs at specific points or regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....
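One common member of the family of two-dimensional interpolation algorithms referred to above is inverse-distance weighting. The sketch below is purely illustrative of that class of method (the study does not name its specific algorithms):

```python
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at xy_query.
    xy_obs: (n, 2) site coordinates; values: (n,) observed concentrations."""
    d = np.linalg.norm(xy_obs - xy_query, axis=1)
    if np.any(d == 0):                 # query coincides with a site
        return float(values[d == 0][0])
    w = d ** -power                    # closer sites get larger weights
    return float(np.sum(w * values) / np.sum(w))
```

Purely distance-based weighting like this ignores elevation, which is consistent with the abstract's observation that accuracy degrades in areas of high topographic relief.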

  20. Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine

    NASA Astrophysics Data System (ADS)

    Spodniak, Miroslav; Klimko, Marek; Hocko, Marián; Žitek, Pavel

This article describes an approximate numerical approach for estimating the low cycle fatigue of a high pressure turbine disc of the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method carried out in the SolidWorks software. The low cycle fatigue assessment of the high pressure turbine disc was carried out on the basis of the dimensional, shape and material characteristics that are available for this particular disc. The method described here enables relatively fast, economically feasible low cycle fatigue assessment of the high pressure turbine disc using commercially available software. The accuracy of the numerical low cycle fatigue estimate depends on the accuracy of the required input data for the particular investigated object.
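Low cycle fatigue life is commonly estimated from the strain-life (Basquin plus Coffin-Manson) relation. The abstract does not state which relation the SolidWorks workflow applies, so the sketch below is a generic illustration with assumed, typical-steel material constants:

```python
def strain_amplitude(N, E, sf, b, ef, c):
    """Total strain amplitude at N cycles: Basquin (elastic) term
    plus Coffin-Manson (plastic) term; 2N is the number of reversals."""
    return sf / E * (2.0 * N) ** b + ef * (2.0 * N) ** c

def cycles_to_failure(ea, E, sf, b, ef, c, lo=1.0, hi=1e9):
    """Invert the strain-life curve for N by bisection in log N.
    strain_amplitude is monotone decreasing in N (b, c < 0)."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if strain_amplitude(mid, E, sf, b, ef, c) > ea:
            lo = mid          # life is longer than mid
        else:
            hi = mid
    return (lo * hi) ** 0.5
```

Here `E` is Young's modulus, `sf`/`b` the fatigue strength coefficient/exponent, and `ef`/`c` the fatigue ductility coefficient/exponent; all values used in any run of this sketch are assumptions, not the disc's actual properties.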

  1. Two-Dimensional Electronic Spectroscopy of Benzene, Phenol, and Their Dimer: An Efficient First-Principles Simulation Protocol.

    PubMed

    Nenov, Artur; Mukamel, Shaul; Garavelli, Marco; Rivalta, Ivan

    2015-08-11

    First-principles simulations of two-dimensional electronic spectroscopy in the ultraviolet region (2DUV) require computationally demanding multiconfigurational approaches that can resolve doubly excited and charge transfer states, the spectroscopic fingerprints of coupled UV-active chromophores. Here, we propose an efficient approach to reduce the computational cost of accurate simulations of 2DUV spectra of benzene, phenol, and their dimer (i.e., the minimal models for studying electronic coupling of UV-chromophores in proteins). We first establish the multiconfigurational recipe with the highest accuracy by comparison with experimental data, providing reference gas-phase transition energies and dipole moments that can be used to construct exciton Hamiltonians involving high-lying excited states. We show that by reducing the active spaces and the number of configuration state functions within restricted active space schemes, the computational cost can be significantly decreased without loss of accuracy in predicting 2DUV spectra. The proposed recipe has been successfully tested on a realistic model proteic system in water. Accounting for line broadening due to thermal and solvent-induced fluctuations allows for direct comparison with experiments.

  2. Windowed Green function method for the Helmholtz equation in the presence of multiply layered media

    NASA Astrophysics Data System (ADS)

    Bruno, O. P.; Pérez-Arancibia, C.

    2017-06-01

    This paper presents a new methodology for the solution of problems of two- and three-dimensional acoustic scattering (and, in particular, two-dimensional electromagnetic scattering) by obstacles and defects in the presence of an arbitrary number of penetrable layers. Relying on the use of certain slow-rise windowing functions, the proposed windowed Green function approach efficiently evaluates oscillatory integrals over unbounded domains, with high accuracy, without recourse to the highly expensive Sommerfeld integrals that have typically been used to account for the effect of underlying planar multilayer structures. The proposed methodology, whose theoretical basis was presented in the recent contribution (Bruno et al. 2016 SIAM J. Appl. Math. 76, 1871-1898. (doi:10.1137/15M1033782)), is fast, accurate, flexible and easy to implement. Our numerical experiments demonstrate that the numerical errors resulting from the proposed approach decrease faster than any negative power of the window size. In a number of examples considered in this paper, the proposed method is up to thousands of times faster, for a given accuracy, than corresponding methods based on the use of Sommerfeld integrals.
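The "slow-rise windowing functions" central to the method above are smooth cutoffs that are identically 1 near the scatterer and vanish smoothly far away; their infinite smoothness is what drives the faster-than-any-power decay of the error in the window size. A common C∞ partition-of-unity transition is sketched below (an assumption for illustration; the paper's exact window may differ):

```python
import math

def smooth_window(x, inner, outer):
    """C-infinity window: equals 1 for |x| <= inner, 0 for |x| >= outer,
    with a smooth monotone transition in between."""
    t = (abs(x) - inner) / (outer - inner)
    if t <= 0.0:
        return 1.0
    if t >= 1.0:
        return 0.0
    # standard C-infinity bump-function transition
    a = math.exp(-1.0 / t)
    b = math.exp(-1.0 / (1.0 - t))
    return b / (a + b)
```

Because every derivative of this window vanishes at both ends of the transition region, truncating an oscillatory integral with it introduces errors that decay superalgebraically as the window widens, matching the convergence behavior reported in the abstract.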

  4. Disinfection procedures: their efficacy and effect on dimensional accuracy and surface quality of an irreversible hydrocolloid impression material.

    PubMed

    Rentzia, A; Coleman, D C; O'Donnell, M J; Dowling, A H; O'Sullivan, M

    2011-02-01

This study investigated the antibacterial efficacy and effect of 0.55% ortho-phthalaldehyde (Cidex OPA®) and 0.5% sodium hypochlorite (NaOCl) on the dimensional accuracy and surface quality of gypsum casts retrieved from an irreversible hydrocolloid impression material. A simulated clinical cast and technique was developed to compare the dimensional accuracy and surface quality changes of the test gypsum casts with controls. Dimensional accuracy measurements were completed between fixed points using a travelling microscope under low angle illumination at a magnification of ×3. Surface quality changes of "smooth" and "rough" areas on the cast were evaluated by means of optical profilometry. The efficacy of the disinfection procedures against Pseudomonas aeruginosa was evaluated by determining the number of colony forming units (cfu) recovered after disinfection of alginate discs inoculated with 1×10⁶ cfu for defined intervals. The dimensional accuracy of the gypsum casts was not significantly affected by the disinfection protocols. Neither disinfectant solution nor immersion time had an effect on the surface roughness of the "smooth" area on the cast, however, a significant increase in surface roughness was observed with increasing immersion time for the "rough" surface. Complete elimination of viable Pseudomonas aeruginosa cells from alginate discs was obtained after 30 and 120 s immersion in Cidex OPA® and NaOCl, respectively. Immersion of irreversible hydrocolloid impressions in Cidex OPA® for 30 s was proved to be the most effective disinfection procedure. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Diagnosing Autism Spectrum Disorder from Brain Resting-State Functional Connectivity Patterns Using a Deep Neural Network with a Novel Feature Selection Method.

    PubMed

    Guo, Xinyu; Dominick, Kelli C; Minai, Ali A; Li, Hailong; Erickson, Craig A; Lu, Long J

    2017-01-01

    The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data are commonly applied to study neuropsychiatric conditions such as autism spectrum disorder (ASD) by using different machine learning models. Recent studies indicate that both hyper- and hypo- aberrant ASD-associated FCs were widely distributed throughout the entire brain rather than only in some specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method is able to help the DNN generate low dimensional high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For the comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both of them are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is generated by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., two sample t -test and elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method based on the DNN is also developed, and used to identify 32 FCs related to ASD. 
These FCs come from or cross different pre-defined brain networks including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum networks. Thirteen of them are statistically significant between the ASD and TD groups (two-sample t-test, p < 0.05) while 19 of them are not. The relationship between the statistically significant FCs and the corresponding ASD behavioral symptoms is discussed based on the literature and clinicians' expert knowledge. A potential reason why the other 19 FCs are not statistically significant is also provided.
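The two-sample t-test baseline that the abstract compares against ranks each FC feature by its group-difference statistic and keeps the top-scoring ones. A minimal sketch of that baseline (not the DNN-FS method itself; names are illustrative):

```python
import numpy as np

def ttest_feature_scores(X_a, X_b):
    """Welch two-sample t statistic per feature (columns) for
    group A samples X_a (n_a, d) vs. group B samples X_b (n_b, d)."""
    ma, mb = X_a.mean(axis=0), X_b.mean(axis=0)
    va, vb = X_a.var(axis=0, ddof=1), X_b.var(axis=0, ddof=1)
    na, nb = len(X_a), len(X_b)
    return np.abs(ma - mb) / np.sqrt(va / na + vb / nb)

def select_top_k(X_a, X_b, k):
    """Indices of the k most discriminative features."""
    return np.argsort(ttest_feature_scores(X_a, X_b))[::-1][:k]
```

Unlike this univariate filter, the autoencoder-based selection described above can capture multivariate structure across FCs, which is one reason the DNN-FS approach outperforms it in the reported results.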

  6. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  7. Influence of Spatial Resolution in Three-dimensional Cine Phase Contrast Magnetic Resonance Imaging on the Accuracy of Hemodynamic Analysis

    PubMed Central

    Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi

    2017-01-01

    Introduction: We aim to elucidate the effect of spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of the blood flow analysis, and examine the optimal setting for spatial resolution using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for information of shape) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined measurement accuracy. Results: When the acrylic pipe was the size of the thoracicoabdominal or cervical artery and the ratio of pixel size for the pipe was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure maximum velocity of the 3-mm pipe, which was the size of an intracranial major artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: Flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm-diameter that is equivalent to major intracranial arteries is poor for maximum velocity, but it is relatively good for spatially-averaged velocity. PMID:28132996
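The error rates in the phantom study above compare MR-derived velocities against a reference mean velocity obtained from the flowmeter's volume flow rate and the pipe cross-section. A hedged sketch of that comparison (function names are illustrative):

```python
import math

def reference_velocity(volume_flow, diameter):
    """Mean velocity (m/s) from volumetric flow rate (m^3/s)
    and inner pipe diameter (m), assuming a full circular cross-section."""
    area = math.pi * (diameter / 2.0) ** 2
    return volume_flow / area

def error_rate(v_measured, v_reference):
    """Percentage error of the MR-derived velocity."""
    return (v_measured - v_reference) / v_reference * 100.0
```

For example, for the 15 mm pipe, a flowmeter reading corresponding to a 0.1 m/s mean velocity against an MR measurement of 0.09 m/s gives a -10% error.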

  8. Three-dimensional virtual navigation versus conventional image guidance: A randomized controlled trial.

    PubMed

    Dixon, Benjamin J; Chan, Harley; Daly, Michael J; Qiu, Jimmy; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C

    2016-07-01

    Providing image guidance in a 3-dimensional (3D) format, visually more in keeping with the operative field, could potentially reduce workload and lead to faster and more accurate navigation. We wished to assess a 3D virtual-view surgical navigation prototype in comparison to a traditional 2D system. Thirty-seven otolaryngology surgeons and trainees completed a randomized crossover navigation exercise on a cadaver model. Each subject identified three sinonasal landmarks with 3D virtual (3DV) image guidance and three landmarks with conventional cross-sectional computed tomography (CT) image guidance. Subjects were randomized with regard to which side and display type was tested initially. Accuracy, task completion time, and task workload were recorded. Display type did not influence accuracy (P > 0.2) or efficiency (P > 0.3) for any of the six landmarks investigated. Pooled landmark data revealed a trend of improved accuracy in the 3DV group by 0.44 millimeters (95% confidence interval [0.00-0.88]). High-volume surgeons were significantly faster (P < 0.01) and had reduced workload scores in all domains (P < 0.01), but they were no more accurate (P > 0.28). Real-time 3D image guidance did not influence accuracy, efficiency, or task workload when compared to conventional triplanar image guidance. The subtle pooled accuracy advantage for the 3DV view is unlikely to be of clinical significance. Experience level was strongly correlated to task completion time and workload but did not influence accuracy. N/A. Laryngoscope, 126:1510-1515, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  9. Use of dynamic 3-dimensional transvaginal and transrectal ultrasonography to assess posterior pelvic floor dysfunction related to obstructed defecation.

    PubMed

    Murad-Regadas, Sthela M; Regadas Filho, Francisco Sergio Pinheiro; Regadas, Francisco Sergio Pinheiro; Rodrigues, Lusmar Veras; de J R Pereira, Jacyara; da S Fernandes, Graziela Olivia; Dealcanfreitas, Iris Daiana; Mendonca Filho, Jose Jader

    2014-02-01

    New ultrasound techniques may complement current diagnostic tools, and combined techniques may help to overcome the limitations of individual techniques for the diagnosis of anorectal dysfunction. A high degree of agreement has been demonstrated between echodefecography (dynamic 3-dimensional anorectal ultrasonography) and conventional defecography. Our aim was to evaluate the ability of a combined approach consisting of dynamic 3-dimensional transvaginal and transrectal ultrasonography by using a 3-dimensional biplane endoprobe to assess posterior pelvic floor dysfunctions related to obstructed defecation syndrome in comparison with echodefecography. This was a prospective, observational cohort study conducted at a tertiary-care hospital. Consecutive female patients with symptoms of obstructed defecation were eligible. Each patient underwent assessment of posterior pelvic floor dysfunctions with a combination of dynamic 3-dimensional transvaginal and transrectal ultrasonography by using a biplane transducer and with echodefecography. Kappa (κ) was calculated as an index of agreement between the techniques. Diagnostic accuracy (sensitivity, specificity, and positive and negative predictive values) of the combined technique in detection of posterior dysfunctions was assessed with echodefecography as the standard for comparison. A total of 33 women were evaluated. Substantial agreement was observed regarding normal relaxation and anismus. In detecting the absence or presence of rectocele, the 2 methods agreed in all cases. Near-perfect agreement was found for rectocele grade I, grade II, and grade III. Perfect agreement was found for entero/sigmoidocele, with near-perfect agreement for rectal intussusception. Using echodefecography as the standard for comparison, we found high diagnostic accuracy of transvaginal and transrectal ultrasonography in the detection of posterior dysfunctions. 
This combined technique should be compared with other dynamic techniques and validated with conventional defecography. Dynamic 3-dimensional transvaginal and transrectal ultrasonography is a simple and fast ultrasound technique that shows strong agreement with echodefecography and may be used as an alternative method to assess patients with obstructed defecation syndrome.
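Agreement indices such as the κ reported above correct the observed agreement between two methods for agreement expected by chance. A minimal Cohen's kappa sketch for a square contingency table (illustrative, not the study's statistical code):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table, where table[i][j]
    counts cases rated category i by method 1 and j by method 2."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_chance) / (1.0 - p_chance)
```

Values near 1 correspond to the "near-perfect" agreement described for rectocele grading, while values around 0.6-0.8 correspond to the "substantial" agreement reported for relaxation and anismus.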

  10. The impact of the fabrication method on the three-dimensional accuracy of an implant surgery template.

    PubMed

    Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim

    2017-06-01

    The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, computer-aided design and computer-aided manufacturing (CAD/CAM) work-flow provides an opportunity to engineer implant drilling templates via a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on the three-dimensional accuracy. The same virtual planning based on a scanned plaster model was used to fabricate a conventional thermo-formed and a three-dimensional printed surgical guide for each of 13 patients (single tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbh, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between both surgical guides was evaluated. The mean discrepancy of the angle was 3.479° (standard deviation, 1.904°) based on data from 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis at 0.594 mm. The mean deviation of the Euclidian distance, dxyz, was 0.864 mm. Although the two different fabrication methods delivered statistically significantly different templates, the deviations ranged within a decimillimeter span. Both methods are appropriate for clinical use. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
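The dxyz reported above is the Euclidean distance between corresponding implant-sleeve positions after the two template scans are superimposed; the per-axis deviations combine as follows (illustrative sketch):

```python
import math

def euclidean_deviation(p, q):
    """Euclidean distance dxyz between corresponding 3-D points
    (e.g. sleeve positions on the two surgical templates)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```

Note that individual axis deviations (such as the 0.594 mm reported for the Z-axis) are always bounded above by the combined dxyz (0.864 mm mean in the study).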

  11. Effect of the High-Pitch Mode in Dual-Source Computed Tomography on the Accuracy of Three-Dimensional Volumetry of Solid Pulmonary Nodules: A Phantom Study

    PubMed Central

    Hwang, Sung Ho; Ham, Soo-Youn; Kang, Eun-Young; Lee, Ki Yeol

    2015-01-01

Objective To evaluate the influence of high-pitch mode (HPM) in dual-source computed tomography (DSCT) on the accuracy of three-dimensional (3D) volumetry for solid pulmonary nodules. Materials and Methods A lung phantom implanted with 45 solid pulmonary nodules (n = 15 for each of 4-mm, 6-mm, and 8-mm in diameter) was scanned twice, first in conventional pitch mode (CPM) and then in HPM using DSCT. The relative percentage volume errors (RPEs) of 3D volumetry were compared between the HPM and CPM. In addition, the intermode volume variability (IVV) of 3D volumetry was calculated. Results In the measurement of the 6-mm and 8-mm nodules, there was no significant difference in RPE (p > 0.05, respectively) between the CPM and HPM (IVVs of 1.2 ± 0.9%, and 1.7 ± 1.5%, respectively). In the measurement of the 4-mm nodules, the mean RPE in the HPM (35.1 ± 7.4%) was significantly greater (p < 0.01) than that in the CPM (18.4 ± 5.3%), with an IVV of 13.1 ± 6.6%. However, the IVVs were in an acceptable range (< 25%), regardless of nodule size. Conclusion The accuracy of 3D volumetry with HPM for solid pulmonary nodules is comparable to that with CPM. However, the use of HPM may adversely affect the accuracy of 3D volumetry for smaller nodules (< 5 mm in diameter). PMID:25995695
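The two metrics in this study are commonly defined as the percentage volume error against the known phantom volume (RPE) and the variability between the two scan modes relative to their mean (IVV). A sketch under those assumed definitions (the paper's exact formulas may differ in detail):

```python
def rpe(v_measured, v_true):
    """Relative percentage volume error (%) against the known
    phantom nodule volume."""
    return abs(v_measured - v_true) / v_true * 100.0

def ivv(v_mode1, v_mode2):
    """Intermode volume variability (%): absolute difference between
    the two scan modes' measurements over their mean."""
    return abs(v_mode1 - v_mode2) / ((v_mode1 + v_mode2) / 2.0) * 100.0
```

With these definitions, a nodule measured at 110 mm³ against a true 100 mm³ has a 10% RPE, and two modes reading 110 and 100 mm³ have an IVV of about 9.5%, comfortably within the < 25% acceptability threshold cited above.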

  12. High-Speed Quantum Key Distribution Using Photonic Integrated Circuits

    DTIC Science & Technology

    2013-01-01

protocol [14] that uses energy-time entanglement of pairs of photons. We are employing the QPIC architecture to implement a novel high-dimensional disper...continuous Hilbert spaces using measures of the covariance matrix. Although we focus the discussion on a scheme employing entangled photon pairs...is the probability that parameter estimation fails [20]. The parameter ε̄ accounts for the accuracy of estimating the smooth min-entropy, which

  13. Comparison of Dimensional Accuracy between Open-Tray and Closed-Tray Implant Impression Technique in 15° Angled Implants

    PubMed Central

    Balouch, F; Jalalian, E; Nikkheslat, M; Ghavamian, R; Toopchi, Sh; Jallalian, F; Jalalian, S

    2013-01-01

Statement of Problem: Various impression techniques have different effects on the accuracy of final cast dimensions. Meanwhile, there are some controversies about the best technique. Purpose: This study was performed to compare two implant impression methods (open tray and closed tray) on 15-degree angled implants. Materials and Method: In this experimental study, a steel model 8 cm in diameter and 3 cm in height was produced, with three holes devised inside to stabilize three implants. The central implant was straight and the other two implants were angled at 15°. The two angled implants were 5 cm from each other and 3.5 cm from the central implant. High-strength dental stone (type IV) was used for the main casts. Impression trays were filled with polyether, and then the two impression techniques (open tray and closed tray) were compared. To evaluate the positions of the implants, each cast was analyzed by a CMM device in three dimensions (x, y, z). Differences between the measurements obtained from the final casts and the laboratory model were analyzed using the t-test. Results: The obtained results indicated that the closed-tray impression technique differed significantly in dimensional accuracy from the open-tray method. Dimensional changes were 129 ± 37 μm and 143.5 ± 43.67 μm for closed tray and open tray, while the coefficients of variation for closed tray and open tray were 27.2% and 30.4%, respectively. Conclusion: The closed-tray impression technique showed smaller dimensional changes than the open-tray method, so this study suggests that the closed-tray impression technique is more accurate. PMID:24724130
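The coefficient of variation quoted above is simply the standard deviation expressed as a percentage of the mean, which makes the two techniques' scatter comparable despite their different mean dimensional changes:

```python
def coefficient_of_variation(sd, mean):
    """Coefficient of variation (%) = standard deviation / mean * 100."""
    return sd / mean * 100.0
```

Applied to the reported open-tray figures (43.67 over 143.5), this reproduces the ~30.4% quoted in the abstract.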

  14. Model based LV-reconstruction in bi-plane x-ray angiography

    NASA Astrophysics Data System (ADS)

    Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz

    2005-04-01

    Interventional x-ray angiography is state of the art in the diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast-enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric superellipses are deformed until their projection profiles optimally fit the measured ventricular projections. Deformation is controlled by a simplex optimization procedure, and the resulting optimized parameter set serves as the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from the stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show conformance with the provided projection data, and the high convergence speed makes the algorithm useful for clinical application. Fully three-dimensional reconstruction of the left ventricle has high potential to improve clinical findings in interventional cardiology.
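    The forward model behind this fitting loop is the parametric superellipse itself. A minimal sketch of generating a superellipse contour and one of its 1-D projection measures (the parameter values are hypothetical; the paper's simplex optimizer would adjust them until such profiles match the measured angiograms):

```python
import math

def superellipse(a, b, n, num=100):
    """Points on the parametric superellipse |x/a|^n + |y/b|^n = 1."""
    pts = []
    for k in range(num):
        t = 2 * math.pi * k / num
        c, s = math.cos(t), math.sin(t)
        x = a * math.copysign(abs(c) ** (2.0 / n), c)
        y = b * math.copysign(abs(s) ** (2.0 / n), s)
        pts.append((x, y))
    return pts

def projection_width(pts):
    """Width of the contour projected onto the x axis (a crude 1-D profile)."""
    xs = [p[0] for p in pts]
    return max(xs) - min(xs)

# Hypothetical slice parameters (mm): semi-axes a, b and squareness n
contour = superellipse(a=30.0, b=45.0, n=2.5, num=360)
width = projection_width(contour)
```

    In the paper's scheme, a simplex (Nelder-Mead-style) search would perturb a, b and n, regenerate such contours, and compare their projections against the bi-planar angiogram profiles.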

  15. Estimation of the velocity and trajectory of three-dimensional reaching movements from non-invasive magnetoencephalography signals

    NASA Astrophysics Data System (ADS)

    Yeom, Hong Gi; Sic Kim, June; Chung, Chun Kee

    2013-04-01

    Objective. Studies on non-invasive brain-machine interfaces that control prosthetic devices via movement intentions are at a very early stage. Here, we aimed to estimate three-dimensional arm movements from magnetoencephalography (MEG) signals with high accuracy. Approach. Whole-head MEG signals were acquired during three-dimensional reaching movements (center-out paradigm). For movement decoding, we selected 68 MEG channels in motor-related areas, which were band-pass filtered into four subfrequency bands (0.5-8, 9-22, 25-40 and 57-97 Hz). After filtering, the signals were resampled, and the 11 data points preceding the current data point were used as features for estimating velocity. Multiple linear regression was used to estimate movement velocities, and movement trajectories were calculated by integrating the estimated velocities. We evaluated our results by calculating correlation coefficients (r) between real and estimated velocities. Main results. Movement velocities could be estimated from the low-frequency MEG signals (0.5-8 Hz) with significant and considerably high accuracy (p < 0.001, mean r > 0.7). We also showed that preceding (60-140 ms) MEG signals are important for estimating current movement velocities and that brain-signal intervals of 200-300 ms are sufficient for movement estimation. Significance. These results imply that disabled people will be able to control prosthetic devices without surgery in the near future.
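    The decoding pipeline (lagged samples as features, multiple linear regression for velocity) can be sketched in pure Python. The signal, lag count and weights below are synthetic stand-ins, not the paper's 68-channel MEG data:

```python
import random

def lagged_features(signal, lags):
    """Rows [signal[t-lags+1], ..., signal[t]] for each valid time t."""
    return [signal[t - lags + 1: t + 1] for t in range(lags - 1, len(signal))]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_linear(X, y):
    """Least-squares weights via the normal equations X^T X w = X^T y."""
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    return solve(XtX, Xty)

# Synthetic "channel": velocity is a fixed linear combination of 3 lagged samples
random.seed(0)
sig = [random.gauss(0, 1) for _ in range(200)]
X = lagged_features(sig, lags=3)
true_w = [0.2, -0.5, 1.0]
y = [sum(w * x for w, x in zip(true_w, row)) for row in X]
w_hat = fit_linear(X, y)
```

    Integrating the estimated velocities (a cumulative sum times the sampling interval) then yields the trajectory, as in the paper.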

  16. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description is given of the enrichment and coarsening procedures, and comparisons with alternative results and experimental data are presented to assess the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
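    The enrichment/coarsening idea can be illustrated in one dimension: insert a midpoint where the solution jump across a cell is large, and drop a point where the jumps on both sides are small. A toy sketch (the thresholds and the tanh "shock" profile are invented for illustration, not taken from the Euler code):

```python
import math

def adapt_mesh(xs, f, enrich_tol=0.5, coarsen_tol=0.05):
    """One pass of gradient-based 1-D mesh adaption.

    Inserts a midpoint where the jump in f across a cell exceeds enrich_tol
    (enrichment); drops a point whose jumps on both sides are below
    coarsen_tol (coarsening). Endpoints are always kept."""
    # Enrichment pass
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > enrich_tol:
            out.append(0.5 * (a + b))
        out.append(b)
    # Coarsening pass
    kept = [out[0]]
    for i in range(1, len(out) - 1):
        flat_left = abs(f(out[i]) - f(kept[-1])) < coarsen_tol
        flat_right = abs(f(out[i + 1]) - f(out[i])) < coarsen_tol
        if flat_left and flat_right:
            continue
        kept.append(out[i])
    kept.append(out[-1])
    return kept

xs = [i / 10 for i in range(11)]
shock = lambda x: math.tanh(20 * (x - 0.5))   # steep gradient near x = 0.5
adapted = adapt_mesh(xs, shock)
```

    Points cluster around x = 0.5 where the profile is steep, while points in the flat regions are removed, mirroring how the paper concentrates resolution at shock waves.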

  18. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive, to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practice. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches to initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
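    The ADI scheme mentioned above owes its speed to the fact that each half-step reduces to tridiagonal solves, one per grid row or column. A self-contained sketch of the classic Thomas algorithm applied to one implicit 1-D diffusion sweep (the coefficients are illustrative, not Kiva's):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One implicit diffusion sweep: (1 + 2r) u_i - r u_{i-1} - r u_{i+1} = u_i^old
r = 0.5
n = 5
a = [-r] * n
b = [1 + 2 * r] * n
c = [-r] * n
u_old = [0.0, 0.0, 1.0, 0.0, 0.0]   # initial "hot spot"
u_new = thomas(a, b, c, u_old)
```

    An ADI step for a 2-D grid would apply such a sweep implicitly along every row for half a time step, then along every column, which is why it stays cheap and stable.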

  19. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In the assembly of miniature devices, the position and orientation of the parts to be assembled should be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction with a visual method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed to measure the three-dimensionally distributed assembly errors. High-resolution CCD cameras and precision stages with high position repeatability were integrated to realize high-precision measurement in a large work space. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.

  20. Edge technique lidar for high accuracy, high spatial resolution wind measurement in the Planetary Boundary Layer

    NASA Technical Reports Server (NTRS)

    Korb, C. L.; Gentry, Bruce M.

    1995-01-01

    The goal of the Army Research Office (ARO) Geosciences Program is to measure the three-dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with 50-meter spatial resolution and with measurement accuracies on the order of 20 cm/sec. The objective of this work is to develop and evaluate a high-vertical-resolution lidar experiment using the edge technique for high accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high resolution spectral filter, so that small Doppler shifts produce large changes in the measured signal. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement, in terms of both vertical resolution and accuracy, and which will provide a unique capability for atmospheric wind studies.
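    The core of the edge technique is easy to state: with the laser parked on the steep slope of the filter, a small Doppler shift dν produces a signal change dI ≈ T′(ν)·dν, which can be inverted for dν. A toy sketch with a Gaussian filter (the filter shape, operating point and shift are all hypothetical, in arbitrary frequency units):

```python
import math

def edge_filter(nu, center=0.0, width=1.0):
    """Gaussian spectral filter; the laser sits on its steep slope."""
    return math.exp(-((nu - center) / width) ** 2)

def estimate_doppler(i_ref, i_meas, nu_op, h=1e-6):
    """Invert the small-shift relation dI = T'(nu_op) * dnu.

    The slope is taken by central difference at the operating point."""
    slope = (edge_filter(nu_op + h) - edge_filter(nu_op - h)) / (2 * h)
    return (i_meas - i_ref) / slope

nu_op = 0.7          # operating point on the filter edge
true_shift = 0.01    # Doppler shift to recover
i_ref = edge_filter(nu_op)
i_meas = edge_filter(nu_op + true_shift)
shift_hat = estimate_doppler(i_ref, i_meas, nu_op)
```

    Because the slope is steep, a tiny frequency shift maps to a large, easily measured intensity change; the differential-frequency trick in the abstract removes the common drift of laser and filter from i_ref and i_meas.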

  1. Second order symmetry-preserving conservative Lagrangian scheme for compressible Euler equations in two-dimensional cylindrical coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Juan, E-mail: cheng_juan@iapcm.ac.cn; Shu, Chi-Wang, E-mail: shu@dam.brown.edu

    In applications such as astrophysics and inertial confinement fusion, there are many three-dimensional cylindrical-symmetric multi-material problems which are usually simulated by Lagrangian schemes in two-dimensional cylindrical coordinates. For this type of simulation, a critical issue for the schemes is to keep spherical symmetry in the cylindrical coordinate system if the original physical problem has this symmetry. In the past decades, several Lagrangian schemes with such a symmetry property have been developed, but all of them are only first order accurate. In this paper, we develop a second order cell-centered Lagrangian scheme for solving compressible Euler equations in cylindrical coordinates, based on control volume discretizations, which is designed to have uniformly second order accuracy and the capability to preserve one-dimensional spherical symmetry in a two-dimensional cylindrical geometry when computed on an equal-angle-zoned initial grid. The scheme maintains several good properties such as conservation of mass, momentum and total energy, and the geometric conservation law. Several two-dimensional numerical examples in cylindrical coordinates are presented to demonstrate the good performance of the scheme in terms of accuracy, symmetry, non-oscillation and robustness. The advantage of higher order accuracy is demonstrated in these examples.

  2. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
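    The building block of such a scheme is the cubic spline itself: second derivatives obtained from a tridiagonal system, followed by piecewise-cubic evaluation. A minimal natural-spline sketch (illustrative only, not the paper's spline-alternating-direction-implicit formulation):

```python
def spline_coeffs(xs, ys):
    """Second derivatives M_i of the natural cubic spline through (xs, ys)."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for the second derivatives (M_0 = M_{n-1} = 0)
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i] = h[i - 1]
        b[i] = 2 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n
    M[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]
    return M

def spline_eval(xs, ys, M, x):
    """Evaluate the natural cubic spline at x."""
    i = next((k for k in range(len(xs) - 1) if x <= xs[k + 1]), len(xs) - 2)
    h = xs[i + 1] - xs[i]
    t0, t1 = xs[i + 1] - x, x - xs[i]
    return (M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6 * h) \
        + (ys[i] / h - M[i] * h / 6) * t0 + (ys[i + 1] / h - M[i + 1] * h / 6) * t1

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * x + 1 for x in xs]          # linear data: curvature should vanish
M = spline_coeffs(xs, ys)
val = spline_eval(xs, ys, M, 0.25)
```

    A spline-ADI scheme as in the paper would reuse this machinery, sweeping alternately in each coordinate direction and benefiting from the spline's accurate derivative treatment even on nonuniform meshes.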

  3. Thermal model development and validation for rapid filling of high pressure hydrogen tanks

    DOE PAGES

    Johnson, Terry A.; Bozinoski, Radoslav; Ye, Jianjun; ...

    2015-06-30

    This paper describes the development of thermal models for the filling of high pressure hydrogen tanks, with experimental validation. Two models are presented; the first uses a one-dimensional, transient, network flow analysis code developed at Sandia National Labs, and the second uses the commercially available CFD analysis tool Fluent. These models were developed to help assess the safety of Type IV high pressure hydrogen tanks during the filling process. The primary concern for these tanks is the increased susceptibility to fatigue failure of the liner caused by the fill process, so a thorough understanding of the temperature changes of the hydrogen gas and the heat transfer to the tank walls is essential. The effects of initial pressure, filling time, and fill procedure were investigated to quantify the temperature change and verify the accuracy of the models. In this paper we show that the predictions of mass averaged gas temperature for the one- and three-dimensional models compare well with the experiment and both can be used to make predictions of final mass delivery. However, due to buoyancy and other three-dimensional effects, the maximum wall temperature cannot be predicted using one-dimensional tools alone, which means that a three-dimensional analysis is required for a safety assessment of the system.
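    A zero-dimensional version of the gas-side energy balance shows why the gas heats during a fill: the incoming enthalpy (cp·T_in) exceeds the internal energy it adds to the tank (cv·T), while wall heat transfer moderates the rise. Everything below (the heat-transfer coefficient, flow rate and timings) is a made-up illustration, not the Sandia model:

```python
def fill_tank(mdot, T_in, T_wall, t_end, dt=0.01,
              cv=10.16e3, cp=14.3e3, hA=5.0, m0=0.1, T0=293.0):
    """Lumped energy balance d(m*cv*T)/dt = mdot*cp*T_in - hA*(T - T_wall),
    integrated with explicit Euler. cv/cp are ideal-gas hydrogen values
    (J/kg/K); hA is a hypothetical overall heat-transfer coefficient (W/K)."""
    m, T = m0, T0
    for _ in range(int(round(t_end / dt))):
        U = m * cv * T + dt * (mdot * cp * T_in - hA * (T - T_wall))
        m += dt * mdot
        T = U / (m * cv)
    return m, T

# Hypothetical 3-minute fill at 5 g/s into a warm tank
m_end, T_end = fill_tank(mdot=0.005, T_in=293.0, T_wall=293.0, t_end=180.0)
```

    The gas temperature climbs above the supply temperature but stays below the adiabatic limit (cp/cv)·T_in, which is the qualitative behavior the paper's 1-D network model captures; resolving the peak wall temperature, per the abstract, needs the 3-D model.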

  4. Accuracy of templates for navigated implantation made by rapid prototyping with DICOM datasets of cone beam computer tomography (CBCT).

    PubMed

    Weitz, Jochen; Deppe, Herbert; Stopp, Sebastian; Lueth, Tim; Mueller, Steffen; Hohlweg-Majert, Bettina

    2011-12-01

    The aim of this study was to evaluate the accuracy of surgical template-aided implant placement produced by rapid prototyping using a DICOM dataset from cone beam computed tomography (CBCT). On the basis of CBCT scans (Sirona Galileos®), a total of ten models were produced using a rapid-prototyping three-dimensional printer. For the same patients, impressions were taken to compare the fitting accuracy of both methods. From the models made by impression, templates were produced, and their accuracy was compared with and analyzed against the rapid-prototyping models. Whereas templates made by the conventional procedure had excellent accuracy, the fitting accuracy of those produced from DICOM datasets was not sufficient. Deviations ranged between 2.0 and 3.5 mm, and after modification of the models between 1.4 and 3.1 mm. The findings of this study suggest that the low-dose Sirona Galileos® DICOM dataset shows high deviation, which is not usable for accurate surgical transfer, for example in implant surgery.

  5. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for the development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that, for the segmentation and printing process used, measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and by measurements made on the 3D rendered vertebra.

  6. Utilisation of three-dimensional printed heart models for operative planning of complex congenital heart defects.

    PubMed

    Olejník, Peter; Nosal, Matej; Havran, Tomas; Furdova, Adriana; Cizmar, Maros; Slabej, Michal; Thurzo, Andrej; Vitovic, Pavol; Klvac, Martin; Acel, Tibor; Masura, Jozef

    2017-01-01

    To evaluate the accuracy of the three-dimensional (3D) printing of cardiovascular structures. To explore whether utilisation of 3D printed heart replicas can improve surgical and catheter interventional planning in patients with complex congenital heart defects. Between December 2014 and November 2015 we fabricated eight cardiovascular models based on computed tomography data in patients with complex spatial anatomical relationships of cardiovascular structures. A Bland-Altman analysis was used to assess the accuracy of 3D printing by comparing dimension measurements at analogous anatomical locations between the printed models and digital imagery data, as well as between printed models and in vivo surgical findings. The contribution of 3D printed heart models to perioperative planning improvement was evaluated in the four most representative patients. Bland-Altman analysis confirmed the high accuracy of 3D cardiovascular printing. Each printed model offered an improved spatial anatomical orientation of cardiovascular structures. Current 3D printers can produce authentic copies of patients' cardiovascular systems from computed tomography data. The use of 3D printed models can facilitate surgical or catheter interventional procedures in patients with complex congenital heart defects due to better preoperative planning and intraoperative orientation.

  7. One-dimensional thermal evolution calculation based on a mixing length theory: Application to Saturnian icy satellites

    NASA Astrophysics Data System (ADS)

    Kamata, S.

    2017-12-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.

  8. A comparison of CT-based navigation techniques for minimally invasive lumbar pedicle screw placement.

    PubMed

    Wood, Martin; Mannion, Richard

    2011-02-01

    A comparison of 2 surgical techniques. To determine the relative accuracy of minimally invasive lumbar pedicle screw placement using 2 different CT-based image-guided techniques. Three-dimensional intraoperative fluoroscopy systems have recently become available that provide the ability to use CT-quality images for navigation during image-guided minimally invasive spinal surgery. However, the cost of this equipment may negate any potential benefit in navigational accuracy. We therefore assessed the accuracy of pedicle screw placement using an intraoperative 3-dimensional fluoroscope for guidance compared with a technique using preoperative CT images merged with intraoperative 2-dimensional fluoroscopy. Sixty-seven patients undergoing minimally invasive placement of lumbar pedicle screws (296 screws) using a navigated, image-guided technique were studied, and the accuracy of pedicle screw placement was assessed. Electromyography (EMG) monitoring of lumbar nerve roots was used in all. Group 1: 24 patients in whom a preoperative CT scan was merged with intraoperative 2-dimensional fluoroscopy images on the image-guidance system. Group 2: 43 patients in whom intraoperative 3-dimensional fluoroscopy images were used as the source for the image-guidance system. The frequencies of pedicle breach and EMG warnings (indicating potentially unsafe screw placement) in each group were recorded. The rate of pedicle screw misplacement was 6.4% in group 1 vs 1.6% in group 2 (P=0.03). There were no cases of neurologic injury from suboptimal placement of screws. Additionally, the incidence of EMG warnings was significantly lower in group 2 (3.7% vs. 10%, P=0.03). The use of an intraoperative 3-dimensional fluoroscopy system with an image-guidance system results in greater accuracy of pedicle screw placement than the use of preoperative CT scans, although potentially dangerous placement of pedicle screws can be prevented by the use of EMG monitoring of lumbar nerve roots.

  9. Application of Fuzzy c-Means and Joint-Feature-Clustering to Detect Redundancies of Image-Features in Drug Combinations Studies of Breast Cancer

    NASA Astrophysics Data System (ADS)

    Brandl, Miriam B.; Beck, Dominik; Pham, Tuan D.

    2011-06-01

    The high dimensionality of image-based datasets can be a drawback for classification accuracy. In this study, we propose the application of fuzzy c-means clustering, cluster validity indices and the notion of a joint-feature-clustering matrix to find redundancies among image features. The introduced matrix indicates how frequently features are grouped in a mutual cluster. The resulting information can be used to find data-derived feature prototypes with a common biological meaning, to reduce data storage and computation times, and to improve classification accuracy.
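    A minimal sketch of the joint-feature-clustering matrix idea: run the clustering several times, then count how often each pair of features lands in the same cluster. The labels below are invented for illustration; the study would derive them from fuzzy c-means memberships:

```python
def joint_clustering_matrix(runs):
    """Count how often each pair of features shares a cluster.

    `runs` is a list of clustering results; each result assigns a cluster
    label to every feature, with the same feature order in every run."""
    n = len(runs[0])
    J = [[0] * n for _ in range(n)]
    for labels in runs:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    J[i][j] += 1
    return J

# Hypothetical labels from three clustering runs over five image features
runs = [
    [0, 0, 1, 1, 2],
    [1, 1, 0, 0, 2],
    [0, 0, 0, 1, 1],
]
J = joint_clustering_matrix(runs)
```

    Feature pairs whose count approaches the number of runs (here, features 0 and 1) are candidates for redundancy, so one of them can stand in as a prototype for the other.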

  10. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.

  11. Recollection can be Weak and Familiarity can be Strong

    PubMed Central

    Ingram, Katherine M.; Mickes, Laura; Wixted, John T.

    2012-01-01

    The Remember/Know procedure is widely used to investigate recollection and familiarity in recognition memory, but almost all of the results obtained using that procedure can be readily accommodated by a unidimensional model based on signal-detection theory. The unidimensional model holds that Remember judgments reflect strong memories (associated with high confidence, high accuracy, and fast reaction times), whereas Know judgments reflect weaker memories (associated with lower confidence, lower accuracy, and slower reaction times). Although this is invariably true on average, a new two-dimensional account (the Continuous Dual-Process model) suggests that Remember judgments made with low confidence should be associated with lower old/new accuracy, but higher source accuracy, than Know judgments made with high confidence. We tested this prediction – and found evidence to support it – using a modified Remember/Know procedure in which participants were first asked to indicate a degree of recollection-based or familiarity-based confidence for each word presented on a recognition test and were then asked to recollect the color (red or blue) and screen location (top or bottom) associated with the word at study. For familiarity-based decisions, old/new accuracy increased with old/new confidence, but source accuracy did not (suggesting that stronger old/new memory was supported by higher degrees of familiarity). For recollection-based decisions, both old/new accuracy and source accuracy increased with old/new confidence (suggesting that stronger old/new memory was supported by higher degrees of recollection). These findings suggest that recollection and familiarity are continuous processes and that participants can indicate which process mainly contributed to their recognition decisions. PMID:21967320

  12. Third-order dissipative hydrodynamics from the entropy principle

    NASA Astrophysics Data System (ADS)

    El, Andrej; Xu, Zhe; Greiner, Carsten

    2010-06-01

    We review the entropy-based derivation of third-order hydrodynamic equations and compare their solutions in one-dimensional boost-invariant geometry with calculations by the partonic cascade BAMPS. We demonstrate that Grad's approximation, which underlies the derivation of both the Israel-Stewart and third-order equations, describes the transverse spectra from BAMPS with high accuracy. At the same time, solutions of the third-order equations are much closer to the BAMPS results than solutions of the Israel-Stewart equations. Introducing a resummation scheme for all higher-order corrections to the one-dimensional hydrodynamic equation, we demonstrate the importance of higher-order terms when the Knudsen number is large.

  13. Computing interior eigenvalues of nonsymmetric matrices: application to three-dimensional metamaterial composites.

    PubMed

    Terao, Takamichi

    2010-08-01

    We propose a numerical method to calculate interior eigenvalues and corresponding eigenvectors of nonsymmetric matrices. Based on a subspace projection technique onto an expanded Ritz subspace, it becomes possible to obtain eigenvalues and eigenvectors with sufficiently high precision. This method overcomes the difficulties of the traditional nonsymmetric Lanczos algorithm and improves the accuracy of the obtained interior eigenvalues and eigenvectors. Using this algorithm, we investigate three-dimensional metamaterial composites consisting of positive and negative refractive index materials, and we demonstrate that the finite-difference frequency-domain algorithm is applicable to the analysis of these metamaterial composites.

  14. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  15. Solution methods for one-dimensional viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, John M.; Simitses, George J.

    1987-01-01

    A recently developed differential methodology for solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis are discussed.

  16. Prospective randomized comparison of rotational angiography with three-dimensional reconstruction and computed tomography merged with electro-anatomical mapping: a two center atrial fibrillation ablation study.

    PubMed

    Anand, Rishi; Gorev, Maxim V; Poghosyan, Hermine; Pothier, Lindsay; Matkins, John; Kotler, Gregory; Moroz, Sarah; Armstrong, James; Nemtsov, Sergei V; Orlov, Michael V

    2016-08-01

    To compare the efficacy and accuracy of rotational angiography with three-dimensional reconstruction (3DATG) image merged with electro-anatomical mapping (EAM) vs. CT-EAM. A prospective, randomized, parallel, two-center study was conducted in 36 patients (25 men, age 65 ± 10 years) undergoing AF ablation (33 % paroxysmal, 67 % persistent) guided by 3DATG (group 1) vs. CT (group 2) image fusion with EAM. 3DATG was performed on the Philips Allura Xper FD 10 system. Procedural characteristics, including time, radiation exposure, outcome, and navigation accuracy, were compared between the two groups. There was no significant difference between the groups in total procedure duration or in the time spent on the various procedural steps. Minor differences in procedural characteristics were present between the two centers. Segmentation and fusion times for 3DATG-EAM and CT-EAM were short and similar at both centers. Navigation accuracy was high with either method and did not depend on left atrial size. Maintenance of sinus rhythm did not differ between the two groups over up to 24 months of follow-up. This study did not find the 3DATG-EAM image merge superior to CT-EAM fusion for guiding AF ablation; both merging techniques result in similar navigation accuracy.

  17. Accuracy Analysis of a Dam Model from Drone Surveys

    PubMed Central

    Buffi, Giulia; Venturi, Sara

    2017-01-01

    This paper investigates the accuracy of models obtained by drone surveys. To this end, it analyzes how the placement of the ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of the upstream face of a double-arch masonry dam are acquired by drone survey and used to build a 3D model of the dam for vulnerability analysis purposes. However, the real impact of a correct choice of GCP locations on the georeferencing of the images, and thus of the model, remained to be understood. To this end, a large number of GCP configurations were investigated, building a series of dense point clouds. The accuracy of the resulting dense clouds was estimated by comparing the coordinates of check points extracted from the model with their reference coordinates measured by traditional topography. The paper aims at providing guidance on the optimal placement of GCPs, not only for dams but for all surveys of high-rise structures. A priori knowledge of the effect of the number and location of GCPs on model accuracy can increase survey reliability and accuracy and speed up survey set-up operations. PMID:28771185

  18. Accuracy Analysis of a Dam Model from Drone Surveys.

    PubMed

    Ridolfi, Elena; Buffi, Giulia; Venturi, Sara; Manciola, Piergiorgio

    2017-08-03

    This paper investigates the accuracy of models obtained by drone surveys. To this end, it analyzes how the placement of the ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of the upstream face of a double-arch masonry dam are acquired by drone survey and used to build a 3D model of the dam for vulnerability analysis purposes. However, the real impact of a correct choice of GCP locations on the georeferencing of the images, and thus of the model, remained to be understood. To this end, a large number of GCP configurations were investigated, building a series of dense point clouds. The accuracy of the resulting dense clouds was estimated by comparing the coordinates of check points extracted from the model with their reference coordinates measured by traditional topography. The paper aims at providing guidance on the optimal placement of GCPs, not only for dams but for all surveys of high-rise structures. A priori knowledge of the effect of the number and location of GCPs on model accuracy can increase survey reliability and accuracy and speed up survey set-up operations.
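
The check-point assessment described above reduces to a root-mean-square error between coordinates read off the georeferenced model and the same points measured by traditional topography. A minimal sketch of that computation (function name and sample coordinates are illustrative, not from the paper):

```python
import numpy as np

def checkpoint_rmse(model_xyz, survey_xyz):
    """3D RMSE between check-point coordinates extracted from a
    georeferenced dense cloud and their reference coordinates from a
    traditional topographic survey (both in metres)."""
    diff = np.asarray(model_xyz, dtype=float) - np.asarray(survey_xyz, dtype=float)
    # per-point 3D error, then root of the mean squared error
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))
```

Computed once per GCP configuration, this single number lets the dense clouds built from different GCP layouts be ranked directly.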

  19. Indoor positioning algorithm combined with angular vibration compensation and the trust region technique based on received signal strength-visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong

    2017-05-01

    Indoor positioning using visible light communication has become a topic of intensive research in recent years. Because the normal of the receiver in practice always deviates from that of the transmitter, positioning systems that require the receiver normal to be aligned with the transmitter normal suffer large positioning errors. Some algorithms take these angular vibrations into account; nevertheless, they cannot meet the requirements of high accuracy and low complexity at the same time. Here, a visible light positioning algorithm with angular vibration compensation is proposed. Angle information from an accelerometer or other angle-acquisition device is used to calculate the angle of incidence even when the receiver is not horizontal, while a high-accuracy received signal strength technique is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. Simulation results show that the proposed system achieves a low positioning error with low complexity, and that the eight-LED system exhibits improved performance. Furthermore, trust-region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and vertical components.
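
Received-signal-strength VLC positioning of this kind typically rests on the Lambertian line-of-sight channel model, in which the received power depends on the emission angle at the LED and the incidence angle at the (possibly tilted) photodiode. A minimal sketch under that standard model (all parameter values are illustrative; the paper's method additionally feeds accelerometer angles and a trust-region solver into this inversion):

```python
import numpy as np

def lambertian_rss(p_led, p_rx, n_rx, p_tx=1.0, m=1, area=1e-4):
    """Received power for a line-of-sight VLC link.

    p_led, p_rx : 3-D positions of LED and photodiode (m)
    n_rx        : unit normal of the (possibly tilted) receiver
    m           : Lambertian order of the LED (assumed to face straight down)
    area        : photodiode active area (m^2)
    """
    v = p_rx - p_led                   # LED -> receiver vector
    d = np.linalg.norm(v)
    cos_phi = -v[2] / d                # emission angle at the LED
    cos_psi = np.dot(-v, n_rx) / d     # incidence angle at the tilted receiver
    return p_tx * (m + 1) / (2 * np.pi * d**2) * cos_phi**m * area * cos_psi
```

With the receiver tilt known from an accelerometer, cos_psi can be divided out before inverting the model for distance, which is the essence of the angular vibration compensation described above.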

  20. Three-Dimensional Virtual Sonographic Cystoscopy for Detection of Ureterocele in Duplicated Collecting Systems in Children.

    PubMed

    Nabavizadeh, Behnam; Mozafarpour, Sarah; Hosseini Sharifi, Seyed Hossein; Nabavizadeh, Reza; Abbasioun, Reza; Kajbafzadeh, Abdol-Mohammad

    2018-03-01

    Ureterocele is a sac-like dilatation of the terminal ureter. Precise anatomic delineation is of utmost importance for surgical planning, particularly in the ectopic subtype. However, the level of ureterocele extension is not always elucidated by existing imaging modalities, or even by conventional cystoscopy, which is considered the gold standard for evaluation of ureterocele. This study evaluates the accuracy of three-dimensional virtual sonographic cystoscopy (VSC) in the characterization of ureterocele in duplex collecting systems. Sixteen children with a mean age of 5.1 years (standard deviation 1.96) with transabdominal ultrasonography-proven duplex system and ureterocele were included. They underwent VSC by a single pediatric radiologist. All subsequently had conventional cystoscopy, and the results were compared in terms of ureterocele features including anatomy, number, size, location, and extension. Three-dimensional VSC was well tolerated in all cases without any complication. Image quality was suboptimal in 2 of 16 patients. In the remaining 14 cases, VSC characterized the ureterocele features with high accuracy (93%); only the extension of one ureterocele was not precisely detected. The results of this study suggest three-dimensional sonography as a promising noninvasive diagnostic modality in the evaluation of ectopic ureterocele in children. © 2017 by the American Institute of Ultrasound in Medicine.

  1. Evaluation of two-dimensional accelerometers to monitor behavior of beef calves after castration.

    PubMed

    White, Brad J; Coetzee, Johann F; Renter, David G; Babcock, Abram H; Thomson, Daniel U; Andresen, Daniel

    2008-08-01

    To determine the accuracy of accelerometers for measuring behavior changes in calves and to determine differences in beef calf behavior from before to after castration. 3 healthy Holstein calves and 12 healthy beef calves. Two-dimensional accelerometers were placed on 3 calves, and data were logged simultaneously with video recording of animal behavior. The resulting data were used to generate and validate predictive models classifying posture (standing or lying) and type of activity (standing in place, walking, eating, getting up, lying awake, or lying sleeping). The algorithms developed were then used in a prospective trial comparing calf behavior in the first 24 hours after castration (n = 6) with the behavior of noncastrated control calves (6) and with presurgical readings from the same castrated calves. On the basis of the 2-dimensional accelerometer signal, posture was classified with a high degree of accuracy (98.3%) and the specific activity was estimated with a reasonably low misclassification rate (23.5%). Castrated calves spent a significantly larger proportion of time standing (82.2%) than in their presurgical readings (46.2%). Two-dimensional accelerometers thus provided accurate classification of posture and reasonable classification of activity, and the castration trial illustrated the usefulness of accelerometers for measuring behavioral changes in individual calves.
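
The core of such posture classification is that the gravity vector seen by a leg-mounted two-axis accelerometer tilts between standing and lying. A hedged sketch of that idea (a simple threshold rule on the tilt angle; the study itself fit predictive models against video-labelled data, and the threshold here is illustrative):

```python
import numpy as np

def classify_posture(ax, ay, thresh_deg=45.0):
    """Label each accelerometer sample 'standing' or 'lying' from the
    tilt of the gravity vector relative to the sensor's vertical axis.
    ax, ay are arrays of accelerations (in g) on the two axes."""
    tilt = np.degrees(np.arctan2(np.abs(ay), np.abs(ax)))
    return np.where(tilt < thresh_deg, "standing", "lying")

def fraction_standing(labels):
    """Proportion of samples classified as standing, the quantity
    compared before and after castration in the study."""
    return np.mean(labels == "standing")
```

Summing such labels over 24 hours yields the standing-time percentages (82.2% vs. 46.2%) reported above.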

  2. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    PubMed

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities, so studying the application of the reproducing kernel is advantageous. The objective is to apply this theory to the numerical solution of the ventricular muscle model and to improve its precision compared with existing methods. A two-dimensional reproducing kernel function is constructed and applied to compute the solution of a two-dimensional cardiac tissue model, using a difference method in time and the reproducing kernel method in space. Compared with other methods, this method holds several advantages: high accuracy of the computed solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution nodes on different time layers. The reproducing kernel method achieves higher accuracy and stability in the solution of the two-dimensional cardiac tissue model.

  3. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485

  4. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10(exp 6) and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. 
For several of the ice-accretion cases tested, the aerodynamics is known to depend upon the small, three-dimensional features of the ice. These data show that the laser-scan and rapid-prototype manufacturing approach is capable of replicating these ice features within the reported accuracies of the laser-scan measurement and rapid-prototyping method; thus providing a new capability for high-fidelity ice-accretion documentation and artificial ice-shape fabrication for icing research.

  5. Evaluation and comparison of dimensional accuracy of newly introduced elastomeric impression material using 3D laser scanners: an in vitro study.

    PubMed

    Pandita, Amrita; Jain, Teerthesh; Yadav, Naveen S; Feroz, S M A; Pradeep; Diwedi, Akankasha

    2013-03-01

    The aim of the present study was to comparatively evaluate the dimensional accuracy of a newly introduced elastomeric impression material after repeated pours at different time intervals. A total of 20 (10 + 10) impressions of a master model were made from vinyl polyether silicone and vinyl polysiloxane impression materials. Each impression was repeatedly poured at 1 hour, 24 hours and 14 days, yielding a total of 60 casts. The casts were scanned with a three-dimensional (3D) laser scanner and measured. Vinyl polyether silicone produced overall undersized dies, with the greatest change being only 0.14% after 14 days. Vinyl polysiloxane produced smaller dies after 1 and 24 hours and larger dies after 14 days, differing from the master model by only 0.07% for the smallest die and 0.02% for the largest die. All deviations from the master model with both impression materials were within a clinically acceptable range. In fixed prosthodontic treatment the accuracy of the prosthesis is critical, as it determines the success, failure and prognosis of treatment including abutments; this depends mainly on the fit of the prosthesis, which in turn depends on the dimensional accuracy of dies poured from elastomeric impressions.

  6. Test of the FDTD accuracy in the analysis of the scattering resonances associated with high-Q whispering-gallery modes of a circular cylinder.

    PubMed

    Boriskin, Artem V; Boriskina, Svetlana V; Rolland, Anthony; Sauleau, Ronan; Nosich, Alexander I

    2008-05-01

    Our objective is the assessment of the accuracy of a conventional finite-difference time-domain (FDTD) code in the computation of the near- and far-field scattering characteristics of a circular dielectric cylinder. We excite the cylinder with an electric or magnetic line current and demonstrate the failure of the two-dimensional FDTD algorithm to accurately characterize the emission rate and the field patterns near high-Q whispering-gallery-mode resonances. This is proven by comparison with the exact series solutions. The computational errors in the emission rate are then studied at the resonances still detectable with FDTD, i.e., having Q-factors up to 10(3).

  7. A high-accuracy algorithm for solving nonlinear PDEs with high-order spatial derivatives in 1 + 1 dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jian Hua; Gooding, R.J.

    1994-06-01

    We propose an algorithm to solve a system of partial differential equations of the type u_t(x,t) = F(x, t, u, u_x, u_xx, u_xxx, u_xxxx) in 1 + 1 dimensions using the method of lines with piecewise ninth-order Hermite polynomials, where u and F are N-dimensional vectors. Nonlinear boundary conditions are easily incorporated with this method. We demonstrate the accuracy of the method through comparisons of numerically determined solutions with analytical ones. We then apply the algorithm to a complicated physical system involving nonlinear and nonlocal strain forces coupled to a thermal field. 4 refs., 5 figs., 1 tab.
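
The method of lines named above discretises space and leaves a system of ODEs in the nodal values to be advanced in time. The simplest member of the stated family, the heat equation u_t = u_xx, illustrates the structure; this sketch substitutes second-order central differences and explicit Euler for the paper's ninth-order Hermite discretisation, so it shows only the skeleton, not the claimed accuracy:

```python
import numpy as np

def heat_mol(nx=41, length=np.pi, t_end=0.1, dt=1e-4):
    """Method-of-lines solution of u_t = u_xx on [0, length] with
    u = 0 at both ends and u(x, 0) = sin(x), whose exact solution is
    e^{-t} sin(x).  Space: central differences; time: explicit Euler
    (dt must satisfy dt < h^2 / 2 for stability)."""
    x = np.linspace(0.0, length, nx)
    h = x[1] - x[0]
    u = np.sin(x)
    for _ in range(int(round(t_end / dt))):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        u = u + dt * uxx               # boundary values stay zero
    return x, u
```

Replacing the central-difference line with a higher-order spatial operator (and Euler with a stiff integrator) is exactly where schemes such as the piecewise ninth-order Hermite variant differ.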

  8. Multi-Dimensional High Order Essentially Non-Oscillatory Finite Difference Methods in Generalized Coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1998-01-01

    This project concerns the development of high order, non-oscillatory schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves well for two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters that recover spectral accuracy up to the discontinuity, and constructed such filters for practical calculations.

  9. Picometre and nanoradian heterodyne interferometry and its application in dilatometry and surface metrology

    NASA Astrophysics Data System (ADS)

    Schuldt, T.; Gohlke, M.; Kögel, H.; Spannagel, R.; Peters, A.; Johann, U.; Weise, D.; Braxmaier, C.

    2012-05-01

    A high-sensitivity heterodyne interferometer implementing differential wavefront sensing for tilt measurement was developed over the last few years. With this setup, using an aluminium breadboard and compact optical mounts with a beam height of 2 cm, noise levels less than 5 pm Hz-1/2 in translation and less than 10 nrad Hz-1/2 in tilt measurement, both for frequencies above 10-2 Hz, have been demonstrated. Here, a new, compact and ruggedized interferometer setup utilizing a baseplate made of Zerodur, a thermally and mechanically highly stable glass ceramic with a coefficient of thermal expansion (CTE) of 2 × 10-8 K-1, is presented. The optical components are fixed to the baseplate using a specifically developed, easy-to-handle, assembly-integration technology based on a space-qualified two-component epoxy. While developed as a prototype for future applications aboard satellite space missions (such as Laser Interferometer Space Antenna), the interferometer is used in laboratory experiments for dilatometry and surface metrology. A first dilatometer setup with a demonstrated accuracy of 10-7 K-1 in CTE measurement was realized. Since the accuracy proved to be limited by the dimensional stability of the sample-tube support, a new setup was developed utilizing Zerodur as the structural material for the support. In another activity, the interferometer is used for characterization of high-quality mirror surfaces at the picometre level and for high-accuracy two-dimensional surface characterization in a prototype for industrial applications. In this paper, the corresponding designs, their realizations and first measurements of both applications in dilatometry and surface metrology are presented.

  10. Pulmonary tumor measurements from x-ray computed tomography in one, two, and three dimensions.

    PubMed

    Villemaire, Lauren; Owrangi, Amir M; Etemad-Rezai, Roya; Wilson, Laura; O'Riordan, Elaine; Keller, Harry; Driscoll, Brandon; Bauman, Glenn; Fenster, Aaron; Parraga, Grace

    2011-11-01

    We evaluated the accuracy and reproducibility of three-dimensional (3D) measurements of lung phantoms and patient tumors from x-ray computed tomography (CT) and compared these to one-dimensional (1D) and two-dimensional (2D) measurements. CT images of three spherical and three irregularly shaped tumor phantoms were evaluated by three observers who performed five repeated measurements. Additionally, three observers manually segmented 29 patient lung tumors five times each. Follow-up imaging was performed for 23 tumors and response criteria were compared. For a single subject, imaging was performed on nine occasions over 2 years to evaluate multidimensional tumor response. To evaluate measurement accuracy, we compared imaging measurements to ground truth using analysis of variance. For estimates of precision, intraobserver and interobserver coefficients of variation and intraclass correlations (ICC) were used. Linear regression and Pearson correlations were used to evaluate agreement, and tumor response was descriptively compared. For spherically shaped phantoms, all measurements were highly accurate, but for irregularly shaped phantoms, only 3D measurements were in high agreement with ground truth. All phantom and patient measurements showed high intra- and interobserver reproducibility (ICC >0.900). Over a 2-year period for a single patient, there was disagreement between tumor response classifications based on 3D measurements and those generated using 1D and 2D measurements. Tumor volume measurements were highly reproducible and accurate for irregular and spherical phantoms and for patient tumors with nonuniform dimensions. Response classifications obtained from multidimensional measurements suggest that 3D measurements provide higher sensitivity to tumor response. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
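
The disagreement between response classifications has a simple geometric origin: a single longest diameter (1D) can grow while the volume (3D) shrinks. An illustrative calculation on ellipsoidal "tumors" (all dimensions hypothetical, not from the study):

```python
import math

def measures(a, b, c):
    """1-D (longest diameter), 2-D (product of the two longest
    diameters) and 3-D (ellipsoid volume) measures of a lesion with
    semi-axes a, b, c in cm."""
    d = sorted((2 * a, 2 * b, 2 * c), reverse=True)
    volume = 4.0 / 3.0 * math.pi * a * b * c
    return d[0], d[0] * d[1], volume

# A lesion whose longest axis grows while the other two shrink:
base = measures(2.0, 2.0, 2.0)    # sphere, diameter 4 cm
follow = measures(2.2, 1.2, 1.2)  # diameter 4.4 cm, much smaller volume
```

Here the 1-D measure increases (suggesting progression) while the volume drops by more than half (suggesting response), mirroring the discordant classifications reported above.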

  11. Research on parallel load sharing principle of piezoelectric six-dimensional heavy force/torque sensor

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Ying-jun; Jia, Zhen-yuan; Zhang, Jun; Qian, Min

    2011-01-01

    In the working process of heavy-load manipulators, such as free forging machines, hydraulic die-forging presses, forging manipulators, heavy grasping manipulators and large-displacement manipulators, measurement of six-dimensional heavy force/torque and real-time force feedback at the operation interface are the basis for coordinated operation control and force compliance control, and an effective way to raise control accuracy and achieve highly efficient manufacturing. To solve the dynamic measurement problem of six-dimensional, time-varying heavy loads in extreme manufacturing processes, a novel parallel load-sharing principle for six-dimensional heavy force/torque is put forward. The measuring principle of the six-dimensional force sensor is analyzed, and its spatial model is built and decoupled. The load-sharing ratios are calculated in the vertical and horizontal directions, and the mapping between the six-dimensional heavy force/torque to be measured and the output force values is established. A finite element model of the parallel piezoelectric six-dimensional heavy force/torque sensor is set up, and its static characteristics are analyzed with the ANSYS software. The main parameters affecting the load-sharing ratio are analyzed, and load-sharing experiments with different parallel-axis diameters are designed. The results show that the sensor has good linearity, with nonlinearity errors below 1%. The parallel axis shares the load effectively: the larger its diameter, the better the load-sharing effect. The experimental results agree with the FEM analysis. The sensor offers a large measuring range, good linearity, high natural frequency and high rigidity, and can be widely used in extreme environments for real-time, accurate measurement of six-dimensional, time-varying heavy loads on manipulators.

  12. Accuracy of a hexapod parallel robot kinematics based external fixator.

    PubMed

    Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer

    2015-12-01

    Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures, but their accuracy has not been measured sufficiently for all models. An infrared tracking system was used to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting the three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator to simulate bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median positioning accuracy for translations and rotations (torsions/angulations) was below 0.3 mm and 0.2°, with quartiles ranging from -0.5 mm to 0.5 mm and from -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable, and it can be applied to compare different hexapod-based fixators. The accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  14. A hybrid intelligent method for three-dimensional short-term prediction of dissolved oxygen content in aquaculture.

    PubMed

    Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang

    2018-01-01

    A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, the K-means and subtractive clustering methods were employed to determine the hyperparameters required by the RBF neural network. Comparison of the predicted results against traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and for future studies.
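
A hedged sketch of the hybrid's core: hidden-unit centers chosen by k-means and output weights fit by linear least squares. Subtractive clustering, which the paper uses alongside k-means to set the hyperparameters, is omitted here, and all names and hyperparameter values are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns k cluster centers of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

class RBFNet:
    """RBF network whose Gaussian centers come from k-means and whose
    output weights are the linear least-squares solution."""
    def __init__(self, k=8, width=1.0):
        self.k, self.width = k, width

    def _phi(self, X):
        dist = np.linalg.norm(X[:, None] - self.centers[None], axis=2)
        return np.exp(-(dist / self.width) ** 2)

    def fit(self, X, y):
        self.centers = kmeans(X, self.k)
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w
```

In the study's setting, X would hold spatio-temporal coordinates in the pond and y the measured dissolved oxygen content; the clustering step keeps the hidden layer small while covering the data distribution.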

  15. 3D surface pressure measurement with single light-field camera and pressure-sensitive paint

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth

    2018-05-01

    A novel technique that simultaneously measures three-dimensional model geometry and surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) uses a hardware setup similar to that of the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.

  16. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at

    Many approaches developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is fragmentation methods, in which quantum chemical calculations are carried out for overlapping small fragments of a given molecule and then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules were chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes containing up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.

  17. Implicit preconditioned WENO scheme for steady viscous flow computation

    NASA Astrophysics Data System (ADS)

    Huang, Juan-Chen; Lin, Herng; Yang, Jaw-Yen

    2009-02-01

    A class of lower-upper symmetric Gauss-Seidel implicit weighted essentially nonoscillatory (WENO) schemes is developed for solving the preconditioned Navier-Stokes equations of primitive variables with the Spalart-Allmaras one-equation turbulence model. The numerical flux of the present preconditioned WENO schemes consists of a first-order part and a high-order part. For the first-order part, we adopt the preconditioned Roe scheme, and for the high-order part, we employ preconditioned WENO methods. For comparison purposes, a preconditioned TVD scheme is also given and tested. A time-derivative preconditioning algorithm is devised, with a discriminant for adjusting the preconditioning parameters at low Mach numbers and turning off the preconditioning at intermediate or high Mach numbers. The computations are performed for two-dimensional lid-driven cavity flow, low subsonic viscous flow over the S809 airfoil, three-dimensional low-speed viscous flow over a 6:1 prolate spheroid, transonic flow over the ONERA-M6 wing and hypersonic flow over the HB-2 model. The solutions of the present algorithms are in good agreement with the experimental data. The application of the preconditioned WENO schemes to viscous flows at all speeds not only enhances the accuracy and robustness of resolving shocks and discontinuities in supersonic flows, but also improves the accuracy for low Mach number flows with complicated smooth solution structures.

  18. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from the decision boundaries. A notable characteristic of the proposed method is that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances, as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used for both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of second-order statistics in analyzing high dimensional data is recognized. By investigating the characteristics of high dimensional data, a reason why second-order statistics must be taken into account in high dimensional data is suggested. Given the importance of second-order statistics, there is a need to represent them effectively. A method to visualize statistics using a color code is proposed. By representing statistics using color coding, one can easily extract and compare the first- and second-order statistics.
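    The decision-boundary idea admits a compact numerical sketch. The toy example below is an illustrative reconstruction (not the authors' algorithm as published): it samples points where the log-likelihood ratio of two Gaussian classes vanishes, averages the outer products of the boundary normals, and reads the number of informative features off the eigenvalues of that matrix. Because the two classes here differ only along axis 0, a single dominant eigenvector should emerge.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 5
mu1 = np.zeros(dim)
mu2 = np.zeros(dim); mu2[0] = 3.0                  # classes differ along axis 0 only
X1 = rng.normal(mu1, 1.0, size=(200, dim))
X2 = rng.normal(mu2, 1.0, size=(200, dim))

def h(x):
    """Log-likelihood ratio for two unit-covariance Gaussians."""
    return 0.5 * (np.sum((x - mu2)**2) - np.sum((x - mu1)**2))

def boundary_point(a, b, tol=1e-8):
    """Bisection along the segment a->b for the h = 0 crossing."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(a + lo*(b - a)) * h(a + mid*(b - a)) <= 0:
            hi = mid
        else:
            lo = mid
    return a + 0.5*(lo + hi)*(b - a)

M = np.zeros((dim, dim))          # decision-boundary feature matrix
count, eps = 0, 1e-5
for a, b in zip(X1, X2):
    if h(a) * h(b) > 0:           # no sign change on this segment: skip the pair
        continue
    p = boundary_point(a, b)
    g = np.array([(h(p + eps*e) - h(p - eps*e)) / (2*eps) for e in np.eye(dim)])
    n = g / np.linalg.norm(g)     # unit normal to the decision boundary at p
    M += np.outer(n, n)
    count += 1
M /= count

evals, evecs = np.linalg.eigh(M)  # ascending; rank of M ~ number of features needed
```

    For this equal-covariance case the boundary is a hyperplane, so M is rank one and the leading eigenvector recovers the single discriminating direction.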

  19. Product Development and its Comparative Analysis by SLA, SLS and FDM Rapid Prototyping Processes

    NASA Astrophysics Data System (ADS)

    Choudhari, C. M.; Patil, V. D.

    2016-09-01

    The need to capture markets and meet deadlines has increased the scope for new methods in product design and development. Industries continuously strive to optimize development cycles with high-quality and cost-efficient products to maintain market competitiveness. Thus, Rapid Prototyping Techniques (RPT) have started to play a pivotal role in the rapid product development cycle for complex products. Dimensional accuracy and surface finish are the cornerstones of Rapid Prototyping (RP), especially if the parts are used for mould development. The paper deals with the development of a part using the Selective Laser Sintering (SLS), Stereolithography (SLA) and Fused Deposition Modelling (FDM) processes to benchmark and investigate parameters such as material shrinkage rate, dimensional accuracy, time, cost and surface finish. This helps to establish which processes are effective and efficient for mould development. In this research work, emphasis was also given to the design stage of product development to obtain an optimum design solution for an existing product.

  20. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    PubMed

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm applying dynamic analysis to SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, proving beneficial to road administrations in assessing pavement condition.

  1. Improved dense trajectories for action recognition based on random projection and Fisher vectors

    NASA Astrophysics Data System (ADS)

    Ai, Shihui; Lu, Tongwei; Xiong, Yudian

    2018-03-01

    As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced. Improved dense trajectories are combined with Fisher Vectors and Random Projection. The method reduces the dimensionality of the trajectory descriptors by projecting them from the high-dimensional space into a low-dimensional subspace with Random Projection, after defining and analyzing a Gaussian mixture model (GMM). A GMM-FV hybrid model is then introduced to encode the trajectory feature vectors and reduce their dimension. The computational complexity is reduced by Random Projection, which shortens the Fisher coding vector. Finally, a linear SVM is used as the classifier to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with several existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
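    The random-projection step can be illustrated on its own. The sketch below uses toy dimensions (it is not the paper's GMM-FV pipeline): high-dimensional descriptors are projected through a Gaussian random matrix, and the pairwise distances that downstream classifiers depend on are checked to be approximately preserved, as the Johnson-Lindenstrauss lemma predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 30, 5000, 500                 # 30 descriptors, 5000-D -> 500-D
X = rng.normal(size=(n, d))

# Gaussian random projection with entries N(0, 1/k) preserves lengths in expectation
R = rng.normal(scale=1.0/np.sqrt(k), size=(d, k))
Y = X @ R

def pdist(Z):
    """Dense matrix of pairwise Euclidean distances."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff**2).sum(axis=-1))

mask = ~np.eye(n, dtype=bool)
ratios = pdist(Y)[mask] / pdist(X)[mask]   # per-pair distortion, ideally close to 1
```

    The spread of `ratios` around 1 shrinks like 1/sqrt(k), which is why a moderate target dimension already suffices.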

  2. Delineating Beach and Dune Morphology from Massive Terrestrial Laser Scanning Data Using the Generic Mapping Tools

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Wang, G.; Yan, B.; Kearns, T.

    2016-12-01

    Terrestrial laser scanning (TLS) techniques have proven to be efficient tools for collecting three-dimensional high-density and high-accuracy point clouds for coastal research and resource management. However, processing and presenting massive TLS data remains a challenge when targeting a large area at high resolution. This article introduces a workflow using shell-scripting techniques to chain together tools from the Generic Mapping Tools (GMT), the Geographic Resources Analysis Support System (GRASS), and other command-based open-source utilities for automating TLS data processing. TLS point clouds acquired in the beach and dune area near Freeport, Texas in May 2015 were used for the case study. Shell scripts for rotating the coordinate system, removing anomalous points, assessing data quality, generating high-accuracy bare-earth DEMs, and quantifying beach and sand dune features (shoreline, cross-dune section, dune ridge, toe, and volume) are presented in this article. According to this investigation, the accuracy of the laser measurements (distance from the scanner to the targets) is within a couple of centimeters. However, the positional accuracy of TLS points with respect to a global coordinate system is about 5 cm, which is dominated by the accuracy of the GPS solutions for the positions of the scanner and reflector. The accuracy of the TLS-derived bare-earth DEM is primarily determined by the size of the grid cells and the roughness of the terrain surface for the case study. A DEM with grid cells of 4 m x 1 m (shoreline by cross-shore) provides a suitable spatial resolution and accuracy for deriving major beach and dune features.

  3. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis

    NASA Astrophysics Data System (ADS)

    Ponsioen, Sten; Pedergnana, Tiemo; Haller, George

    2018-04-01

    We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.

  4. Three-dimensional visual guidance improves the accuracy of calculating right ventricular volume with two-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Dorosz, Jennifer L.; Bolson, Edward L.; Waiss, Mary S.; Sheehan, Florence H.

    2003-01-01

    Three-dimensional guidance programs have been shown to increase the reproducibility of 2-dimensional (2D) left ventricular volume calculations, but these systems have not been tested in 2D measurements of the right ventricle. Using magnetic fields to identify the probe location, we developed a new 3-dimensional guidance system that displays the line of intersection, the plane of intersection, and the numeric angle of intersection between the current image plane and previously saved scout views. When used by both an experienced and an inexperienced sonographer, this guidance system increases the accuracy of the 2D right ventricular volume measurements using a monoplane pyramidal model. Furthermore, a reconstruction of the right ventricle, with a computed volume similar to the calculated 2D volume, can be displayed quickly by tracing a few anatomic structures on 2D scans.

  5. An adaptive front tracking technique for three-dimensional transient flows

    NASA Astrophysics Data System (ADS)

    Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.

    2000-01-01

    An adaptive technique, based on both surface stretching and surface curvature analysis for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although for both techniques roughly the same structures are found, the resolution for the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation for the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm significantly benefits from parallelization of the code.

  6. SEMICONDUCTOR INTEGRATED CIRCUITS: A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    NASA Astrophysics Data System (ADS)

    Jizhi, Liu; Xingbi, Chen

    2009-12-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, requiring no high-end computing hardware, and being easy to operate.

  7. Impact of reduced near-field entrainment of overpressured volcanic jets on plume development

    USGS Publications Warehouse

    Saffaraval, Farhad; Solovitz, Stephen A.; Ogden, Darcy E.; Mastin, Larry G.

    2012-01-01

    Volcanic plumes are often studied using one-dimensional analytical models, which use an empirical entrainment ratio to close the equations. Although this ratio is typically treated as constant, its value near the vent is significantly reduced due to flow development and overpressured conditions. To improve the accuracy of these models, a series of experiments was performed using particle image velocimetry, a high-accuracy, full-field velocity measurement technique. Experiments considered a high-speed jet with Reynolds numbers up to 467,000 and exit pressures up to 2.93 times atmospheric. Exit gas densities were also varied from 0.18 to 1.4 times that of air. The measured velocity was integrated to determine entrainment directly. For jets with exit pressures near atmospheric, entrainment was approximately 30% less than the fully developed level at 20 diameters from the exit. At pressures nearly three times that of the atmosphere, entrainment was 60% less. These results were introduced into Plumeria, a one-dimensional plume model, to examine the impact of reduced entrainment. The maximum column height was only slightly modified, but the critical radius for collapse was significantly reduced, decreasing by nearly a factor of two at moderate eruptive pressures.

  8. Relative dosimetrical verification in high dose rate brachytherapy using two-dimensional detector array IMatriXX

    PubMed Central

    Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.

    2011-01-01

    For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as prescribed. This study demonstrates dosimetric quality assurance of HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions were verified for positional accuracy, giving a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and a maximum error of 1.8 mm. Using a step size of 5 mm, the reference isodose length (the length of the 100% isodose line) was verified for single and multiple catheters of the same and different source loadings. An error ≤1 mm was measured in 57% of the tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed, and 70% of the step size errors were below 1 mm, with a maximum of 1.2 mm. Step sizes ≤1 cm could not be verified by the IMatriXX, as it could not resolve the peaks in the dose profile. PMID:21897562

  9. Design and experimental validation of novel 3D optical scanner with zoom lens unit

    NASA Astrophysics Data System (ADS)

    Huang, Jyun-Cheng; Liu, Chien-Sheng; Chiang, Pei-Ju; Hsu, Wei-Yan; Liu, Jian-Liang; Huang, Bai-Hao; Lin, Shao-Ru

    2017-10-01

    Optical scanners play a key role in many three-dimensional (3D) printing and CAD/CAM applications. However, existing optical scanners are generally designed to provide either a wide scanning area or a high 3D reconstruction accuracy from a lens with a fixed focal length. In the former case, the scanning area is increased at the expense of the reconstruction accuracy, while in the latter case, the reconstruction performance is improved at the expense of a more limited scanning range. In other words, existing optical scanners compromise between the scanning area and the reconstruction accuracy. Accordingly, the present study proposes a new scanning system including a zoom-lens unit, which combines both a wide scanning area and a high 3D reconstruction accuracy. In the proposed approach, the object is scanned initially under a suitable low-magnification setting for the object size (setting 1), resulting in a wide scanning area but a poor reconstruction resolution in complicated regions of the object. The complicated regions of the object are then rescanned under a high-magnification setting (setting 2) in order to improve the accuracy of the original reconstruction results. Finally, the models reconstructed after each scanning pass are combined to obtain the final reconstructed 3D shape of the object. The feasibility of the proposed method is demonstrated experimentally using a laboratory-built prototype. It is shown that the scanner has a high reconstruction accuracy over a large scanning area. In other words, the proposed optical scanner has significant potential for 3D engineering applications.

  10. Behavior analysis of video object in complicated background

    NASA Astrophysics Data System (ADS)

    Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang

    2016-10-01

    This paper aims to achieve robust behavior recognition of video objects in complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military and many other fields.

  11. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain

    PubMed Central

    Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.

    2015-01-01

    Background Given their ability to process high-dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy, with an area under the receiver operating characteristic curve (AUC) of 0.704. Next, the gradient-boosted decision tree had an AUC of 0.665 and the k-nearest neighbor algorithm had an AUC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an AUC of 0.727. Logistic regression had a lower AUC of 0.5 for predicting pain outcomes on POD 1 and 3. Conclusions Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
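    The two ingredients of the reported comparison, L1-penalised (LASSO-style) selection and AUC scoring, can be reproduced in spirit on synthetic data. The sketch below is a toy stand-in for the study's 796-variable models, not its actual code: it fits an L1-penalised logistic model by proximal gradient descent (ISTA, the mechanism behind LASSO's variable selection) and scores it with the area under the ROC curve computed as the Mann-Whitney pairwise statistic.

```python
import numpy as np

def auc(y, s):
    """Area under the ROC curve via the Mann-Whitney pairwise statistic."""
    pos, neg = s[y == 1][:, None], s[y == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-penalised logistic regression fitted by proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)                   # gradient step on log-loss
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0)  # soft-thresholding
    return w

rng = np.random.default_rng(0)
n, d = 300, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d); w_true[:3] = [2.0, -2.0, 1.5]  # only 3 informative features
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

w = lasso_logistic(X, y)
auc_val = auc(y, X @ w)      # training-set AUC; the soft-threshold zeroes noise features
```

    The soft-threshold step is what drives most of the 17 uninformative coefficients exactly to zero, mirroring LASSO's built-in feature selection.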

  12. Scaling between Wind Tunnels-Results Accuracy in Two-Dimensional Testing

    NASA Astrophysics Data System (ADS)

    Rasuo, Bosko

    The establishment of exact two-dimensional flow conditions in wind tunnels is a very difficult problem. This has been evident for wind tunnels of all types and scales. In this paper, the principal factors that influence the accuracy of two-dimensional wind tunnel test results are analyzed. The influences of the Reynolds number, Mach number and wall interference with reference to solid and flow blockage (blockage of wake) as well as the influence of side-wall boundary layer control are analyzed. Interesting results are brought to light regarding the Reynolds number effects of the test model versus the Reynolds number effects of the facility in subsonic and transonic flow.

  13. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

    The Nonhydrostatic Icosahedral Model (NIM) implements the latest numerical innovations in a three-dimensional finite-volume formulation on a quasi-uniform icosahedral grid suitable for ultra-high-resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations as well as to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: * A local coordinate system remapped from the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009), * Grid points in a table-driven horizontal loop that allows any horizontal point sequence (A. E. MacDonald, et al., 2010), * Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (J.-L. Lee, et al., 2010), * Icosahedral grid optimization (Wang and Lee, 2011), * All differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve the pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP are incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as real-data simulations will be shown in the conference.

  14. Research on online 3D laser scanner dimensional measurement system for heavy high-temperature forgings

    NASA Astrophysics Data System (ADS)

    Zhu, Jingguo; Li, Menglin; Jiang, Yan; Xie, Tianpeng; Li, Feng; Jiang, Chenghao; Liu, Ruqing; Meng, Zhe

    2017-10-01

    The online 3-D laser scanner is a non-contact measurement system with high speed, high precision and easy operation, which can be used to measure heavy, high-temperature forgings. However, current online laser measurement systems are mainly mobile light indicators, which can only be used in limited environments and lack the capability of accurate 3-D measurement. This paper introduces the structure of the online high-speed real-time 3-D measurement system for heavy high-temperature forgings developed at the Academy of Opto-Electronics (AOE), Chinese Academy of Sciences. Combining TOF pulse distance measurement with a hybrid scan mode, the system can scan and acquire point cloud data over an area of 20 m × 10 m with a 75° × 40° field of view at a distance of 20 m. The entire scan takes less than 5 seconds with an accuracy of 8 mm, which meets the online dimensional measurement requirements of heavy high-temperature forgings.

  15. Diagnosing Autism Spectrum Disorder from Brain Resting-State Functional Connectivity Patterns Using a Deep Neural Network with a Novel Feature Selection Method

    PubMed Central

    Guo, Xinyu; Dominick, Kelli C.; Minai, Ali A.; Li, Hailong; Erickson, Craig A.; Lu, Long J.

    2017-01-01

    The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data is commonly used to study neuropsychiatric conditions such as autism spectrum disorder (ASD) with different machine learning models. Recent studies indicate that both hyper- and hypo-aberrant ASD-associated FCs are widely distributed throughout the entire brain rather than confined to specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method helps the DNN generate low dimensional high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is generated by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., the two-sample t-test and elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method based on the DNN is also developed and used to identify 32 FCs related to ASD. These FCs come from or cross different pre-defined brain networks, including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum networks. Thirteen of them are statistically significant between the ASD and TD groups (two-sample t-test p < 0.05) while 19 of them are not. The relationship between the statistically significant FCs and the corresponding ASD behavior symptoms is discussed based on the literature and clinicians' expert knowledge. The potential reason for obtaining 19 FCs that are not statistically significant is also provided. PMID:28871217
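    Fisher's score itself, the ingredient behind the biomarker identification step, is simple to state. The toy sketch below is illustrative only (not the study's DNN-based variant): it ranks synthetic "FC features" by the ratio of between-group mean separation to within-group variance and keeps the top candidates.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: (mu1 - mu0)^2 / (var1 + var0)."""
    X0, X1 = X[y == 0], X[y == 1]
    return (X1.mean(axis=0) - X0.mean(axis=0))**2 / (X1.var(axis=0) + X0.var(axis=0))

rng = np.random.default_rng(0)
n, d = 100, 50                       # e.g. 50 candidate FC features per subject
X = rng.normal(size=(2*n, d))
y = np.repeat([0, 1], n)
X[y == 1, :5] += 1.0                 # first five features carry the group difference

scores = fisher_score(X, y)
top = np.argsort(scores)[::-1][:5]   # top-ranked candidate biomarkers
```

    With a clear group shift, the five informative features score well above the noise floor of the remaining 45.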

  16. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous Ant Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using additional features can improve accuracy. However, adding features raises the probability of including dependent features, which reduces accuracy. In addition, several parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. An optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as salt-and-pepper results and high computational time for high dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of image scene and type, reduced post-processing for building edge reconstruction, and accuracy improvement. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. The Kappa coefficient of the proposed method was also 6% higher than that of RF classification. The processing time of the proposed method was relatively low because the unit of image analysis was the image object. These results show the superiority of the proposed method in terms of time and accuracy.

  17. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Although the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling the challenges of spatial discretization, the stability constraints on time step size imposed by temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we combine sparse grids with the implicit integration factor (IIF) method, which is advantageous in terms of stability conditions for systems containing stiff reactions and diffusion. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications to diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
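The stability advantage that motivates IIF can be seen in a scalar toy problem: the stiff linear term lam*u (standing in for diffusion after spatial discretization) is integrated exactly through the exponential integration factor, while the reaction is treated implicitly. This is a minimal first-order sketch under assumed toy values, not the authors' sparse-grid implementation.

```python
import math

def iif1_step(u, dt, lam, reaction, fp_iters=50):
    """One step of a first-order implicit integration factor (IIF1) scheme:
    the stiff linear part lam*u is integrated exactly via exp(lam*dt), and the
    reaction is treated implicitly, solved here by fixed-point iteration."""
    a = math.exp(lam * dt) * u
    v = u
    for _ in range(fp_iters):
        v = a + dt * reaction(v)
    return v

def euler_step(u, dt, lam, reaction):
    """Fully explicit Euler step, for comparison."""
    return u + dt * (lam * u + reaction(u))

lam, dt = -100.0, 0.05                 # stiff: explicit Euler needs dt < 2/|lam|
reaction = lambda u: u * (1.0 - u)     # logistic-type reaction term

u_iif = u_euler = 0.5
for _ in range(6):
    u_iif = iif1_step(u_iif, dt, lam, reaction)
    u_euler = euler_step(u_euler, dt, lam, reaction)
```

With dt = 0.05 the explicit step violates the stability limit and blows up within a few steps, while the IIF iterate stays bounded and decays toward the correct steady state.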

  18. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate quantitative perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is reduced to three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress the noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need for an answer to this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the accuracy of perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis of deconvolution-based CTP imaging systems. Based on this analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide development of CTP imaging technology for better quantification accuracy and lower radiation dose.
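The deconvolution the abstract refers to can be sketched for one tissue curve: the measured concentration is the arterial input function (AIF) convolved with the flow-scaled residue function, which in discrete form is a lower-triangular linear system. In the noise-free toy below the solve is plain forward substitution; with real, noisy data a regularized solve (e.g. Tikhonov or truncated SVD, whose strength the paper's framework relates to quantification accuracy) would replace it. All curves and units are synthetic.

```python
import math

# Toy CT perfusion forward model: c(t_i) = dt * sum_{k<=i} aif(t_{i-k}) * R(t_k)
dt, n = 0.5, 40
t = [dt * i for i in range(n)]

aif = [math.exp(-ti / 3.0) for ti in t]          # illustrative decaying AIF
cbf = 0.6                                        # "true" flow scale (arb. units)
residue = [cbf * math.exp(-ti / 4.0) for ti in t]  # flow-scaled residue function

# forward model: discrete convolution of AIF with the residue function
tissue = [dt * sum(aif[i - k] * residue[k] for k in range(i + 1))
          for i in range(n)]

# noise-free deconvolution = forward substitution on the lower-triangular
# system; with noisy data one would instead solve a regularized least-squares
# problem, whose regularization strength is exactly what the cascaded-systems
# analysis ties to quantification accuracy.
rec = [0.0] * n
for i in range(n):
    rec[i] = (tissue[i] / dt
              - sum(aif[i - k] * rec[k] for k in range(i))) / aif[0]

cbf_estimate = max(rec)    # flow is the peak of the recovered residue function
```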

  19. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive capabilities of computed tomography (CT) are attracting growing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, caused by many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further use of CT for dimensional metrology. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by a simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of the dimensional measurements. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable.

  20. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by learning low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a remarkably low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expressions.
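The dimensionality estimate described above is, at its core, a principal-component-style analysis of facial kinematics. The stand-in below generates synthetic trajectories from two hidden "primitives" plus small noise and extracts the leading eigenvalues of the covariance by power iteration with deflation; everything here (signal shapes, noise level) is an illustrative assumption, not the authors' data or model.

```python
import math
import random

rng = random.Random(0)
n_feat, n_obs = 12, 200     # e.g. facial marker coordinates x time samples

# two hypothetical movement primitives, mixed with random weights + small noise
p1 = [math.sin(0.5 * j) for j in range(n_feat)]
p2 = [math.cos(0.3 * j) for j in range(n_feat)]
data = []
for _ in range(n_obs):
    a, b = rng.gauss(0, 1), rng.gauss(0, 1)
    data.append([a * p1[j] + b * p2[j] + rng.gauss(0, 0.01)
                 for j in range(n_feat)])

# sample covariance matrix of the features
mean = [sum(row[j] for row in data) / n_obs for j in range(n_feat)]
cov = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / n_obs
        for j in range(n_feat)] for i in range(n_feat)]

def top_eigenpair(m, iters=500):
    """Power iteration: dominant eigenvalue/eigenvector of a symmetric PSD matrix."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = [sum(mi[j] * v[j] for j in range(len(m))) for mi in m]
        lam = math.sqrt(sum(x * x for x in w))   # ||Mv|| -> dominant eigenvalue
        v = [x / lam for x in w]
    return lam, v

total_var = sum(cov[i][i] for i in range(n_feat))
explained = 0.0
m = [row[:] for row in cov]
for _ in range(2):                      # extract the two leading components
    lam, v = top_eigenpair(m)
    explained += lam
    # deflation: subtract the component just found
    m = [[m[i][j] - lam * v[i] * v[j] for j in range(n_feat)]
         for i in range(n_feat)]

ratio = explained / total_var           # variance captured by two "primitives"
```

On this synthetic rank-2 signal the two leading components capture essentially all of the variance, mirroring the two-primitive finding.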

  1. PAIR Comparison between Two Within-Group Conditions of Resting-State fMRI Improves Classification Accuracy

    PubMed Central

    Zhou, Zhen; Wang, Jian-Bao; Zang, Yu-Feng; Pan, Gang

    2018-01-01

    Classification approaches have been increasingly applied to differentiate patients and normal controls using resting-state functional magnetic resonance imaging (RS-fMRI) data. Although most previous classification studies have reported promising accuracy within individual datasets, achieving high accuracy across multiple datasets remains challenging for two main reasons: high dimensionality and high variability across subjects. We used two independent RS-fMRI datasets (n = 31 and 46, respectively), each with eyes-closed (EC) and eyes-open (EO) conditions. For each dataset, we first reduced the number of features to a small number of brain regions with paired t-tests, using the amplitude of low frequency fluctuation (ALFF) as a metric. Second, we employed a new method for feature extraction, named the PAIR method, examining EC and EO as paired conditions rather than independent conditions. Specifically, for each dataset, we obtained EC minus EO (EC-EO) maps of ALFF from half of the subjects (n = 15 for dataset-1, n = 23 for dataset-2) and obtained EO-EC maps from the other half (n = 16 for dataset-1, n = 23 for dataset-2). A support vector machine (SVM) method was used for classification of EC RS-fMRI mapping and EO mapping. The mean classification accuracy of the PAIR method was 91.40% for dataset-1 and 92.75% for dataset-2 in the conventional frequency band of 0.01–0.08 Hz. For cross-dataset validation, we applied the classifier from dataset-1 directly to dataset-2, and vice versa. The mean accuracy of cross-dataset validation was 94.93% for dataset-1 to dataset-2 and 90.32% for dataset-2 to dataset-1 in the 0.01–0.08 Hz range. For the UNPAIR method, classification accuracy was substantially lower (mean 69.89% for dataset-1 and 82.97% for dataset-2), and was much lower for cross-dataset validation (64.69% for dataset-1 to dataset-2 and 64.98% for dataset-2 to dataset-1) in the 0.01–0.08 Hz range.
In conclusion, for within-group design studies (e.g., paired conditions or follow-up studies), we recommend the PAIR method for feature extraction. In addition, dimensionality reduction with strong prior knowledge of specific brain regions should also be considered for feature selection in neuroimaging studies. PMID:29375288
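A minimal sketch of the paired-conditions idea, ranking brain regions by a paired t-statistic on within-subject EC-minus-EO difference values and keeping the top-ranked regions as features: the synthetic "ALFF" data, subject counts, and effect size are all made up for illustration.

```python
import math
import random

rng = random.Random(42)
n_subjects, n_regions = 30, 8

# Synthetic ALFF values: region 0 carries a consistent EC-vs-EO difference,
# the remaining regions are pure noise.
ec = [[rng.gauss(1.0, 0.2) + (0.5 if r == 0 else 0.0) for r in range(n_regions)]
      for _ in range(n_subjects)]
eo = [[rng.gauss(1.0, 0.2) for r in range(n_regions)] for _ in range(n_subjects)]

def paired_t(xs, ys):
    """Paired t-statistic for two matched samples."""
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    m = sum(d) / n
    var = sum((di - m) ** 2 for di in d) / (n - 1)
    return m / math.sqrt(var / n)

# rank regions by |t| of the within-subject EC-EO difference
t_stats = [paired_t([ec[s][r] for s in range(n_subjects)],
                    [eo[s][r] for s in range(n_subjects)])
           for r in range(n_regions)]
ranking = sorted(range(n_regions), key=lambda r: -abs(t_stats[r]))
```

The top-ranked regions would then feed an SVM, as in the study; here the region with the true paired effect reliably ranks first.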

  2. A unique case of "double-orifice aortic valve"-comprehensive assessment by 2-, 3-dimensional, and color Doppler echocardiography.

    PubMed

    Stirrup, James E; Cowburn, Peter J; Pousios, Dimitrios; Ohri, Sunil K; Shah, Benoy N

    2016-09-01

    Transesophageal echocardiography (TEE) is a powerful imaging tool for the comprehensive assessment of valvular structure and function. TEE may be of added benefit when anatomy is difficult to delineate accurately by transthoracic echocardiography. In this article, we present 2-, 3-dimensional, and color Doppler TEE images from a male patient with aortic stenosis. A highly unusual and complex pattern of valvular calcification created a functionally "double-orifice" valve. Such an abnormality may have implications for the accuracy of continuous-wave Doppler echocardiography, which assumes a single-orifice valve when applied to native aortic valves. © 2016, Wiley Periodicals, Inc.

  3. Three Dimensional Speckle Imaging Employing a Frequency-Locked Tunable Diode Laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, Bret D.; Bernacki, Bruce E.; Schiffern, John T.

    2015-09-01

    We describe a high-accuracy frequency-stepping method for a tunable diode laser that improves a three-dimensional (3D) imaging approach based upon interferometric speckle imaging. The approach, modeled after Takeda, tunes an illumination laser in frequency while speckle interferograms of the object (specklegrams) are acquired at each frequency in a Michelson interferometer. The resulting 3D hypercube of specklegrams encodes spatial information in the x-y plane of each image, with laser tuning arrayed along its z-axis. We present laboratory before-and-after results showing the enhanced 3D imaging that results from precise laser frequency control.

  4. Calculation of two dimensional vortex/surface interference using panel methods

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1980-01-01

    The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.

  5. Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1991-01-01

    A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.

  6. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. Although these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. This is essentially because high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  7. Recent advances in laser triangulation-based measurement of airfoil surfaces

    NASA Astrophysics Data System (ADS)

    Hageniers, Omer L.

    1995-01-01

    The measurement of aircraft jet engine turbine and compressor blades requires a high degree of accuracy. This paper addresses the development and performance attributes of a noncontact electro-optical gaging system specifically designed to meet the airfoil dimensional measurement requirements inherent in turbine and compressor blade manufacture and repair. The system described consists of the following key components: a high-accuracy, dual-channel, laser-based optical sensor; a four-degree-of-freedom mechanical manipulator system; and a computer-based operator interface. Measurement modes of the system include point-by-point data gathering at rates up to 3 points per second and an 'on-the-fly' mode where points can be gathered at rates up to 20 points per second at surface scanning speeds of up to 1 inch per second. Overall system accuracy is +/- 0.0005 inches in a configuration that is usable in the blade manufacturing area. The system's ability to input design data from CAD databases and output measurement data in a CAD-compatible format is discussed.

  8. Investigation of the effects of storage time on the dimensional accuracy of impression materials using cone beam computed tomography

    PubMed Central

    2016-01-01

    PURPOSE The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). MATERIALS AND METHODS Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analysis and Duncan's test (α=.05). RESULTS For Line 1, the total mean differences of Impregum and Hydrogum were statistically different from Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in larger differences than the other impression materials (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). CONCLUSION The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions. PMID:27826388

  9. Investigation of the effects of storage time on the dimensional accuracy of impression materials using cone beam computed tomography.

    PubMed

    Alkurt, Murat; Yeşıl Duymus, Zeynep; Dedeoglu, Numan

    2016-10-01

    The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analysis and Duncan's test (α=.05). For Line 1, the total mean differences of Impregum and Hydrogum were statistically different from Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in larger differences than the other impression materials (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions.

  10. Direct Linear Transformation Method for Three-Dimensional Cinematography

    ERIC Educational Resources Information Center

    Shapiro, Robert

    1978-01-01

    The ability of Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)

  11. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc., and therefore to wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared with multispectral data) restricts their usage for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a consistent way of combining multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sports objects, forest, roads, railroads, etc.

  12. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    PubMed

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space, where each dimension is a feature that can be used for disease diagnosis. We introduce a new concept, the top (k1,k2) outlier, which can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain spaces, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space, with several improvement techniques used for acceleration. Experiments show our method's high accuracy and efficiency.
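The abstract does not define the top (k1,k2) outlier precisely, so the sketch below makes a common assumption: in each sampled possible world (one instance drawn per uncertain object), score each point by the distance to its k1-th nearest neighbor and flag the k2 highest-scoring points; Monte Carlo sampling over worlds then estimates each object's outlier probability. One-dimensional toy data, and the outlier definition itself, are illustrative assumptions.

```python
import random

rng = random.Random(7)

# Each uncertain object: list of (value, probability) instances (1-D for brevity).
# Object 3 is far from the cluster in every one of its instances.
objects = [
    [(0.1, 0.5), (0.2, 0.5)],
    [(0.3, 0.7), (0.4, 0.3)],
    [(0.5, 0.6), (0.6, 0.4)],
    [(9.0, 0.5), (9.5, 0.5)],
    [(0.7, 0.8), (0.8, 0.2)],
]

def sample_world(objs):
    """Draw one possible world: pick one instance per object by its probability."""
    world = []
    for inst in objs:
        u, acc = rng.random(), 0.0
        for value, p in inst:
            acc += p
            if u <= acc:
                world.append(value)
                break
        else:
            world.append(inst[-1][0])
    return world

def outliers_in_world(world, k1, k2):
    """Distance-based outliers: score each point by the distance to its
    k1-th nearest neighbor, flag the k2 highest-scoring points."""
    scores = []
    for i, v in enumerate(world):
        dists = sorted(abs(v - w) for j, w in enumerate(world) if j != i)
        scores.append((dists[k1 - 1], i))
    return {i for _, i in sorted(scores, reverse=True)[:k2]}

# Monte Carlo estimate of each object's probability of being an outlier
n_samples, k1, k2 = 500, 2, 1
counts = [0] * len(objects)
for _ in range(n_samples):
    for i in outliers_in_world(sample_world(objects), k1, k2):
        counts[i] += 1
probs = [c / n_samples for c in counts]
```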

  13. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  14. Classification of large-scale fundus image data sets: a cloud-computing framework.

    PubMed

    Roychowdhury, Sohini

    2016-08-01

    Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and of hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performance of automated screening systems.
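A feature-ranking step of the kind the abstract describes can be sketched by scoring every feature with a univariate statistic, here absolute Pearson correlation with the class label, and keeping the top-k. The synthetic data and the choice of score are illustrative assumptions; the paper itself uses cloud-hosted classifiers and its own ranking strategies.

```python
import random

rng = random.Random(3)
n_samples, n_features, top_k = 300, 20, 5

# Synthetic data: features 0-2 carry label signal, the rest are noise.
labels = [rng.randint(0, 1) for _ in range(n_samples)]
X = [[labels[i] * 0.8 + rng.gauss(0, 0.5) if f < 3 else rng.gauss(0, 0.5)
      for f in range(n_features)] for i in range(n_samples)]

def abs_corr(col, y):
    """Absolute Pearson correlation between one feature column and the label."""
    n = len(col)
    mx, my = sum(col) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(col, y))
    sxx = sum((a - mx) ** 2 for a in col)
    syy = sum((b - my) ** 2 for b in y)
    return abs(sxy / (sxx * syy) ** 0.5)

# rank all features, keep the top_k for the downstream classifier
scores = [abs_corr([X[i][f] for i in range(n_samples)], labels)
          for f in range(n_features)]
selected = sorted(range(n_features), key=lambda f: -scores[f])[:top_k]
```

Training the downstream classifier on only `selected` is what buys the reduction in computation time the abstract reports, at little or no cost in accuracy when the discarded features are uninformative.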

  15. High-order ENO schemes applied to two- and three-dimensional compressible flow

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley

    1991-01-01

    High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.

  16. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    NASA Astrophysics Data System (ADS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for fast evaluation of the high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral to a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. It therefore eliminates force-constant evaluation as the hotspot of many quantum dynamics simulations and may also lift the curse of dimensionality. This general method is applied to anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, the high-dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm^-1 or better and nearly an order of magnitude speedup compared with the original algorithm using force constants for water and formaldehyde.
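The key mechanism the abstract describes, that a canonical (CP) low-rank format turns one high-dimensional quadrature into a short sum of products of one-dimensional quadratures, can be checked directly on a toy separable integrand. The ALS step that fits such a decomposition to a real PES is omitted here, and all functions and grids are illustrative.

```python
import math

# 1-D quadrature nodes and trapezoid-style weights on [0, 1]
n = 21
xs = [i / (n - 1) for i in range(n)]
ws = [(0.5 if i in (0, n - 1) else 1.0) / (n - 1) for i in range(n)]

# rank-2 canonical (CP) form: f(x,y,z) = sum_r a_r(x) * b_r(y) * c_r(z)
factors = [
    (lambda x: math.exp(-x), lambda y: y * y, lambda z: math.cos(z)),
    (lambda x: x,            lambda y: 1 + y, lambda z: math.sin(z)),
]

# brute force: O(n^3) quadrature over the full tensor-product grid
brute = sum(wx * wy * wz * sum(a(x) * b(y) * c(z) for a, b, c in factors)
            for x, wx in zip(xs, ws)
            for y, wy in zip(xs, ws)
            for z, wz in zip(xs, ws))

# low-rank evaluation: one 1-D quadrature per dimension and rank term
low_rank = sum(
    sum(w * a(x) for x, w in zip(xs, ws))
    * sum(w * b(y) for y, w in zip(xs, ws))
    * sum(w * c(z) for z, w in zip(xs, ws))
    for a, b, c in factors)
```

The two results agree to rounding error, while the low-rank route replaces an O(n^d) sum with O(R * d * n) work; this scaling, with Gauss-Hermite nodes in place of the trapezoid grid, is what the CT-XVH2 speedup rests on.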

  17. Stirling Analysis Comparison of Commercial vs. High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  18. Stirling Analysis Comparison of Commercial Versus High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  19. Dimensional accuracy and surface property of titanium casting using gypsum-bonded alumina investment.

    PubMed

    Yan, Min; Takahashi, Hidekazu; Nishimura, Fumio

    2004-12-01

    The aim of the present study was to evaluate the dimensional accuracy and surface property of titanium casting obtained using a gypsum-bonded alumina investment. The experimental gypsum-bonded alumina investment with 20 mass% gypsum content mixed with 2 mass% potassium sulfate was used for five cp titanium castings and three Cu-Zn alloy castings. The accuracy, surface roughness (Ra), and reaction layer thickness of these castings were investigated. The accuracy of the castings obtained from the experimental investment ranged from -0.04 to 0.23%, while surface roughness (Ra) ranged from 7.6 to 10.3 μm. A reaction layer of about 150 μm thickness under the titanium casting surface was observed. These results suggested that the titanium casting obtained using the experimental investment was acceptable. Although the reaction layer was thin, surface roughness should be improved.

  20. A Systematic Review to Uncover a Universal Protocol for Accuracy Assessment of 3-Dimensional Virtually Planned Orthognathic Surgery.

    PubMed

    Gaber, Ramy M; Shaheen, Eman; Falter, Bart; Araya, Sebastian; Politis, Constantinus; Swennen, Gwen R J; Jacobs, Reinhilde

    2017-11-01

    The aim of this study was to systematically review methods used for assessing the accuracy of 3-dimensional virtually planned orthognathic surgery in an attempt to reach an objective assessment protocol that could be universally used. A systematic review of the currently available literature, published until September 12, 2016, was conducted using PubMed as the primary search engine. We performed secondary searches using the Cochrane Database, clinical trial registries, Google Scholar, and Embase, as well as a bibliography search. Included articles were required to have stated clearly that 3-dimensional virtual planning was used and accuracy assessment performed, along with validation of the planning and/or assessment method. Descriptive statistics and quality assessment of included articles were performed. The initial search yielded 1,461 studies. Only 7 studies were included in our review. Considerable variability was found regarding methods used for 1) accuracy assessment of virtually planned orthognathic surgery or 2) validation of the tools used. Included studies were of moderate quality; reviewers' agreement regarding quality was calculated to be 0.5 using the Cohen κ test. On the basis of the findings of this review, it is evident that the literature lacks consensus regarding accuracy assessment. Hence, a protocol is suggested for accuracy assessment of virtually planned orthognathic surgery with the lowest margin of error. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs down to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
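    The selection-plus-quantization idea described above can be sketched as follows. This is a hedged illustration, not the paper's algorithm: variance stands in for the paper's importance-sorting criterion, and sign-based binarization is one common form of 1-bit quantization.

```python
import numpy as np

def select_and_binarize(X, k):
    """Rank dimensions by an importance score (variance here, standing in
    for the paper's importance-sorting criterion), keep the top-k, then
    1-bit quantize by the sign of the mean-centered values."""
    X = np.asarray(X, dtype=float)
    importance = X.var(axis=0)                  # unsupervised score
    keep = np.argsort(importance)[::-1][:k]     # top-k dimension indices
    Xs = X[:, keep] - X[:, keep].mean(axis=0)   # center before taking signs
    return (Xs > 0).astype(np.uint8), keep

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:, 2] *= 10.0                     # make dimension 2 clearly dominant
bits, keep = select_and_binarize(X, k=3)
assert keep[0] == 2                 # highest-variance dimension kept first
assert bits.shape == (100, 3)
```

    For the supervised case, the importance score would instead use label information (e.g., a per-dimension class-separability measure).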

  2. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    PubMed

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a similarity-dissimilarity plot which can project a high dimensional space onto a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot, and some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
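    A minimal sketch of one plausible construction for such a plot, under the assumption (not stated verbatim in the abstract) that the two axes are nearest-neighbor distances within and across classes:

```python
import numpy as np

def similarity_dissimilarity(X, y):
    """For each sample return (d_same, d_other): Euclidean distance to its
    nearest neighbor of the same class and of any other class. Plotting
    d_same against d_other gives a 2-D view regardless of the feature
    dimension; points with d_other >> d_same are well separated, while
    points with d_other < d_same are likely to be misclassified."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)                 # exclude self-distance
    same = y[:, None] == y[None, :]
    d_same = np.where(same, D, np.inf).min(axis=1)
    d_other = np.where(~same, D, np.inf).min(axis=1)
    return d_same, d_other

# Two well-separated clusters in a 3-D feature space.
X = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5.1, 5, 5]])
y = np.array([0, 0, 1, 1])
d_same, d_other = similarity_dissimilarity(X, y)
assert np.all(d_same < d_other)     # every point nearer its own class
```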

  3. Application of Template Matching for Improving Classification of Urban Railroad Point Clouds

    PubMed Central

    Arastounia, Mostafa; Oude Elberink, Sander

    2016-01-01

    This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
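    The point-level precision and accuracy figures quoted above are the standard classification ratios; a small sketch with hypothetical counts for one object class:

```python
def precision(tp, fp):
    """Fraction of points assigned to a class that truly belong to it."""
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    """Fraction of all points that are labeled correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts at the point-cloud level (for illustration only):
assert abs(precision(tp=960, fp=40) - 0.96) < 1e-12
assert abs(accuracy(tp=960, tn=8900, fp=40, fn=100) - 0.986) < 1e-12
```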

  4. Flow simulations about steady-complex and unsteady moving configurations using structured-overlapped and unstructured grids

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1995-01-01

    The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamic (CFD) methodologies that are based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated. These are namely the structured-overlapped and the unstructured grid schemes. Both methods use a cell centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas, the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles-of-attack for the pitching airfoil.
From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU efficient than the unstructured approach.

  5. Combined Loadings and Cross-Dimensional Loadings Timeliness of Presentation of Financial Statements of Local Government

    NASA Astrophysics Data System (ADS)

    Muda, I.; Dharsuky, A.; Siregar, H. S.; Sadalia, I.

    2017-03-01

    This study examines the patterns of timeliness and dimensional accuracy of local government financial statements in North Sumatra, comparing a routine pattern of two (2) months after the fiscal year ends with a pattern of at least three (3) months after the fiscal year ends. This type of research is an explanatory survey with quantitative methods. The population and sample consist of local government officials who prepare local government financial reports. Combined loadings and cross-dimensional loadings analysis was performed with the WarpPLS statistical tool. The results showed varying patterns in the dimensional accuracy of the financial statements of local governments in North Sumatra.

  6. Conceptual study of Earth observation missions with a space-borne laser scanner

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takashi; Sato, Yohei; Yamakawa, Shiro

    2017-11-01

    The Japan Aerospace Exploration Agency (JAXA) has started a conceptual study of earth observation missions with a space-borne laser scanner (GLS, Global Laser Scanner). Laser scanners are systems which transmit intense pulsed laser light to the ground from an airplane or a satellite, receive the scattered light, and measure the distance to the surface from the round-trip delay time of the pulse. With scanning mechanisms, the GLS can obtain high-accuracy three-dimensional (3D) information from all over the world. High-accuracy 3D information is quite useful in various areas, and the following applications are currently considered: 1. Observation of tree heights to estimate biomass quantity. 2. Production of a global elevation map with high resolution. 3. Observation of ice sheets. This paper reports the present state of our conceptual study of the GLS, including a prospective performance assessment of the GLS for the earth observation missions mentioned above.
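    The ranging principle described above reduces to the standard round-trip time-of-flight relation; a minimal sketch (the 400 km figure is an illustrative altitude, not a GLS specification):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(delay_s):
    """One-way range from the round-trip pulse delay: the pulse travels
    to the surface and back, so the distance is c * t / 2."""
    return C * delay_s / 2.0

# A 400 km range corresponds to a round-trip delay of about 2.67 ms.
assert abs(range_from_delay(2 * 400e3 / C) - 400e3) < 1e-6
```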

  7. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles, and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. Results demonstrate the efficiency and feasibility of the proposed method for extraction of road features for HADMs.

  8. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. In particular, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  9. An experimental apparatus for diffraction-limited soft x-ray nano-focusing

    NASA Astrophysics Data System (ADS)

    Merthe, Daniel J.; Goldberg, Kenneth A.; Yashchuk, Valeriy V.; Yuan, Sheng; McKinney, Wayne R.; Celestre, Richard; Mochi, Iacopo; Macdougall, James; Morrison, Gregory Y.; Rakawa, Senajith B.; Anderson, Erik; Smith, Brian V.; Domning, Edward E.; Warwick, Tony; Padmore, Howard

    2011-09-01

    Realizing the experimental potential of high-brightness, next generation synchrotron and free-electron laser light sources requires the development of reflecting x-ray optics capable of wavefront preservation and high-resolution nano-focusing. At the Advanced Light Source (ALS) beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for diffraction-limited Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of wavefront-sensing tests with increasing accuracy and sensitivity, including scanning-slit Hartmann tests, grating-based lateral shearing interferometry, and quantitative knife-edge testing. We describe the original experimental techniques and alignment methodology that have enabled us to optimally set a bendable KB mirror to achieve a focused, FWHM spot size of 150 nm, with 1 nm (1.24 keV) photons at 3.7 mrad numerical aperture. The predictions of wavefront measurement are confirmed by the knife-edge testing. The side-profiled elliptically bent mirror used in these one-dimensional focusing experiments was originally designed for a much different glancing angle and conjugate distances. Visible-light long-trace profilometry was used to pre-align the mirror before installation at the beamline. This work demonstrates that high-accuracy, at-wavelength wavefront-slope feedback can be used to optimize the pitch, roll, and mirror-bending forces in situ, using procedures that are deterministic and repeatable.

  10. High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.

  11. High order finite volume WENO schemes for the Euler equations under gravitational fields

    NASA Astrophysics Data System (ADS)

    Li, Gang; Xing, Yulong

    2016-07-01

    Euler equations with gravitational source terms are used to model many astrophysical and atmospheric phenomena. This system admits hydrostatic balance where the flux produced by the pressure is exactly canceled by the gravitational source term, and two commonly seen equilibria are the isothermal and polytropic hydrostatic solutions. Exact preservation of these equilibria is desirable as many practical problems are small perturbations of such balance. High order finite difference weighted essentially non-oscillatory (WENO) schemes have been proposed in [22], but only for the isothermal equilibrium state. In this paper, we design high order well-balanced finite volume WENO schemes, which can preserve not only the isothermal equilibrium but also the polytropic hydrostatic balance state exactly, and maintain genuine high order accuracy for general solutions. The well-balanced property is obtained by novel source term reformulation and discretization, combined with well-balanced numerical fluxes. Extensive one- and two-dimensional simulations are performed to verify the well-balanced property, high order accuracy, and good resolution for smooth and discontinuous solutions.
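    The isothermal equilibrium targeted by such schemes can be checked directly: for an ideal gas at constant temperature under a linear potential, the pressure gradient cancels the gravitational source term. A small numerical sanity check of the equilibrium itself, with illustrative values (this verifies the balance relation, not a WENO discretization):

```python
import numpy as np

# Isothermal hydrostatic balance for the 1-D Euler equations with a
# linear potential phi(x) = g*x: the momentum source term -rho*g must
# exactly cancel the pressure gradient dp/dx.
rho0, p0, g = 1.21, 1.0, 1.0
x = np.linspace(0.0, 1.0, 2001)
rho = rho0 * np.exp(-rho0 * g * x / p0)   # isothermal: p = (p0/rho0) * rho
p = p0 * np.exp(-rho0 * g * x / p0)

dpdx = np.gradient(p, x, edge_order=2)    # discrete pressure gradient
residual = dpdx + rho * g                 # vanishes at exact equilibrium
assert np.max(np.abs(residual)) < 1e-5
```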

  12. Transient and 2-Dimensional Shear-Wave Elastography Provide Comparable Assessment of Alcoholic Liver Fibrosis and Cirrhosis.

    PubMed

    Thiele, Maja; Detlefsen, Sönke; Sevelsted Møller, Linda; Madsen, Bjørn Stæhr; Fuglsang Hansen, Janne; Fialla, Annette Dam; Trebicka, Jonel; Krag, Aleksander

    2016-01-01

    Alcohol abuse causes half of all deaths from cirrhosis in the West, but few tools are available for noninvasive diagnosis of alcoholic liver disease. We evaluated 2 elastography techniques for diagnosis of alcoholic fibrosis and cirrhosis; liver biopsy with Ishak score and collagen-proportionate area were used as reference. We performed a prospective study of 199 consecutive patients with ongoing or prior alcohol abuse, but without known liver disease. One group of patients had a high pretest probability of cirrhosis because they were identified at hospital liver clinics (in Southern Denmark). The second, lower-risk group was recruited from municipal alcohol rehabilitation centers and the Danish national public health portal. All subjects underwent same-day transient elastography (FibroScan), 2-dimensional shear wave elastography (Supersonic Aixplorer), and liver biopsy after an overnight fast. Transient elastography and 2-dimensional shear wave elastography identified subjects in each group with significant fibrosis (Ishak score ≥3) and cirrhosis (Ishak score ≥5) with high accuracy (area under the curve ≥0.92). There was no difference in diagnostic accuracy between techniques. The cutoff values for optimal identification of significant fibrosis by transient elastography and 2-dimensional shear wave elastography were 9.6 kPa and 10.2 kPa, and for cirrhosis 19.7 kPa and 16.4 kPa. Negative predictive values were high for both groups, but the positive predictive value for cirrhosis was >66% in the high-risk group vs approximately 50% in the low-risk group. Evidence of alcohol-induced damage to cholangiocytes, but not ongoing alcohol abuse, affected liver stiffness. The collagen-proportionate area correlated with Ishak grades and accurately identified individuals with significant fibrosis and cirrhosis.
In a prospective study of individuals at risk for liver fibrosis due to alcohol consumption, we found elastography to be an excellent tool for diagnosing liver fibrosis and for excluding (ruling out rather than ruling in) cirrhosis. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.

  13. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are utilized for mass production or batch production with high accuracy at a deterministic manufacturing principle. The volumetric accuracy of machine tools depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool geometric error characterization was carried out through a standard or an artifact having geometry similar to the mass production or batch production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could then be run with compensated codes. This methodology was found quite effective for manufacturing high precision components with greater dimensional accuracy and reliability. Calibration by on-line measurement makes it possible to improve the manufacturing process through the deterministic manufacturing principle and was found efficient and economical, though limited to the workspace or envelope surface of the measured artifact's geometry or profile.

  14. Kinematic and kinetic analysis of overhand, sidearm and underhand lacrosse shot techniques.

    PubMed

    Macaulay, Charles A J; Katz, Larry; Stergiou, Pro; Stefanyshyn, Darren; Tomaghelli, Luciano

    2017-12-01

    Lacrosse requires the coordinated performance of many complex skills. One of these skills is shooting on the opponents' net using one of three techniques: overhand, sidearm or underhand. The purpose of this study was to (i) determine which technique generated the highest ball velocity and greatest shot accuracy and (ii) identify kinematic and kinetic variables that contribute to a high velocity and high accuracy shot. Twelve elite male lacrosse players participated in this study. Kinematic data were sampled at 250 Hz, while two-dimensional force plates collected ground reaction force data (1000 Hz). Statistical analysis showed significantly greater ball velocity for the sidearm technique than overhand (P < 0.001) and underhand (P < 0.001) techniques. No statistical difference was found for shot accuracy (P > 0.05). Kinematic and kinetic variables were not significantly correlated to shot accuracy or velocity across all shot types; however, when analysed independently, the lead foot horizontal impulse showed a negative correlation with underhand ball velocity (P = 0.042). This study identifies the technique with the highest ball velocity, defines kinematic and kinetic predictors related to ball velocity and provides information to coaches and athletes concerned with improving lacrosse shot performance.

  15. Three-dimensional accuracy of plastic transfer impression copings for three implant systems.

    PubMed

    Teo, Juin Wei; Tan, Keson B; Nicholls, Jack I; Wong, Keng Mun; Uy, Joanne

    2014-01-01

    The purpose of this study was to compare the three-dimensional accuracy of indirect plastic impression copings and direct implant-level impression copings from three implant systems (Nobel Biocare [NB], Biomet 3i [3i], and Straumann [STR]) at three interimplant buccolingual angulations (0, 8, and 15 degrees). Two-implant master models were used to simulate a three-unit implant fixed partial denture. Test models were made from Impregum impressions using direct implant-level impression copings (DR). Abutments were then connected to the master models for impressions using the plastic impression copings (INDR) at three different angulations for a total of 18 test groups (n = 5 in each group). A coordinate measuring machine was used to measure linear distortions, three-dimensional (3D) distortions, angular distortions, and absolute angular distortions between the master and test models. Three-way analysis of variance showed that the implant system had a significant effect on 3D distortions and absolute angular distortions in the x- and y-axes. Interimplant angulation had a significant effect on 3D distortions and absolute angular distortions in the y-axis. Impression technique had a significant effect on absolute angular distortions in the y-axis. With DR, the NB and 3i systems were not significantly different. With INDR, 3i appeared to have less distortion than the other systems. Interimplant angulations did not significantly affect the accuracy of NBDR, 3iINDR, and STRINDR. The accuracy of INDR and DR was comparable at all interimplant angulations for 3i and STR. For NB, INDR was comparable to DR at 0 and 8 degrees but was less accurate at 15 degrees. Three-dimensional accuracy of implant impressions varied with implant system, interimplant angulation, and impression technique.

  16. A three-dimensional laser vibration measurement technology realized on five laser beam and its calibration

    NASA Astrophysics Data System (ADS)

    Li, Lu-Ke; Zhang, Shen-Feng

    2018-03-01

    A technique for obtaining three-dimensional vibration information of a vibrating object by means of five He-Ne laser beams is put forward, and the three-dimensional laser vibrometer developed with this technology is measured and calibrated with the help of a three-axis contact sensor. The technology is based on the Doppler interference principle and signal demodulation: vibration information of the object is acquired and, through algorithmic processing, the three-dimensional vibration information of the object in space is extracted. The angles of the five beams in space can also be calibrated, which avoids the effects of mechanical installation error and greatly improves the accuracy of measurement. A B&K 4527 contact three-axis sensor is used to measure and calibrate the three-dimensional laser vibrometer, which ensures the accuracy of the measurement data. The advantages and disadvantages of contact and non-contact sensors are summarized, and future development trends of the sensor industry are analyzed.

  17. Autofocus algorithm using one-dimensional Fourier transform and Pearson correlation

    NASA Astrophysics Data System (ADS)

    Bueno Mario, A.; Alvarez-Borrego, Josue; Acho, L.

    2004-10-01

    A new autofocus algorithm for a Z-automated microscope, based on the one-dimensional Fourier transform and Pearson correlation, is proposed. Our goal is to determine the best-focused plane quickly and accurately through an algorithm. We capture, in bright and dark field, several sets of images at different Z distances from a biological organism sample. The algorithm uses the one-dimensional Fourier transform to obtain the frequency content of a previously defined pattern of vectors in each image; by comparing the Pearson correlation of these frequency vectors against the frequency vector of the reference image (the most out-of-focus image), we find the best focus. Experimental results showed that the algorithm has a fast response time and accuracy in finding the best focal plane from the captured images. In conclusion, the algorithm can be implemented in real-time systems due to its fast response time, accuracy and robustness. It can be used to obtain focused images in bright and dark field, and it can be extended to include fusion techniques to construct multifocus final images, which is beyond the scope of this paper.
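    A hedged sketch of one plausible reading of this approach (the paper's exact vector pattern and correlation setup may differ): correlate each image's 1-D spectrum against that of the most defocused image; the best-focused plane correlates least with the blurred reference.

```python
import numpy as np

def focus_scores(stack, ref_idx):
    """Magnitude of the 1-D FFT of each image's central row, Pearson-
    correlated against the same vector from a reference image (assumed
    to be the most defocused one). Sharp images carry high-frequency
    content absent from the blurred reference, so the best-focused
    plane is the one with the LOWEST correlation."""
    def spectrum(img):
        img = np.asarray(img, dtype=float)
        return np.abs(np.fft.rfft(img[img.shape[0] // 2]))
    ref = spectrum(stack[ref_idx])
    return [np.corrcoef(spectrum(im), ref)[0, 1] for im in stack]

def blur(img, n):
    """Circular 3-tap box blur applied n times along the rows."""
    for _ in range(n):
        img = (np.roll(img, 1, 1) + img + np.roll(img, -1, 1)) / 3.0
    return img

# Synthetic Z-stack: a sharp stripe target blurred by increasing amounts.
sharp = np.tile((np.arange(64) % 2) * 1.0, (16, 1))
stack = [blur(sharp, n) for n in (0, 4, 16)]   # index 0 is best focused
scores = focus_scores(stack, ref_idx=2)        # most defocused reference
assert int(np.argmin(scores)) == 0
```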

  18. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. To image, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at a high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of a robot arm, and that the error of the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071

  19. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    NASA Astrophysics Data System (ADS)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique using nonmetric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a certain number of global control points of the excavation site, to reconstruct high-precision measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes affecting image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes, but it yields lower accuracy when reconstructing a 3-D model of a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavations, investigation, and site protection planning. The proposed method has comprehensive application value.

  20. Rotator cuff tear shape characterization: a comparison of two-dimensional imaging and three-dimensional magnetic resonance reconstructions.

    PubMed

    Gyftopoulos, Soterios; Beltran, Luis S; Gibbs, Kevin; Jazrawi, Laith; Berman, Phillip; Babb, James; Meislin, Robert

    2016-01-01

    The purpose of this study was to see if 3-dimensional (3D) magnetic resonance imaging (MRI) could improve our understanding of rotator cuff tendon tear shapes. We believed that 3D MRI would be more accurate than two-dimensional (2D) MRI for classifying tear shapes. We performed a retrospective review of MRI studies of patients with arthroscopically proven full-thickness rotator cuff tears. Two orthopedic surgeons reviewed the information for each case, including scope images, and characterized the shape of the cuff tear into crescent, longitudinal, U- or L-shaped longitudinal, and massive type. Two musculoskeletal radiologists reviewed the corresponding MRI studies independently and blind to the arthroscopic findings and characterized the shape on the basis of the tear's retraction and size using 2D MRI. The 3D reconstructions of each cuff tear were reviewed by each radiologist to characterize the shape. Statistical analysis included 95% confidence intervals and intraclass correlation coefficients. The study reviewed 34 patients. The accuracy for differentiating between crescent-shaped, longitudinal, and massive tears using measurements on 2D MRI was 70.6% for reader 1 and 67.6% for reader 2. The accuracy for tear shape characterization into crescent and longitudinal U- or L-shaped using 3D MRI was 97.1% for reader 1 and 82.4% for reader 2. When further characterizing the longitudinal tears as massive or not using 3D MRI, both readers had an accuracy of 76.9% (10 of 13). The overall accuracy of 3D MRI was 82.4% (56 of 68), significantly different (P = .021) from 2D MRI accuracy (64.7%). Our study has demonstrated that 3D MR reconstructions of the rotator cuff improve the accuracy of characterizing rotator cuff tear shapes compared with current 2D MRI-based techniques. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  1. Effect of dental technician disparities on the 3-dimensional accuracy of definitive casts.

    PubMed

    Emir, Faruk; Piskin, Bulent; Sipahi, Cumhur

    2017-03-01

    Studies that evaluated the effect of dental technician disparities on the accuracy of presectioned and postsectioned definitive casts are lacking. The purpose of this in vitro study was to evaluate the accuracy of presectioned and postsectioned definitive casts fabricated by different dental technicians by using a 3-dimensional computer-aided measurement method. An arch-shaped metal master model consisting of 5 abutments resembling prepared mandibular incisors, canines, and first molars and with a 6-degree total angle of convergence was designed and fabricated by computer-aided design and computer-aided manufacturing (CAD-CAM) technology. Complete arch impressions were made (N=110) from the master model, using polyvinyl siloxane (PVS) and delivered to 11 dental technicians. Each technician fabricated 10 definitive casts with dental stone, and the obtained casts were numbered. All casts were sectioned, and removable dies were obtained. The master model and the presectioned and postsectioned definitive casts were digitized with an extraoral scanner, and the virtual master model and virtual presectioned and postsectioned definitive casts were obtained. All definitive casts were compared with the master model by using computer-aided measurements, and the 3-dimensional accuracy of the definitive casts was determined with best fit alignment and represented in color-coded maps. Differences were analyzed using univariate analyses of variance, and the Tukey honest significant differences post hoc tests were used for multiple comparisons (α=.05). The accuracy of presectioned and postsectioned definitive casts was significantly affected by dental technician disparities (P<.001). The largest dimensional changes were detected in the anterior abutments of both of the definitive casts. The changes mostly occurred in the mesiodistal dimension (P<.001). Within the limitations of this in vitro study, the accuracy of presectioned and postsectioned definitive casts is susceptible to dental technician differences. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  2. A hybrid intelligent method for three-dimensional short-term prediction of dissolved oxygen content in aquaculture

    PubMed Central

    Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang

    2018-01-01

    A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to optimize the hyperparameters required by the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. PMID:29466394
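The paper's SC-K-means-RBF implementation is not given here; the sketch below shows only the generic core of such a model, a K-means-initialized Gaussian RBF network fitted by least squares on synthetic data. The function names, the toy "dissolved oxygen" surface, and the width value are all hypothetical stand-ins.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, used only to place the RBF centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, width):
    """Gaussian design matrix: one basis function per center."""
    return np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width**2))

# synthetic "dissolved oxygen" surface over two inputs (e.g. time of day, depth)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]

centers = kmeans(X, k=25)
Phi = rbf_design(X, centers, width=0.2)
w = np.linalg.lstsq(Phi, y, rcond=None)[0]          # output weights by least squares
rmse = float(np.sqrt(np.mean((Phi @ w - y) ** 2)))  # training fit quality
```

In the paper's hybrid, subtractive clustering additionally informs the choice of cluster count and widths; here those are fixed by hand.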

  3. Materials for interocclusal records and their ability to reproduce a 3-dimensional jaw relationship.

    PubMed

    Ockert-Eriksson, G; Eriksson, A; Lockowandt, P; Eriksson, O

    2000-01-01

    The purpose of this study was to determine whether the accuracy and dimensional stability of vinyl polysiloxanes and irreversible hydrocolloids stabilized by a tray, used for fixed prosthodontic, removable partial, and complete denture cases, are comparable to those of waxes and record rims, and whether storage time (24 hours or 6 days) affects the dimensional stability of the tested materials. Two waxes, two record rims, three vinyl polysiloxanes, and one irreversible hydrocolloid (alginate) were examined. Three pairs of master casts with measuring steel rods were mounted on an articulator (initial position). Five records were made of each material, and the upper cast was remounted after 24 hours or 6 days so that deviations from the initial position could be measured. Vinyl polysiloxanes reinforced by a stabilization tray were the most accurate materials able to reproduce a settled interocclusal position. Mounting casts (fixed prosthodontics cases) without records gave accuracy similar to wax records. Record rims used for removable partial and complete denture cases produced lower accuracy than vinyl polysiloxanes and irreversible hydrocolloid stabilized by a tray. Accuracy was not significantly affected by storage time. The results show that the accuracy of vinyl polysiloxanes and irreversible hydrocolloids reinforced by a tray is superior to that of record rims with regard to the complete denture case and is among the most accurate with regard to the removable partial denture case. For fixed prosthodontics, however, reinforcement is unnecessary.

  4. A Biomechanical Modeling Guided CBCT Estimation Technique

    PubMed Central

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-01-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866

  5. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator

    NASA Astrophysics Data System (ADS)

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G.

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
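As a minimal illustration of the underlying EDMD step (with a fixed monomial dictionary, not the paper's trainable neural-network dictionary), the Koopman operator can be approximated by least squares on snapshot pairs. The toy dynamical map below is invented for the sketch.

```python
import numpy as np

def dictionary(x):
    """Fixed observable dictionary: monomials up to degree 3 of the scalar state."""
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 500)          # snapshot states
Y = 0.9 * X - 0.1 * X**3                 # their images under a toy nonlinear map

# EDMD: least-squares Koopman approximation K on the span of the dictionary
K = np.linalg.lstsq(dictionary(X), dictionary(Y), rcond=None)[0]

# one-step prediction of the observable g(x) = x (second dictionary column)
x_pred = dictionary(X) @ K[:, 1]
err = float(np.max(np.abs(x_pred - Y)))
```

Because the toy map lies exactly in the span of the cubic dictionary, the prediction is essentially exact; the paper's contribution is learning the dictionary when no such span is known a priori.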

  6. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator.

    PubMed

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M; Kevrekidis, Ioannis G

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.

  7. Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests.

    PubMed

    Hu, Chen; Steingrimsson, Jon Arni

    2018-01-01

    A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.
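To make the splitting rule concrete, here is a hedged NumPy sketch of the two-sample log-rank statistic commonly used to score candidate splits in survival trees. The synthetic cohort and the cutpoint scan are illustrative only, not from the trial analyzed in the article.

```python
import numpy as np

def logrank_stat(time, event, group):
    """Two-sample log-rank statistic: a common split score in survival trees.
    time: follow-up time; event: 1 = event, 0 = censored; group: 0/1 child node."""
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        num += d1 - d * n1 / n                              # observed - expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num**2 / var if var > 0 else 0.0                 # ~ chi-squared, 1 df

def best_split(x, time, event):
    """Scan cutpoints on one covariate; keep the cut with the largest statistic."""
    cuts = np.unique(x)[:-1]
    scores = [logrank_stat(time, event, (x > c).astype(int)) for c in cuts]
    k = int(np.argmax(scores))
    return float(cuts[k]), float(scores[k])

# synthetic cohort: hazard changes at x = 0.5, with random censoring
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 1.0, n)
t_event = rng.exponential(1.0 / np.where(x > 0.5, 3.0, 0.3))
t_cens = rng.exponential(5.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

cut, score = best_split(x, time, event)
```

A survival tree applies this scan recursively to grow nodes; a random survival forest averages many such trees over bootstrap samples and random covariate subsets.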

  8. A resolution measure for three-dimensional microscopy

    PubMed Central

    Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2009-01-01

    A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040
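A simplified numerical illustration of this idea, under assumptions standing in for the paper's full 3D image model (1D Gaussian point-spread functions, Poisson pixel noise): the CRLB on the separation estimate follows from the Fisher information and improves as the square root of the photon count.

```python
import numpy as np

def mean_counts(d, sigma, n_photons, x, dx):
    """Expected Poisson count per pixel: two equal Gaussian PSFs separated by d."""
    pdf = lambda m: np.exp(-((x - m) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return 0.5 * n_photons * dx * (pdf(-d / 2) + pdf(d / 2))

def crlb_separation(d, sigma, n_photons, x, dx, h=1e-6):
    """CRLB on the std of the estimated separation d under Poisson pixel noise:
    Fisher information I(d) = sum_i (dmu_i/dd)^2 / mu_i, bound = 1/sqrt(I)."""
    mu = mean_counts(d, sigma, n_photons, x, dx)
    dmu = (mean_counts(d + h, sigma, n_photons, x, dx) - mu) / h  # numeric d/dd
    return 1.0 / np.sqrt(np.sum(dmu**2 / mu))

dx = 0.01
x = np.arange(-2.0, 2.0, dx) + dx / 2        # pixel centers, arbitrary units
lo_n = crlb_separation(0.05, 0.1, 1000, x, dx)    # separation half the PSF width
hi_n = crlb_separation(0.05, 0.1, 10000, x, dx)   # ten times more photons
```

Since the Fisher information scales linearly with the photon count, the bound shrinks by exactly the square root of ten here, which is the abstract's point that arbitrarily small separations become estimable given enough photons.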

  9. Numerical solution of the Black-Scholes equation using cubic spline wavelets

    NASA Astrophysics Data System (ADS)

    Černá, Dana

    2016-12-01

    The Black-Scholes equation is used in financial mathematics for the computation of market values of options at a given time. We use the θ-scheme for time discretization and an adaptive scheme based on wavelets for discretization on the given time level. Advantages of the proposed method are a small number of degrees of freedom, high-order accuracy with respect to the variables representing prices, and a relatively small number of iterations needed to resolve the problem with a desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare an isotropic and an anisotropic approach. Numerical experiments are presented for the two-dimensional Black-Scholes equation.
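The wavelet construction itself is beyond a short sketch, but the θ-scheme time discretization can be illustrated with an ordinary finite-difference grid standing in for the wavelet basis. Grid sizes and market parameters below are illustrative; θ = 0.5 gives Crank-Nicolson.

```python
import math
import numpy as np

def bs_call_theta_scheme(S0, K, r, sigma, T, Smax=300.0, M=300, N=300, theta=0.5):
    """European call via the theta-scheme in time and central differences in S.
    (Finite differences stand in here for the paper's spline-wavelet basis.)"""
    dS, dt = Smax / M, T / N
    S = np.linspace(0.0, Smax, M + 1)
    V = np.maximum(S - K, 0.0)                         # payoff at zero time to maturity
    i = np.arange(1, M)
    # Black-Scholes spatial operator on interior nodes (tridiagonal coefficients)
    a = 0.5 * sigma**2 * S[i] ** 2 / dS**2 - 0.5 * r * S[i] / dS
    b = -(sigma**2) * S[i] ** 2 / dS**2 - r
    c = 0.5 * sigma**2 * S[i] ** 2 / dS**2 + 0.5 * r * S[i] / dS
    L = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    A_inv = np.linalg.inv(np.eye(M - 1) - theta * dt * L)   # a production solver
    B = np.eye(M - 1) + (1 - theta) * dt * L                # would exploit the
    for m in range(N):                                      # tridiagonal structure
        tau0, tau1 = m * dt, (m + 1) * dt                   # time to maturity
        rhs = B @ V[1:M]
        ub0 = Smax - K * math.exp(-r * tau0)                # boundary V(Smax, tau)
        ub1 = Smax - K * math.exp(-r * tau1)
        rhs[-1] += dt * c[-1] * ((1 - theta) * ub0 + theta * ub1)
        V[1:M] = A_inv @ rhs                                # V(0, tau) = 0 adds nothing
        V[M] = ub1
    return float(np.interp(S0, S, V))

def bs_call_analytic(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price, for checking the scheme."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

price = bs_call_theta_scheme(100.0, 100.0, 0.05, 0.2, 1.0)
```

The adaptive wavelet basis in the paper replaces this uniform grid, which is what cuts the degrees of freedom.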

  10. Sex determination from the mandibular ramus flexure of Koreans by discriminant function analysis using three-dimensional mandible models.

    PubMed

    Lin, Chenghe; Jiao, Benzheng; Liu, Shanshan; Guan, Feng; Chung, Nak-Eun; Han, Seung-Ho; Lee, U-Young

    2014-03-01

    It has been known that mandibular ramus flexure is an important morphologic trait for sex determination. However, it becomes unavailable when the mandible is incomplete or fragmented, so anthropometric analysis of incomplete or fragmented mandibles becomes more important. The aim of this study is to investigate the sex-discriminant potential of mandibular ramus flexure on Korean three-dimensional (3D) mandible models with anthropometric analysis. The sample consists of 240 three-dimensional mandibular models obtained from a Korean population (M:F, 120:120; mean age 46.2 y), collected by The Catholic Institute for Applied Anatomy, The Catholic University of Korea. Eleven metric parameters were measured with the Mimics anthropometry toolkit. These parameters were subjected to different discriminant function analyses using SPSS 17.0. Univariate analyses showed that the resubstitution accuracies for sex determination range from 50.4 to 77.1%. Mandibular flexure upper border (MFUB), maximum ramus vertical height (MRVH), and upper ramus vertical height (URVH) expressed the greatest dimorphism, 72.1 to 77.1%. Bivariate analyses indicated that the combination of MFUB and MRVH yielded an even higher resubstitution accuracy of 81.7%. Furthermore, the direct and stepwise discriminant analyses with the variables on the upper ramus above the flexure could predict sex in 83.3 and 85.0% of cases, respectively. When all variables of mandibular ramus flexure were entered into stepwise discriminant analysis, the resubstitution accuracy reached 88.8%. We therefore conclude that the upper ramus above the flexure holds greater potential than the mandibular ramus flexure itself for predicting sex, and that the equations from the bivariate and multivariate analyses in our study will be helpful for sex determination in the Korean population in forensic science and law. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
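As a sketch of the discriminant-analysis machinery (not the study's data), a two-class Fisher linear discriminant with a resubstitution accuracy check fits in a few lines. The synthetic "measurements", their means, and spreads are invented for illustration.

```python
import numpy as np

def fisher_lda(X, y):
    """Two-class Fisher linear discriminant: direction w and midpoint threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w, w @ (m0 + m1) / 2.0

# invented stand-ins for three mandibular measurements, 120 per sex as in the study
rng = np.random.default_rng(0)
male = rng.normal([62.0, 14.0, 48.0], [3.0, 1.5, 2.5], size=(120, 3))
female = rng.normal([57.0, 12.5, 44.0], [3.0, 1.5, 2.5], size=(120, 3))
X = np.vstack([male, female])
y = np.array([1] * 120 + [0] * 120)

w, thr = fisher_lda(X, y)
acc = float(np.mean(((X @ w > thr).astype(int)) == y))  # resubstitution accuracy
```

Resubstitution accuracy, as reported in the abstract, evaluates the rule on the same sample it was fitted on and therefore tends to be optimistic relative to cross-validation.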

  11. Flame Kernel Interactions in a Turbulent Environment

    DTIC Science & Technology

    2001-08-01

    contours ranging from 1 (fully burned) at the centre to 0 (unburned) on the outer contour. In each case the flames can clearly be seen to propagate outwards...called SENGA. The code solves a fully compressible reacting flow in three dimensions. High accuracy numerical schemes have been employed which are... Finally, results are presented and discussed for simulations with different initial non-dimensional turbulence intensities ranging from 5 to 23.

  12. Application of 3D Laser Scanning Technology in Inspection and Dynamic Reserves Detection of Open-Pit Mine

    NASA Astrophysics Data System (ADS)

    Hu, Zhumin; Wei, Shiyu; Jiang, Jun

    2017-10-01

    Traditional means of open-pit mine mining-rights verification and dynamic reserve detection rely on a total station and RTK to collect the turning-point coordinates of mining surface contours. Limited by the accuracy of traditional measurement equipment and measurement methods, these means yield results of low precision and large error. Three-dimensional scanning technology can obtain three-dimensional coordinate data of the surface of the measured object over a large area at high resolution. This paper expounds common applications of 3D scanning technology in the inspection and dynamic reserve detection of open-pit mining rights.

  13. Measurement of two-dimensional thickness of micro-patterned thin film based on image restoration in a spectroscopic imaging reflectometer.

    PubMed

    Kim, Min-Gab; Kim, Jin-Yong

    2018-05-01

    In this paper, we introduce a method to overcome the limitations of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of the two-dimensional thin-film thickness, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring, prior to the analysis of the spectral reflectance profiles from each pixel of the multispectral images.
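Thickness is typically recovered per pixel by fitting a thin-film reflectance model to the measured spectrum. A minimal sketch for an ideal single non-absorbing layer at normal incidence follows; the refractive indices, wavelength band, and film thickness are chosen arbitrarily, not taken from the paper's samples.

```python
import numpy as np

def reflectance(wl, d, n_film=1.46, n_sub=3.88):
    """Normal-incidence reflectance of an ideal, non-absorbing single film
    (air / film / substrate); wl and d in nanometers, indices are examples."""
    r01 = (1.0 - n_film) / (1.0 + n_film)          # air/film Fresnel coefficient
    r12 = (n_film - n_sub) / (n_film + n_sub)      # film/substrate coefficient
    phase = np.exp(-2j * (2.0 * np.pi * n_film * d / wl))
    r = (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)
    return np.abs(r) ** 2

wl = np.linspace(450.0, 700.0, 64)                 # multispectral wavelengths, nm
true_d = 520.0                                     # nm, thickness to recover
measured = reflectance(wl, true_d)                 # spectrum at one image pixel

# per-pixel recovery: grid-search the model thickness against the spectrum
cand = np.arange(100.0, 1000.0, 1.0)
errs = [float(np.sum((reflectance(wl, d) - measured) ** 2)) for d in cand]
d_hat = float(cand[int(np.argmin(errs))])
```

The paper's contribution is upstream of this fit: deconvolving the multispectral images first so that the per-pixel spectrum entering such a fit is not contaminated by neighboring pattern features.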

  14. Capsule Ablator Inflight Performance Measurements Via Streaked Radiography Of ICF Implosions On The NIF*

    NASA Astrophysics Data System (ADS)

    Dewald, E. L.; Tommasini, R.; Mackinnon, A.; MacPhee, A.; Meezan, N.; Olson, R.; Hicks, D.; LePape, S.; Izumi, N.; Fournier, K.; Barrios, M. A.; Ross, S.; Pak, A.; Döppner, T.; Kalantar, D.; Opachich, K.; Rygg, R.; Bradley, D.; Bell, P.; Hamza, A.; Dzenitis, B.; Landen, O. L.; MacGowan, B.; LaFortune, K.; Widmayer, C.; Van Wonterghem, B.; Kilkenny, J.; Edwards, M. J.; Atherton, J.; Moses, E. I.

    2016-03-01

    Streaked 1-dimensional (slit imaging) radiography of 1.1 mm radius capsules in ignition hohlraums was recently introduced on the National Ignition Facility (NIF) and gives a continuous in-flight record of capsule ablator implosion velocities, shell thickness and remaining mass in the last 3-5 ns before peak implosion time. The high-quality data deliver implosion metrics with an accuracy that meets our requirements for ignition and agree with recently introduced 2-dimensional pinhole radiography. Calculations match the measured trajectory across various capsule designs and laser drives when the peak laser power is reduced by 20%. Furthermore, calculations that match the measured trajectories also give good agreement in ablator shell thickness and remaining mass.

  15. High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

  16. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    NASA Astrophysics Data System (ADS)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the considered fractional-order problem into an easily solvable system of algebraic equations, whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing the results of our Matlab simulations with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  17. Implicit Total Variation Diminishing (TVD) schemes for steady-state calculations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Warming, R. F.; Harten, A.

    1983-01-01

    We describe the application of a new implicit, unconditionally stable, high-resolution total variation diminishing (TVD) scheme to steady-state calculations. The scheme is a member of a one-parameter family of explicit and implicit second-order accurate schemes developed by Harten for the computation of weak solutions of hyperbolic conservation laws, and is guaranteed not to generate spurious oscillations for a nonlinear scalar equation and a constant-coefficient system. Numerical experiments show that this scheme not only has a rapid convergence rate, but also generates a highly resolved approximation to the steady-state solution. A detailed implementation of the implicit scheme for the one- and two-dimensional compressible inviscid equations of gas dynamics is presented. Some numerical computations of one- and two-dimensional fluid flows containing shocks demonstrate the efficiency and accuracy of this new scheme.
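The implicit scheme itself is involved, but the TVD property is easy to demonstrate with an explicit minmod-limited relative: for linear advection, the total variation of a square wave does not grow. The scheme and parameters below are an illustrative stand-in, not Harten's implicit scheme.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, the smaller-magnitude slope elsewhere."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, cfl):
    """One step of a minmod-limited MUSCL scheme for u_t + u_x = 0, periodic grid."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
    flux = u + 0.5 * (1.0 - cfl) * s                    # flux at each right face
    return u - cfl * (flux - np.roll(flux, 1))          # conservative update

def total_variation(u):
    return float(np.sum(np.abs(np.roll(u, -1) - u)))

u = np.where(np.arange(200) < 100, 1.0, 0.0)            # square wave, TV = 2
tv0 = total_variation(u)
for _ in range(150):
    u = tvd_step(u, cfl=0.5)
tv1 = total_variation(u)
```

The conservative flux form also preserves the total mass exactly, and the limiter suppresses the spurious oscillations an unlimited second-order scheme would produce at the jumps.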

  18. The development of an explicit thermochemical nonequilibrium algorithm and its application to compute three dimensional AFE flowfields

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    This study presents a three-dimensional explicit, finite-difference, shock-capturing numerical algorithm applied to viscous hypersonic flows in thermochemical nonequilibrium. The algorithm employs a two-temperature physical model. Equations governing the finite-rate chemical reactions are fully-coupled to the gas dynamic equations using a novel coupling technique. The new coupling method maintains stability in the explicit, finite-rate formulation while allowing relatively large global time steps. The code uses flux-vector splitting. Comparisons with experimental data and other numerical computations verify the accuracy of the present method. The code is used to compute the three-dimensional flowfield over the Aeroassist Flight Experiment (AFE) vehicle at one of its trajectory points.

  19. Rapid transfer alignment of an inertial navigation system using a marginal stochastic integration filter

    NASA Astrophysics Data System (ADS)

    Zhou, Dapeng; Guo, Lei

    2018-01-01

    This study aims to address the rapid transfer alignment (RTA) issue of an inertial navigation system with large misalignment angles. The strong nonlinearity and high dimensionality of the system model pose a significant challenge to the estimation of the misalignment angles. In this paper, a 15-dimensional nonlinear model for RTA has been exploited, and it is shown that the functions for the model description exhibit a conditionally linear substructure. Then, a modified stochastic integration filter (SIF) called marginal SIF (MSIF) is developed to incorporate into the nonlinear model, where the number of sample points is significantly reduced but the estimation accuracy of SIF is retained. Comparisons between the MSIF-based RTA and the previously well-known methodologies are carried out through numerical simulations and a van test. The results demonstrate that the newly proposed method has an obvious accuracy advantage over the extended Kalman filter, the unscented Kalman filter and the marginal unscented Kalman filter. Further, the MSIF achieves a comparable performance to SIF, but with a significantly lower computation load.

  20. Application of Dynamic Analysis in Semi-Analytical Finite Element Method

    PubMed Central

    Oeser, Markus

    2017-01-01

    Analyses of dynamic responses are significantly important for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm to apply the dynamic analysis to SAFEM was introduced in detail. Asphalt pavement models under moving loads were built in the SAFEM and commercial finite element software ABAQUS to verify the accuracy and efficiency of the SAFEM. The verification shows that the computational accuracy of SAFEM is high enough and its computational time is much shorter than ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, the SAFEM is feasible to reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administration in assessing the pavement’s state. PMID:28867813

  1. FPGA-Based Smart Sensor for Online Displacement Measurements Using a Heterodyne Interferometer

    PubMed Central

    Vera-Salas, Luis Alberto; Moreno-Tapia, Sandra Veronica; Garcia-Perez, Arturo; de Jesus Romero-Troncoso, Rene; Osornio-Rios, Roque Alfredo; Serroukh, Ibrahim; Cabal-Yepez, Eduardo

    2011-01-01

    The measurement of small displacements on the nanometric scale demands metrological systems of high accuracy and precision. In this context, interferometer-based displacement measurements have become the main tools used for traceable dimensional metrology. The different industrial applications in which small displacement measurements are employed require the use of online measurements, high-speed processes, and open-architecture control systems, as well as good adaptability to specific process conditions. The main contribution of this work is the development of a smart sensor for large displacement measurement based on phase measurement which achieves high accuracy and resolution, designed to be used with a commercial heterodyne interferometer. The system is based on a low-cost Field Programmable Gate Array (FPGA) allowing the integration of several functions in a single portable device. This system is optimal for high speed applications where online measurement is needed, and the reconfigurability feature allows the addition of different modules for error compensation, as might be required by a specific application. PMID:22164040
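The core computation, recovering displacement from the phase difference between the measurement and reference beat signals, can be sketched offline in NumPy. All signal parameters are invented, and a double-pass Michelson geometry (phase shift 4πd/λ) is assumed; the FPGA implements this phase extraction in real time.

```python
import numpy as np

wavelength = 632.8e-9       # HeNe laser, meters
f_beat = 2.0e6              # heterodyne beat frequency, Hz
fs = 50.0e6                 # ADC sample rate, Hz
d_true = 123.4e-9           # displacement to recover, meters

t = np.arange(4096) / fs
phi = 4.0 * np.pi * d_true / wavelength          # double-pass phase shift
ref = np.cos(2.0 * np.pi * f_beat * t)           # reference beat channel
meas = np.cos(2.0 * np.pi * f_beat * t + phi)    # measurement beat channel

# quadrature demodulation: project each channel onto exp(-i 2 pi f t), average,
# take the angle, and map the phase difference back to displacement
lo = np.exp(-2j * np.pi * f_beat * t)
phase_ref = np.angle(np.mean(ref * lo))
phase_meas = np.angle(np.mean(meas * lo))
d_est = (phase_meas - phase_ref) * wavelength / (4.0 * np.pi)
```

For displacements larger than half a fringe the phase must additionally be unwrapped over time, which is part of what the FPGA's continuous processing provides.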

  2. Robust continuous clustering

    PubMed Central

    Shah, Sohil Atul

    2017-01-01

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838

  3. LANDSAT-D Thematic Mapper image dimensionality reduction and geometric correction accuracy. [Walnut Creek Watershed, Texas

    NASA Technical Reports Server (NTRS)

    Ford, G. E. (Principal Investigator)

    1984-01-01

    A principal components transformation was applied to a Walnut Creek, Texas subscene to reduce the dimensionality of the multispectral sensor data. This transformation was also applied to a LANDSAT 3 MSS subscene of the same area acquired in a different season and year. Results of both procedures are tabulated and allow for comparisons between TM and MSS data. The TM correlation matrix shows that visible bands 1 to 3 exhibit a high degree of correlation, in the range 0.92 to 0.96. The correlation for bands 5 to 7 is 0.93. Band 4 is not highly correlated with any other band, with correlations in the range 0.13 to 0.52. The thermal band (6) is not highly correlated with the other bands, with correlations in the range 0.13 to 0.46. The MSS correlation matrix shows that bands 4 and 5 are highly correlated (0.96), as are bands 6 and 7 with a correlation of 0.92.
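The band-correlation and dimensionality-reduction analysis described above can be sketched with synthetic data: two highly correlated "visible" bands plus one independent band, standing in for the TM/MSS scenes (all values illustrative).

```python
import numpy as np

# Three synthetic "bands" over 10,000 pixels: two share a common signal
# (highly correlated), one is independent (illustrative stand-ins only).
rng = np.random.default_rng(0)
base = rng.normal(size=10_000)
bands = np.stack([
    base + 0.1 * rng.normal(size=10_000),   # band A
    base + 0.1 * rng.normal(size=10_000),   # band B, correlated with A
    rng.normal(size=10_000),                # band C, independent
])

corr = np.corrcoef(bands)                   # band-to-band correlation matrix

# A principal components transformation concentrates the shared variance:
evals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, descending
explained = evals / evals.sum()             # variance fraction per component
```

Because bands A and B carry nearly the same information, the first principal component absorbs most of the total variance, which is why such a transformation reduces dimensionality with little information loss.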

  4. Space shuttle main engine numerical modeling code modifications and analysis

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.

    1988-01-01

    The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).

  5. A three-dimensional parabolic equation model of sound propagation using higher-order operator splitting and Padé approximants.

    PubMed

    Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F

    2012-11-01

    An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.

  6. Straight velocity boundaries in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Latt, Jonas; Chopard, Bastien; Malaspinas, Orestis; Deville, Michel; Michler, Andreas

    2008-05-01

    Various ways of implementing boundary conditions for the numerical solution of the Navier-Stokes equations by a lattice Boltzmann method are discussed. Five commonly adopted approaches are reviewed, analyzed, and compared, including local and nonlocal methods. The discussion is restricted to velocity Dirichlet boundary conditions, and to straight on-lattice boundaries which are aligned with the horizontal and vertical lattice directions. The boundary conditions are first inspected analytically by applying systematically the results of a multiscale analysis to boundary nodes. This procedure makes it possible to compare boundary conditions on an equal footing, although they were originally derived from very different principles. It is concluded that all five boundary conditions exhibit second-order accuracy, consistent with the accuracy of the lattice Boltzmann method. The five methods are then compared numerically for accuracy and stability through benchmarks of two-dimensional and three-dimensional flows. None of the methods is found to be consistently superior to the others. Instead, the choice of the best boundary condition depends on the flow geometry and on the desired trade-off between accuracy and stability. From the findings of the benchmarks, the boundary conditions can be classified into two major groups. The first group comprises boundary conditions that preserve the information streaming from the bulk into boundary nodes and complete the missing information through closure relations. Boundary conditions in this group are found to be exceptionally accurate at low Reynolds number. Boundary conditions of the second group replace all variables on boundary nodes by new values. They exhibit generally much better numerical stability and are therefore better suited to high Reynolds number flows.

  7. Neural networks for dimensionality reduction of fluorescence spectra and prediction of drinking water disinfection by-products.

    PubMed

    Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C

    2018-06-01

    The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was compared with the common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as on prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble the fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, the dimensionality reduction approaches produced distinct results, dictating the need to consider the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracy, with improvements best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the application of neural networks to best utilize fluorescence EEM data for predicting organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
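A minimal sketch of the reduce-then-regress pipeline, using PCA via SVD as a linear stand-in for the dimensionality-reduction step; the synthetic "spectra" driven by a few latent components and the reactivity target are illustrative, not fluorescence EEM data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic high-dimensional "spectra": 200 samples x 500 channels driven by
# 3 latent components (illustrative stand-ins for fluorophore signatures).
latent = rng.random((200, 3))
spectra = latent @ rng.random((3, 500)) + 0.01 * rng.normal(size=(200, 500))
target = latent @ np.array([2.0, 0.5, -1.0])     # "reactivity" to be predicted

# Dimensionality reduction: PCA scores from an SVD of the centered data.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                           # first 3 principal components

# Regression on the reduced representation (least squares with intercept).
A = np.column_stack([scores, np.ones(len(scores))])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
pred = A @ coef
r2 = 1 - ((target - pred) ** 2).sum() / ((target - target.mean()) ** 2).sum()
```

Because the target here is exactly linear in the latent components, three principal components suffice; a nonlinear reducer such as an autoencoder becomes useful precisely when this linear assumption fails.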

  8. A THREE-DIMENSIONAL NUMERICAL SOLUTION FOR THE SHAPE OF A ROTATIONALLY DISTORTED POLYTROPE OF INDEX UNITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kong, Dali; Zhang, Keke; Schubert, Gerald

    2013-02-15

    We present a new three-dimensional numerical method for calculating the non-spherical shape and internal structure of a model of a rapidly rotating gaseous body with a polytropic index of unity. The calculation is based on a finite-element method and accounts for the full effects of rotation. After validating the numerical approach against the asymptotic solution of Chandrasekhar that is valid only for a slowly rotating gaseous body, we apply it to models of Jupiter and a rapidly rotating, highly flattened star (α Eridani). In the case of Jupiter, the two-dimensional distributions of density and pressure are determined via a hybrid inverse approach by adjusting an a priori unknown coefficient in the equation of state until the model shape matches the observed shape of Jupiter. After obtaining the two-dimensional distribution of density, we then compute the zonal gravity coefficients and the total mass from the non-spherical model that takes full account of rotation-induced shape change. Our non-spherical model with a polytropic index of unity is able to produce the known mass of Jupiter with about 4% accuracy and the zonal gravitational coefficient J2 of Jupiter with better than 2% accuracy, a reasonable result considering that there is only one parameter in the model. For α Eridani, we calculate its rotationally distorted shape and internal structure based on the observationally deduced rotation rate and size of the star by using a similar hybrid inverse approach. Our model of the star closely approximates the observed flattening.

  9. Eigenspace-based fuzzy c-means for sensing trending topics in Twitter

    NASA Astrophysics Data System (ADS)

    Muliawati, T.; Murfi, H.

    2017-07-01

    As information and communication technologies develop, information can increasingly be obtained through social media such as Twitter. The enormous number of internet users has triggered fast and large data flows, making manual analysis difficult or even impossible. Automated methods for data analysis are needed, one of which is topic detection and tracking. An alternative to latent Dirichlet allocation (LDA) is a soft clustering approach using Fuzzy C-Means (FCM). FCM accommodates the assumption that a document may consist of several topics. However, FCM works well on low-dimensional data but fails on high-dimensional data. Therefore, we propose an approach in which FCM operates on low-dimensional data obtained by reducing the original data with singular value decomposition (SVD). Our simulations show that this approach gives better accuracy in terms of topic recall than LDA for sensing trending topics in Twitter about an event.
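A minimal sketch of this pipeline, reducing the data with a truncated SVD and then applying fuzzy c-means; the FCM below is the textbook alternating update (fuzzifier m = 2), and the two well-separated synthetic "topic" clusters stand in for Twitter data.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Textbook fuzzy c-means: returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Two synthetic "topic" clusters in a 100-dimensional space (illustrative).
rng = np.random.default_rng(1)
docs = np.vstack([rng.normal(0, 0.5, (50, 100)),
                  rng.normal(3, 0.5, (50, 100))])

# Reduce with a truncated SVD, then cluster in the low-dimensional space.
U, s, Vt = np.linalg.svd(docs - docs.mean(axis=0), full_matrices=False)
reduced = U[:, :2] * s[:2]                       # rank-2 representation
centers, u = fuzzy_c_means(reduced, c=2)
labels = u.argmax(axis=1)
```

Running FCM on the 2-dimensional SVD scores rather than the raw 100-dimensional vectors is the essential point: the distance computations that FCM relies on behave far better after the reduction.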

  10. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that makes it possible to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
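The similarity-transformation idea can be illustrated on a discretized 1-D Hamiltonian: under x → x·e^{iθ} the kinetic term picks up a factor e^{-2iθ} and the potential is evaluated at the scaled coordinate. A standard sanity check is that bound-state eigenvalues stay (numerically) unchanged under the scaling; the harmonic potential and finite-difference grid below are illustrative, not the wavelet implementation of the paper.

```python
import numpy as np

# Complex-scaled 1-D Hamiltonian H(theta) = -e^{-2i*theta}/2 d^2/dx^2
# + V(x e^{i*theta}), with V(x) = x^2/2 (harmonic oscillator, hbar = m = 1).
theta = 0.1
n, L = 400, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Central-difference second-derivative matrix.
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

H = -0.5 * np.exp(-2j * theta) * D2 + np.diag(0.5 * (x * np.exp(1j * theta))**2)

# Bound-state energies (n + 1/2) should be invariant under the scaling.
E = np.linalg.eigvals(H)
E0 = E[np.argmin(E.real)]        # lowest eigenvalue by real part
```

For a potential with true resonances, the same construction rotates the continuum into the lower complex plane and exposes the resonance eigenvalues; the harmonic case merely verifies that the transformation leaves bound states in place.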

  11. Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.

  12. Elitist Binary Wolf Search Algorithm for Heuristic Feature Selection in High-Dimensional Bioinformatics Datasets.

    PubMed

    Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L

    2017-06-28

    Because of the high-dimensional characteristics of such datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy articulated by Charles Darwin: 'It is not the strongest of the species that survives, but the most adaptable'. This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions, enhancing the effectiveness of the search, while the binary strategy recasts the feature selection problem as an analogous function optimisation problem. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
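The wrapper idea, scoring candidate binary feature masks by the training accuracy of a classifier, can be sketched with a generic stochastic bit-flip search. This is only a stand-in for the elitist binary wolf search (which adds memory and swarm heuristics), the nearest-centroid classifier replaces the extreme learning machine, and the dataset and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy high-dimensional dataset: 40 features, only the first 3 informative.
n = 200
X = rng.normal(size=(n, 40))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Wrapper fitness: training accuracy of a nearest-centroid classifier
    restricted to the selected features (stand-in for the ELM classifier)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean()

# Minimal elitist search over binary masks: keep the best mask found so far,
# propose single-bit flips, accept non-worsening moves.
best_mask = rng.random(40) < 0.5
best_fit = fitness(best_mask)
for _ in range(300):
    cand = best_mask.copy()
    flip = rng.integers(40)
    cand[flip] = ~cand[flip]
    f = fitness(cand)
    if f >= best_fit:
        best_mask, best_fit = cand, f
```

The search tends to drop the noise features (each removal usually raises the wrapper accuracy), illustrating why a binary metaheuristic plus a fast classifier is an effective recipe for high-dimensional feature selection.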

  13. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
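The basic building block being modified here, a single stochastic ensemble Kalman filter analysis step, can be sketched in a few lines of numpy (without the localization and GMM extensions of the paper; the linear observation operator and toy parameters are illustrative).

```python
import numpy as np

def enkf_update(ensemble, H, y_obs, R_std, rng):
    """One stochastic ensemble Kalman filter analysis step.

    ensemble : (n_ens, n_dim) prior parameter samples
    H        : (n_obs, n_dim) linear observation operator
    y_obs    : (n_obs,) observed data
    R_std    : observation-error standard deviation
    """
    n_ens = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)            # ensemble anomalies
    P = A.T @ A / (n_ens - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_std**2 * np.eye(len(y_obs)))
    # Perturbed observations keep the analysis-ensemble spread consistent.
    y_pert = y_obs + R_std * rng.normal(size=(n_ens, len(y_obs)))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0, 0.5])
H = np.eye(3)[:2]                                   # observe first two components
y = H @ truth + 0.1 * rng.normal(size=2)

ens = rng.normal(0.0, 1.0, size=(200, 3))           # diffuse Gaussian prior
ens = enkf_update(ens, H, y, 0.1, rng)
```

The observed components are pulled toward the data while the unobserved third component is essentially untouched; localization and clustering enter when this single global Gaussian update is a poor model of the prior.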

  14. The Evaluation of GPS techniques for UAV-based Photogrammetry in Urban Area

    NASA Astrophysics Data System (ADS)

    Yeh, M. L.; Chou, Y. T.; Yang, L. S.

    2016-06-01

    The efficiency and high mobility of Unmanned Aerial Vehicles (UAVs) have made them essential to aerial-photography-assisted survey and mapping, especially for urban land use and land cover, which change frequently and require UAVs to capture up-to-date terrain data. This study collected image data and three-dimensional ground control points in the Taichung city area with a UAV, a general-purpose camera, and Real-Time Kinematic (RTK) positioning with centimetre-level accuracy. The study area is an ecological park with low topography that serves the city as a detention basin. A digital surface model was built with Agisoft PhotoScan, together with high-resolution orthophotos. Two conditions, with and without ground control points, were compared for the accuracy of the resulting digital surface models. According to the check-point deviation estimates, the model without ground control points has an average two-dimensional error of up to 40 centimetres and an altitude error within one metre. The GCP-free RTK-airborne approach produces centimetre-level accuracy with low risk to the UAS operators. With ground control points, the accuracy of the x, y, and z coordinates improved by 54.62%, 49.07%, and 87.74%, respectively, with altitude accuracy improving the most.

  15. Automatic detection of a prefrontal cortical response to emotionally rated music using multi-channel near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Moghimi, Saba; Kushki, Azadeh; Power, Sarah; Guerguerian, Anne Marie; Chau, Tom

    2012-04-01

    Emotional responses can be induced by external sensory stimuli. For severely disabled nonverbal individuals who have no means of communication, the decoding of emotion may offer insight into an individual’s state of mind and his/her response to events taking place in the surrounding environment. Near-infrared spectroscopy (NIRS) provides an opportunity for bed-side monitoring of emotions via measurement of hemodynamic activity in the prefrontal cortex, a brain region known to be involved in emotion processing. In this paper, prefrontal cortex activity of ten able-bodied participants was monitored using NIRS as they listened to 78 music excerpts with different emotional content and a control acoustic stimulus consisting of Brown noise. The participants rated their emotional state after listening to each excerpt along the dimensions of valence (positive versus negative) and arousal (intense versus neutral). These ratings were used to label the NIRS trial data. Using a linear discriminant analysis-based classifier and a two-dimensional time-domain feature set, trials with positive and negative emotions were discriminated with an average accuracy of 71.94% ± 8.19%. Trials with audible Brown noise representing a neutral response were differentiated from high arousal trials with an average accuracy of 71.93% ± 9.09% using a two-dimensional feature set. In nine out of the ten participants, response to the neutral Brown noise was differentiated from high arousal trials with accuracies exceeding chance level, and positive versus negative emotional differentiation accuracies exceeded the chance level in seven out of the ten participants. These results illustrate that NIRS recordings of the prefrontal cortex during presentation of music with emotional content can be automatically decoded in terms of both valence and arousal, encouraging future investigation of NIRS-based emotion detection in individuals with severe disabilities.
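The classification step on a two-dimensional feature set can be sketched with a small pooled-covariance linear discriminant, implemented directly in numpy; the synthetic "valence" features standing in for the NIRS time-domain features, and all numbers, are illustrative.

```python
import numpy as np

def lda_fit(X, y):
    """Fit a two-class linear discriminant with a pooled covariance."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
    w = np.linalg.solve(S, m1 - m0)      # discriminant direction
    b = -0.5 * w @ (m0 + m1)             # threshold, assuming equal priors
    return w, b

def lda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Synthetic two-dimensional trial features for two "valence" classes.
rng = np.random.default_rng(0)
pos = rng.normal([0.6, 0.2], 0.3, size=(60, 2))    # "positive" trials
neg = rng.normal([0.2, -0.1], 0.3, size=(60, 2))   # "negative" trials
X = np.vstack([pos, neg])
y = np.array([1] * 60 + [0] * 60)

w, b = lda_fit(X, y)
acc = (lda_predict(X, w, b) == y).mean()           # training accuracy, toy data
```

With the chosen overlap between the two synthetic classes, the accuracy lands in the 70-85% range, comparable in spirit to the discrimination rates reported in the abstract (a proper evaluation would of course use held-out trials).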

  16. Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.

    PubMed

    Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet

    2018-05-01

    Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend toward miniaturized blood analysis tools for point-of-need testing, intended to accelerate turnaround times and move routine blood testing away from centralized facilities, on the rise, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitutes the final crucial step ensuring appropriate accuracy of the system. In this work, we implement reference holographic images of single white blood cells in suspension in order to establish an accurate ground truth to increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...

    2017-03-07

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
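The key identity exploited by the canonical tensor format, that a D-dimensional integral of a rank-R sum of separable terms reduces to R products of one-dimensional quadratures, can be checked numerically with Gauss-Hermite nodes. The toy rank-2 "PES" below is illustrative, not a molecular surface, and the ALS fitting step is omitted.

```python
import numpy as np

# Gauss-Hermite rule: sum(w * f(x)) ~ integral of f(x) * exp(-x^2) dx.
x, w = np.polynomial.hermite.hermgauss(10)

# Canonical rank-2 format of a 3-D function (toy stand-in for a PES):
# V(x,y,z) = x^2 * y^2 * z^2 + x^2
factors = [
    [x**2, x**2, x**2],                        # rank term 1: one 1-D factor per dim
    [x**2, np.ones_like(x), np.ones_like(x)],  # rank term 2
]

# Low-rank evaluation: the 3-D integral collapses into sums of products of
# 1-D quadratures (D*R one-dimensional sums instead of an n^3 grid).
low_rank = sum(np.prod([w @ f for f in term]) for term in factors)

# Brute-force check on the full n^3 tensor grid.
full = sum(np.einsum('i,j,k->ijk', *term) for term in factors)
brute = np.einsum('ijk,i,j,k->', full, w, w, w)
```

Since ∫x²e^{-x²}dx = √π/2 and ∫e^{-x²}dx = √π, the exact value is (√π/2)³ + (√π/2)·π, and both evaluations reproduce it; the cost advantage grows as n^D versus D·R·n.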

  19. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
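The "local surface smoothness on planar surfaces" metric used above can be sketched as an SVD plane fit to a point-cloud patch, with the residual RMS as the smoothness score; the synthetic patch and its 2 cm noise level are illustrative, not the Velodyne data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "planar surface" patch: 500 points on a 5 m x 5 m plane with
# 2 cm Gaussian range noise (illustrative values).
pts = np.column_stack([
    rng.uniform(0, 5, 500),
    rng.uniform(0, 5, 500),
    0.02 * rng.normal(size=500),
])

# Fit a plane by SVD: the singular vector with the smallest singular value
# of the centered points is the plane normal.
c = pts.mean(axis=0)
_, _, Vt = np.linalg.svd(pts - c)
normal = Vt[-1]

residuals = (pts - c) @ normal          # signed point-to-plane distances
rms = np.sqrt((residuals ** 2).mean())  # smoothness score (metres)
```

Comparing this RMS before and after a calibration step (or against a terrestrial-scanner reference) quantifies how much the navigation-error minimization improves the point cloud.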

  20. Acquiring basic and advanced laparoscopic skills in novices using two-dimensional (2D), three-dimensional (3D) and ultra-high definition (4K) vision systems: A randomized control study.

    PubMed

    Abdelrahman, M; Belramman, A; Salem, R; Patel, B

    2018-05-01

    To compare the performance of novices in laparoscopic peg transfer and intra-corporeal suturing tasks in two-dimensional (2D), three-dimensional (3D) and ultra-high definition (4K) vision systems. Twenty-four novices were randomly assigned to 2D, 3D and 4K groups, eight in each group. All participants performed the two tasks on a box trainer until reaching proficiency. Their performance was assessed based on completion time, number of errors and number of repetitions using the validated FLS proficiency criteria. Eight candidates in each group completed the training curriculum. The mean performance time for the 2D group was 558.3 min, more than that of the 3D and 4K groups at 316.7 and 310.4 min respectively (P < 0.0001). The mean number of repetitions was lower for the 3D and 4K groups versus the 2D group: 125.9 and 127.4 respectively versus 152.1 (P < 0.0001). The mean number of errors was lower for the 4K group versus the 3D and 2D groups: 1.2 versus 26.1 and 50.2 respectively (P < 0.0001). The 4K vision system improved accuracy in acquiring laparoscopic skills for novices in complex tasks, reflected in a significant reduction in the number of errors compared to the 3D and 2D vision systems. The 3D and 4K vision systems significantly improved speed and accuracy compared to the 2D vision system, based on shorter performance times, fewer errors and fewer repetitions. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
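The two-stage logic, run the cheap surrogate over all Monte Carlo samples and re-evaluate only those near the failure boundary with the expensive model, can be sketched on a 1-D toy problem. The simulator, the degree-9 polynomial surrogate (standing in for the polynomial chaos expansion), the threshold and the margin are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_model(x):                     # stand-in for a CPU-expensive simulator
    return np.exp(0.5 * x)

# Small failure probability: P[ true_model(x) > 4 ] for x ~ N(0, 1).
thresh = 4.0
N = 100_000
samples = rng.normal(size=N)

# Build a cheap surrogate from a small design (degree-9 polynomial fit).
design = np.linspace(-6.0, 6.0, 80)
coeffs = np.polyfit(design, true_model(design), 9)

# Stage 1: surrogate MC; flag samples whose surrogate response falls close
# to the failure boundary (margin chosen to exceed the surrogate error).
g_hat = np.polyval(coeffs, samples)
suspicious = np.abs(g_hat - thresh) < 0.05

# Stage 2: re-evaluate only the flagged samples with the expensive model,
# correcting the surrogate's misclassifications near the boundary.
g_final = np.where(suspicious, true_model(samples), g_hat)
p_fail = (g_final > thresh).mean()

p_exact = (true_model(samples) > thresh).mean()  # reference (normally unaffordable)
```

Only `suspicious.sum()` expensive evaluations are needed in stage 2 (a tiny fraction of N), yet the estimate matches the full Monte Carlo reference, because far from the boundary the surrogate's classification is already correct.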

  2. Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow

    DTIC Science & Technology

    2013-04-02

    An adaptive discontinuous evolution Galerkin method for dry atmospheric convection is presented. Comparisons with a standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the adaptive discontinuous evolution Galerkin method. Instead of a standard one-dimensional approximate Riemann solver, the flux integration within the discontinuous Galerkin method is now realized by an evolution Galerkin operator.

  3. A new optical head tracing reflected light for nanoprofiler

    NASA Astrophysics Data System (ADS)

    Okuda, K.; Okita, K.; Tokuta, Y.; Kitayama, T.; Nakano, M.; Kudo, R.; Yamamura, K.; Endo, K.

    2014-09-01

    High accuracy optical elements are applied in various fields. For example, ultraprecise aspherical mirrors are necessary for developing third-generation synchrotron radiation and XFEL (X-ray Free Electron Laser) sources. Making such high accuracy optical elements requires measuring aspherical mirrors with high accuracy, but no existing measurement method meets these demands simultaneously. We therefore developed a nanoprofiler that can directly measure arbitrary surface figures with high accuracy. The nanoprofiler obtains the normal vector and the coordinates of a measurement point using a laser and a QPD (Quadrant Photo Diode) as the detector; the three-dimensional figure is then calculated from the normal vectors and their coordinates. To measure the figure, the nanoprofiler numerically controls its five motion axes so that the reflected light enters the center of the QPD. The control is based on the sample's design formula. We measured a concave spherical mirror with a radius of curvature of 400 mm by the deflection method, which calculates the figure error from the QPD output, and compared the results with those from a Fizeau interferometer. The profiles were consistent within the range of system error. However, the deflection method cannot neglect the error caused by the spatial irregularity of the QPD's sensitivity. To reduce this error, we devised the zero method, which moves the QPD with a piezoelectric motion stage and calculates the figure error from the displacement.

  4. Precision and accuracy of suggested maxillary and mandibular landmarks with cone-beam computed tomography for regional superimpositions: An in vitro study.

    PubMed

    Lemieux, Genevieve; Carey, Jason P; Flores-Mir, Carlos; Secanell, Marc; Hart, Adam; Lagravère, Manuel O

    2016-01-01

    Our objective was to identify and evaluate the accuracy and precision (intrarater and interrater reliabilities) of various anatomic landmarks for use in 3-dimensional maxillary and mandibular regional superimpositions. We used cone-beam computed tomography reconstructions of 10 human dried skulls to locate 10 landmarks in the maxilla and the mandible. Precision and accuracy were assessed with intrarater and interrater readings. Three examiners located these landmarks in the cone-beam computed tomography images 3 times with readings scheduled at 1-week intervals. Three-dimensional coordinates were determined (x, y, and z coordinates), and the intraclass correlation coefficient was computed to determine intrarater and interrater reliabilities, as well as the mean error difference and confidence intervals for each measurement. Bilateral mental foramina, bilateral infraorbital foramina, anterior nasal spine, incisive canal, and nasion showed the highest precision and accuracy in both intrarater and interrater reliabilities. Subspinale and bilateral lingulae had the lowest precision and accuracy in both intrarater and interrater reliabilities. When choosing the most accurate and precise landmarks for 3-dimensional cephalometric analysis or plane-derived maxillary and mandibular superimpositions, bilateral mental and infraorbital foramina, landmarks in the anterior region of the maxilla, and nasion appeared to be the best options of the analyzed landmarks. Caution is needed when using subspinale and bilateral lingulae because of their higher mean errors in location. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  5. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
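
    As a hedged illustration of the underlying idea (not the authors' implementation), a first-order cut-HDMR surrogate keeps only the zeroth- and first-order component functions around a reference "cut" point, so the number of function evaluations grows linearly with the number of variables; the function names and test function below are ours:

```python
import numpy as np

def first_order_hdmr(f, x_ref):
    """Build a first-order cut-HDMR surrogate of f around the cut point x_ref.

    f0 = f(x_ref); f_i(x_i) = f(x_ref with component i replaced by x_i) - f0.
    The surrogate f0 + sum_i f_i(x_i) is exact when f has no interactions.
    """
    x_ref = np.asarray(x_ref, dtype=float)
    f0 = f(x_ref)

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        total = f0
        for i in range(x_ref.size):
            xi = x_ref.copy()
            xi[i] = x[i]                 # vary one coordinate at a time
            total += f(xi) - f0          # first-order component f_i(x_i)
        return total

    return surrogate

# Additive test function (no variable interactions): surrogate is exact.
f = lambda x: x[0]**2 + 3.0 * np.sin(x[1]) - 2.0 * x[2]
approx = first_order_hdmr(f, x_ref=[0.0, 0.0, 0.0])
x = np.array([0.5, 1.2, -0.3])
print(abs(approx(x) - f(x)))  # ~0 for an additive function
```

    For a function with strong interaction terms the first-order surrogate would no longer be exact, which is why the approach relies on the weak-correlation assumption stated above.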

  6. A Special Investigation to Develop a General Method for Three-dimensional Photoelastic Stress Analysis

    NASA Technical Reports Server (NTRS)

    Frocht, M. M.; Guernsey, R., Jr.

    1953-01-01

    The method of strain measurement after annealing is reviewed and found to be satisfactory for the materials available in this country. A new general method is described for the photoelastic determination of the principal stresses at any point of a general body subjected to arbitrary load. The method has been applied to a sphere subjected to diametrical compressive loads. The results show possibilities of high accuracy.

  7. CNC Machining Of The Complex Copper Electrodes

    NASA Astrophysics Data System (ADS)

    Popan, Ioan Alexandru; Balc, Nicolae; Popan, Alina

    2015-07-01

    This paper presents the machining process for complex copper electrodes. Machining complex shapes in copper is difficult because the material is soft and sticky. This research presents the main steps for producing these copper electrodes with high dimensional accuracy and good surface quality. Special tooling solutions are required for this machining process, and optimal process parameters have been found for the accurate CNC equipment, using smart CAD/CAM software.

  8. Qualitative and quantitative three-dimensional accuracy of a single tooth captured by elastomeric impression materials: an in vitro study.

    PubMed

    Schaefer, Oliver; Schmidt, Monika; Goebel, Roland; Kuepper, Harald

    2012-09-01

    The accuracy of impressions has been described in 1 or 2 dimensions, whereas it is most desirable to evaluate the accuracy of impressions spatially, in 3 dimensions. The purpose of this study was to demonstrate the accuracy and reproducibility of a 3-dimensional (3-D) approach to assessing impression precision and to quantitatively compare the occlusal accuracy of gypsum dies made with different impression materials. By using an aluminum replica of a maxillary molar, single-step dual viscosity impressions were made with 1 polyether/vinyl polysiloxane hybrid material (Identium), 1 vinyl polysiloxane (Panasil), and 1 polyether (Impregum) (n=5). Corresponding dies were made of Type IV gypsum and were optically digitized and aligned to the virtual reference of the aluminum tooth. Accuracy was analyzed by computing mean quadratic deviations between the virtual reference and the gypsum dies, while deviations of the dies among one another determined the reproducibility of the method. The virtual reference was adapted to create 15 occlusal contact points. The percentage of contact points deviating within a ±10 µm tolerance limit (PDP(10) = Percentage of Deviating Points within ±10 µm Tolerance) was set as the index for assessing occlusal accuracy. Visual results for the difference from the reference tooth were displayed with colors, whereas mean deviation values as well as mean PDP(10) differences were analyzed with a 1-way ANOVA and Scheffé post hoc comparisons (α=.05). Objective characterization of accuracy showed smooth axial surfaces to be undersized, whereas occlusal surfaces were accurate or enlarged when compared to the original tooth. The accuracy of the gypsum replicas ranged between 3 and 6 µm, while reproducibility results varied from 2 to 4 µm. Mean (SD) PDP(10)-values were: Panasil 91% (±11), Identium 77% (±4) and Impregum 29% (±3). One-way ANOVA detected significant differences among the tested impression materials (P<.001). 
The accuracy and reproducibility of impressions were determined by 3-D analysis. Results were presented as color images and the newly developed PDP(10)-index was successfully used to quantify spatial dimensions for complex occlusal anatomy. Impression materials with high PDP(10)-values were shown to reproduce occlusal dimensions the most accurately. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
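
    The PDP(10) index above is straightforward to compute from per-point deviations; a minimal sketch with made-up numbers (the variable names are ours, not the study's):

```python
import numpy as np

def pdp(deviations_um, tol_um=10.0):
    """Percentage of points whose deviation lies within +/- tol_um micrometers."""
    d = np.asarray(deviations_um, dtype=float)
    return 100.0 * np.mean(np.abs(d) <= tol_um)

# Example: 15 occlusal contact-point deviations in micrometers (invented data).
devs = [2.0, -4.5, 9.9, 11.2, -15.0, 3.3, 0.0, -9.0,
        6.1, 7.7, -2.2, 10.0, 12.5, -1.1, 5.0]
print(pdp(devs))  # 80.0 -> 12 of the 15 points fall within the tolerance
```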

  9. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    NASA Astrophysics Data System (ADS)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and estimate the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised nonlinear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from the NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
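
    As an illustrative sketch of one NLDR technique in this family (a toy Isomap, not necessarily the method used in the study), geodesic distances over a k-nearest-neighbour graph are embedded by classical MDS; all names are ours:

```python
import numpy as np

def toy_isomap(X, n_neighbors=5, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    # Pairwise Euclidean distances.
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Symmetric kNN graph; non-edges get infinite weight.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]
    # Geodesic (graph shortest-path) distances via Floyd-Warshall.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Points along a half-circle arc in 2D: the 1D embedding should roughly
# recover the intrinsic coordinate along the arc.
t = np.linspace(0.0, np.pi, 40)
X = np.column_stack([np.cos(t), np.sin(t)])
Y = toy_isomap(X, n_neighbors=4, n_components=1)
c = abs(np.corrcoef(t, Y[:, 0])[0, 1])  # embedding should track the arc coordinate
print(Y.shape)  # (40, 1)
```

    Real multiparametric MRI voxels would take the place of the synthetic arc points, with each voxel's T1WI/T2WI/DWI/PWI values forming the high-dimensional input vector.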

  10. Advantages of multigrid methods for certifying the accuracy of PDE modeling

    NASA Technical Reports Server (NTRS)

    Forester, C. K.

    1981-01-01

    Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDE) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three dimensional steady state full potential and the two dimensional steady Navier-Stokes equations using fixed grid methods (FG). The advantages of the Full Approximation Storage (FAS) scheme of the multigrid technique of A. Brandt compared with the conventional certification process of modeling PDE are illustrated in one dimension with the transformed potential equation. Inferences are drawn for how MG will improve the certification process of the numerical modeling of two and three dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.

  11. Localization and tracking of moving objects in two-dimensional space by echolocation.

    PubMed

    Matsuo, Ikuo

    2013-02-01

    Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates the ranges of multiple static objects using linear frequency-modulated (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy depended not only on the SNR but also on the Doppler shift induced by the object's motion. However, it was unclear whether this model could estimate a moving object's range at each timepoint. In this study, echoes were measured from a rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
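
    The delay estimation at the heart of such models can be illustrated with a plain matched filter (cross-correlation with the emitted LFM template); this noise-free sketch is ours, not the paper's Gaussian-chirplet estimator, and the sample rate and sweep are hypothetical:

```python
import numpy as np

fs = 400_000.0                        # sample rate (Hz), hypothetical
t = np.arange(0, 0.002, 1.0 / fs)     # 2 ms emission
# Linear frequency-modulated sweep from 80 kHz down to 40 kHz.
f0, f1 = 80_000.0, 40_000.0
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2))

true_delay = 120                      # delay of the echo, in samples
echo = np.zeros(chirp.size + 500)
echo[true_delay:true_delay + chirp.size] += 0.3 * chirp  # attenuated copy

# Matched filter: cross-correlate the received echo with the emitted template;
# the correlation peak marks the round-trip delay.
corr = np.correlate(echo, chirp, mode="full")
est_delay = int(np.argmax(corr)) - (chirp.size - 1)
print(est_delay)  # 120
```

    Converting the sample delay to time (delay / fs) and then to range via the speed of sound gives the object distance; sub-sample accuracy would require interpolation around the correlation peak.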

  12. Sensor assembly method using silicon interposer with trenches for three-dimensional binocular range sensors

    NASA Astrophysics Data System (ADS)

    Nakajima, Kazuhiro; Yamamoto, Yuji; Arima, Yutaka

    2018-04-01

    To easily assemble a three-dimensional binocular range sensor, we devised an alignment method for two image sensors using a silicon interposer with trenches. The trenches were formed using deep reactive ion etching (RIE) equipment. We produced a three-dimensional (3D) range sensor using the method and experimentally confirmed that sufficient alignment accuracy was realized. It was confirmed that the alignment accuracy of the two image sensors when using the proposed method is more than twice that of the alignment assembly method on a conventional board. In addition, as a result of evaluating the deterioration of the detection performance caused by the alignment accuracy, it was confirmed that the vertical deviation between the corresponding pixels in the two image sensors is substantially proportional to the decrease in detection performance. Therefore, we confirmed that the proposed method can realize more than twice the detection performance of the conventional method. Through these evaluations, the effectiveness of the 3D binocular range sensor aligned by the silicon interposer with the trenches was confirmed.

  13. Beta Testing of CFD Code for the Analysis of Combustion Systems

    NASA Technical Reports Server (NTRS)

    Yee, Emma; Wey, Thomas

    2015-01-01

    A preliminary version of OpenNCC was tested to assess its accuracy in generating steady-state temperature fields for combustion systems at atmospheric conditions using three-dimensional tetrahedral meshes. Meshes were generated from a CAD model of a single-element lean-direct injection combustor, and the latest version of OpenNCC was used to calculate combustor temperature fields. OpenNCC was shown to be capable of generating sustainable reacting flames using a tetrahedral mesh, and the results were compared with experiment. While the nonreacting flow results closely matched the experimental data, a significant discrepancy was present in the reacting flow results. Wide air circulation regions with high velocities in the model appeared to create inaccurately high temperature fields, whereas low recirculation velocities produced low temperature profiles. These observations will aid future modification of OpenNCC reacting flow input parameters to improve the accuracy of the calculated temperature fields.

  14. Modification of the Douglas Neumann program to improve the efficiency of predicting component interference and high lift characteristics

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Grose, G. G.

    1978-01-01

    The Douglas Neumann method for low-speed potential flow on arbitrary three-dimensional lifting bodies was modified by substituting the combined source and doublet surface paneling based on Green's identity for the original source panels. Numerical studies show improved accuracy and stability for thin lifting surfaces, permitting reduced panel number for high-lift devices and supercritical airfoil sections. The accuracy of flow in concave corners is improved. A method of airfoil section design for a given pressure distribution, based on Green's identity, was demonstrated. The program uses panels on the body surface with constant source strength and parabolic distribution of doublet strength, and a doublet sheet on the wake. The program is written for the CDC CYBER 175 computer. Results of calculations are presented for isolated bodies, wings, wing-body combinations, and internal flow.

  15. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua

    2018-04-01

    Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
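
    The core N-step least-squares phase retrieval that such methods build on can be sketched as follows (a generic version, not the authors' motion-compensation algorithm; the names are ours):

```python
import numpy as np

def n_step_phase(frames):
    """Recover the wrapped phase from N equally phase-shifted fringe images.

    Model: frames[n] = A + B*cos(phi + d_n), d_n = 2*pi*n/N. The least-squares
    solution is phi = atan2(-sum I_n*sin(d_n), sum I_n*cos(d_n)).
    """
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    delta = 2 * np.pi * np.arange(N) / N
    num = -(frames * np.sin(delta)[:, None]).sum(axis=0)
    den = (frames * np.cos(delta)[:, None]).sum(axis=0)
    return np.arctan2(num, den)

# Synthetic 4-step example with a known phase ramp.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
N = 4
frames = [5.0 + 2.0 * np.cos(phi_true + 2 * np.pi * n / N) for n in range(N)]
phi = n_step_phase(frames)
print(np.max(np.abs(phi - phi_true)) < 1e-9)  # True
```

    Object motion between frames perturbs the effective phase shifts d_n, which is exactly the error source the motion-compensated method above sets out to correct.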

  16. Development of a High Accuracy Angular Measurement System for Langley Research Center Hypersonic Wind Tunnel Facilities

    NASA Technical Reports Server (NTRS)

    Newman, Brett; Yu, Si-bok; Rhew, Ray D. (Technical Monitor)

    2003-01-01

    Modern experimental and test activities demand innovative and adaptable procedures to maximize data content and quality while working within severely constrained budgetary and facility resource environments. This report describes the development of a high accuracy angular measurement capability for NASA Langley Research Center hypersonic wind tunnel facilities to overcome these deficiencies. Specifically, utilization of micro-electro-mechanical sensors, including accelerometers and gyros, coupled with software-driven data acquisition hardware and integrated within a prototype measurement system, is considered. The development methodology addresses basic design requirements formulated from wind tunnel facility constraints and current operating procedures, as well as engineering and scientific test objectives. A description of the analytical framework governing the relationships between time-dependent multi-axis acceleration and angular rate sensor data and the desired three-dimensional Eulerian angular state of the test model is given. Calibration procedures for identifying and estimating critical parameters in the sensor hardware are also addressed.

  17. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection, with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and thus introduces a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting geometric deformations. The accuracy of the algorithm is high, with reliability better than 1 µm when evaluated on OCT images of the same gauge-block step-height reference. The method may be suitable for industrial application to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  18. Well-balanced high-order solver for blood flow in networks of vessels with variable properties.

    PubMed

    Müller, Lucas O; Toro, Eleuterio F

    2013-12-01

    We present a well-balanced, high-order non-linear numerical scheme for solving a hyperbolic system that models one-dimensional flow in blood vessels with variable mechanical and geometrical properties along their length. Using a suitable set of test problems with exact solution, we rigorously assess the performance of the scheme. In particular, we assess the well-balanced property and the effective order of accuracy through an empirical convergence rate study. Schemes of up to fifth order of accuracy in both space and time are implemented and assessed. The numerical methodology is then extended to realistic networks of elastic vessels and is validated against published state-of-the-art numerical solutions and experimental measurements. It is envisaged that the present scheme will constitute the building block for a closed, global model for the human circulation system involving arteries, veins, capillaries and cerebrospinal fluid. Copyright © 2013 John Wiley & Sons, Ltd.
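
    The empirical convergence-rate study mentioned above reduces to a simple formula: given errors e1, e2 on grids with spacings h1, h2, the observed order is p = log(e1/e2) / log(h1/h2). A generic sketch (not the authors' blood-flow solver):

```python
import math

def observed_order(e1, h1, e2, h2):
    """Observed order of accuracy from errors on two grid resolutions."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Demonstration with a second-order central difference for f'(x), f = sin.
def central_diff_error(h, x=1.0):
    approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    return abs(approx - math.cos(x))   # exact derivative is cos(x)

h1, h2 = 1e-2, 5e-3
p = observed_order(central_diff_error(h1), h1, central_diff_error(h2), h2)
print(round(p, 2))  # ~2.0, confirming the scheme's formal second order
```

    For the fifth-order schemes assessed in the paper, the same computation applied to the solver's errors on a problem with an exact solution should yield p close to 5.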

  19. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs on such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
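
    The four evaluation measures used in the comparison are the standard ones; as a quick reference sketch (the counts are invented):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Example: a filter flagging 80 of 100 spam mails, with 10 false alarms
# among 900 legitimate mails.
p, r, f1, acc = classification_metrics(tp=80, fp=10, fn=20, tn=890)
print(r, acc)  # 0.8 0.97
```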

  20. Three-Dimensional Navier-Stokes Calculations Using the Modified Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-lyan

    2007-01-01

    The space-time conservation element solution element (CESE) method is modified to address the robustness issues of high-aspect-ratio, viscous, near-wall meshes. In this new approach, the dependent variable gradients are evaluated using element edges and the corresponding neighboring solution elements while keeping the original flux integration procedure intact. As such, the excellent flux conservation property is retained and the new edge-based gradients evaluation significantly improves the robustness for high-aspect ratio meshes frequently encountered in three-dimensional, Navier-Stokes calculations. The order of accuracy of the proposed method is demonstrated for oblique acoustic wave propagation, shock-wave interaction, and hypersonic flows over a blunt body. The confirmed second-order convergence along with the enhanced robustness in handling hypersonic blunt body flow calculations makes the proposed approach a very competitive CFD framework for 3D Navier-Stokes simulations.

  1. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, together with a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997), and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and they require some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three-dimensional time-dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.

  2. Accuracy of clinical observations of push-off during gait after stroke.

    PubMed

    McGinley, Jennifer L; Morris, Meg E; Greenwood, Ken M; Goldie, Patricia A; Olney, Sandra J

    2006-06-01

    To determine the accuracy (criterion-related validity) of real-time clinical observations of push-off in gait after stroke. Criterion-related validity study of gait observations. Rehabilitation hospital in Australia. Eleven participants with stroke and 8 treating physical therapists. Not applicable. Pearson product-moment correlation between physical therapists' observations of push-off during gait and criterion measures of peak ankle power generation from a 3-dimensional motion analysis system. A high correlation was obtained between the observational ratings and the measurements of peak ankle power generation (Pearson r =.98). The standard error of estimation of ankle power generation was .32W/kg. Physical therapists can make accurate real-time clinical observations of push-off during gait following stroke.

  3. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. 
However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
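
    The LDA step described above projects the high-dimensional spectra onto a few discriminative directions; a minimal two-class Fisher discriminant in plain NumPy (illustrative only, not the study's pipeline; the synthetic "bands" and names are ours):

```python
import numpy as np

def fisher_lda_direction(X0, X1, reg=1e-6):
    """Two-class Fisher discriminant direction: w ~ Sw^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])      # regularise the within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Two synthetic "spectral" classes over 50 bands; the class difference is
# concentrated in band 7, which the discriminant should pick out.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 50))
X1 = rng.normal(0.0, 1.0, size=(200, 50))
X1[:, 7] += 4.0
w = fisher_lda_direction(X0, X1)
gap = (X1 @ w).mean() - (X0 @ w).mean()  # separation of projected class means
print(abs(w[7]) > 0.5, gap > 2.0)
```

    In the study's pipeline the projected bands then feed texture-feature extraction and a Random Forest classifier; here only the projection step is sketched.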

  4. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

    An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + aU_x = αU_xx. Accuracy is also verified on the nonlinear problem U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.

  5. Optimized stereo matching in binocular three-dimensional measurement system using structured light.

    PubMed

    Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong

    2014-09-10

    In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
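
    After matching, the triangulation step for a rectified binocular setup reduces to depth-from-disparity, Z = f·B/d; a minimal sketch with hypothetical calibration values (not the authors' system parameters):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth of matched points in a rectified stereo pair: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_mm / d

# Hypothetical calibration: 1200 px focal length, 60 mm baseline.
disp = np.array([12.0, 24.0, 48.0])          # matched-point disparities (px)
z = depth_from_disparity(disp, focal_px=1200.0, baseline_mm=60.0)
print(z)  # [6000. 3000. 1500.] mm
```

    Subpixel interpolation of the disparity, as mentioned above, directly refines d and hence the recovered depth.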

  6. Linear response approach to active Brownian particles in time-varying activity fields

    NASA Astrophysics Data System (ADS)

    Merlitz, Holger; Vuijk, Hidde D.; Brader, Joseph; Sharma, Abhinav; Sommer, Jens-Uwe

    2018-05-01

    In a theoretical and simulation study, active Brownian particles (ABPs) in three-dimensional bulk systems are exposed to time-varying sinusoidal activity waves that are running through the system. A linear response (Green-Kubo) formalism is applied to derive fully analytical expressions for the torque-free polarization profiles of non-interacting particles. The activity waves induce fluxes that strongly depend on the particle size and may be employed to de-mix mixtures of ABPs or to drive the particles into selected areas of the system. Three-dimensional Langevin dynamics simulations are carried out to verify the accuracy of the linear response formalism, which is shown to work best when the particles are small (i.e., highly Brownian) or operating at low activity levels.

  7. High-speed all-optical DNA local sequence alignment based on a three-dimensional artificial neural network.

    PubMed

    Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra

    2017-07-01

    This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. All-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, which is much faster than its electrical counterparts, such as the basic local alignment search tool.

  8. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    PubMed

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
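The core of such synthetic diffusion MRI data is a Monte Carlo random walk. A minimal free-diffusion version (no mesh) is sketched below, recovering the diffusivity from the Einstein relation ⟨r²⟩ = 6Dt; a mesh-based simulation as in the paper would add a reflection test against the triangle surfaces at every step, and the values of D and dt are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2.0e-9                      # diffusivity, m^2/s (assumed)
dt = 1.0e-5                     # time step, s (assumed)
n_steps, n_walkers = 500, 2000

# Unrestricted Gaussian random walk in 3-D.
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, n_walkers, 3))
displacement = steps.sum(axis=0)

# Einstein relation in 3-D: <r^2> = 6 D t
msd = (displacement ** 2).sum(axis=1).mean()
D_est = msd / (6.0 * n_steps * dt)
print(f"estimated D = {D_est:.3e} m^2/s")
```

With a few thousand walkers the estimate lands within a few percent of the input diffusivity, which is why the paper's experiments trade walker count against computation time.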

  9. Three-Dimensional Simulation of Traveling-Wave Tube Cold-Test Characteristics Using MAFIA

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Wilson, Jeffrey D.

    1995-01-01

    The three-dimensional simulation code MAFIA was used to compute the cold-test parameters - frequency-phase dispersion, beam on-axis interaction impedance, and attenuation - for two types of traveling-wave tube (TWT) slow-wave circuits. The potential for this electromagnetic computer modeling code to reduce the time and cost of TWT development is demonstrated by the high degree of accuracy achieved in calculating these parameters. Generalized input files were developed for ferruled coupled-cavity and TunneLadder slow-wave circuits. These files make it easy to model circuits of arbitrary dimensions. The utility of these files was tested by applying each to a specific TWT slow-wave circuit and comparing the results with experimental data. Excellent agreement was obtained.

  10. Gaussian Discriminant Analysis for Optimal Delineation of Mild Cognitive Impairment in Alzheimer's Disease.

    PubMed

    Fang, Chen; Li, Chunfei; Cabrerizo, Mercedes; Barreto, Armando; Andrian, Jean; Rishe, Naphtali; Loewenstein, David; Duara, Ranjan; Adjouadi, Malek

    2018-04-12

Over the past few years, several approaches have been proposed to assist in the early diagnosis of Alzheimer's disease (AD) and its prodromal stage of mild cognitive impairment (MCI). Using multimodal biomarkers for this high-dimensional classification problem, the widely used algorithms include Support Vector Machines (SVM), Sparse Representation-based classification (SRC), Deep Belief Networks (DBN) and Random Forest (RF). These algorithms continue to yield unsatisfactory performance for delineating the MCI participants from the cognitively normal control (CN) group. A novel Gaussian discriminant analysis-based algorithm is thus introduced to achieve a more effective and accurate classification performance than the aforementioned state-of-the-art algorithms. This study uses magnetic resonance imaging (MRI) data alone as input to two separate high-dimensional decision spaces that reflect the structural measures of the two brain hemispheres. The data used include 190 CN, 305 MCI and 133 AD subjects as part of the AD Big Data DREAM Challenge #1. Using 80% of the data for a 10-fold cross-validation, the proposed algorithm achieved an average F1 score of 95.89% and an accuracy of 96.54% for discriminating AD from CN; and more importantly, an average F1 score of 92.08% and an accuracy of 90.26% for discriminating MCI from CN. A true test was then performed on the remaining 20% held-out data. For discriminating MCI from CN, an accuracy of 80.61%, a sensitivity of 81.97% and a specificity of 78.38% were obtained. These results show significant improvement over existing algorithms for discriminating the subtle differences between MCI participants and the CN group.
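The decision rule behind Gaussian discriminant analysis can be sketched with a minimal quadratic classifier: fit a mean and covariance per class and pick the class with the highest Gaussian log-likelihood. This is a generic illustration on synthetic data, not the paper's hemispheric two-space algorithm:

```python
import numpy as np

class GaussianDiscriminant:
    """Minimal quadratic Gaussian discriminant classifier (illustrative)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = []
        for c in self.classes:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            prior = len(Xc) / len(X)
            self.params.append((mu, np.linalg.inv(cov),
                                np.log(np.linalg.det(cov)), np.log(prior)))
        return self

    def predict(self, X):
        scores = []
        for mu, cov_inv, logdet, logprior in self.params:
            d = X - mu
            # Gaussian log-likelihood up to a shared additive constant.
            scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, cov_inv, d)
                          - 0.5 * logdet + logprior)
        return self.classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
y = np.repeat([0, 1], 200)
acc = (GaussianDiscriminant().fit(X, y).predict(X) == y).mean()
print(acc)
```

The regularization term added to each covariance keeps the inverse well-conditioned, which matters in high-dimensional neuroimaging feature spaces where samples are scarce.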

  11. Fully automated analysis of four tobacco-specific N-nitrosamines in mainstream cigarette smoke using two-dimensional online solid phase extraction combined with liquid chromatography-tandem mass spectrometry.

    PubMed

    Zhang, Jie; Bai, Ruoshi; Yi, Xiaoli; Yang, Zhendong; Liu, Xingyu; Zhou, Jun; Liang, Wei

    2016-01-01

    A fully automated method for the detection of four tobacco-specific nitrosamines (TSNAs) in mainstream cigarette smoke (MSS) has been developed. The new developed method is based on two-dimensional online solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE/LC-MS/MS). The two dimensional SPE was performed in the method utilizing two cartridges with different extraction mechanisms to cleanup disturbances of different polarity to minimize sample matrix effects on each analyte. Chromatographic separation was achieved using a UPLC C18 reversed phase analytical column. Under the optimum online SPE/LC-MS/MS conditions, N'-nitrosonornicotine (NNN), N'-nitrosoanatabine (NAT), N'-nitrosoanabasine (NAB), and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) were baseline separated with good peak shapes. This method appears to be the most sensitive method yet reported for determination of TSNAs in mainstream cigarette smoke. The limits of quantification for NNN, NNK, NAT and NAB reached the levels of 6.0, 1.0, 3.0 and 0.6 pg/cig, respectively, which were well below the lowest levels of TSNAs in MSS of current commercial cigarettes. The accuracy of the measurement of four TSNAs was from 92.8 to 107.3%. The relative standard deviations of intra-and inter-day analysis were less than 5.4% and 7.5%, respectively. The main advantages of the method developed are fairly high sensitivity, selectivity and accuracy of results, minimum sample pre-treatment, full automation, and high throughput. As a part of the validation procedure, the developed method was applied to evaluate TSNAs yields for 27 top-selling commercial cigarettes in China. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. A memory-efficient staining algorithm in 3D seismic modelling and imaging

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofeng; Yang, Lu

    2017-08-01

The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward-propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process consumes a large amount of computer memory for wavefield storage, especially in large-scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered-grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high-quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.
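The memory-saving idea of reconstructing a wavefield by reverse propagation from its final snapshots can be demonstrated in one dimension: the leapfrog update is time-symmetric, so running the same stencil backwards from the last two snapshots reproduces the whole history. This sketch uses hard-wall boundaries and omits the paper's random-boundary and NPML details:

```python
import numpy as np

nx, nt = 200, 300
r2 = 0.25  # (c*dt/dx)^2, below the stability limit of 1

def step(u_older, u_cur):
    lap = np.zeros_like(u_cur)
    lap[1:-1] = u_cur[2:] - 2.0 * u_cur[1:-1] + u_cur[:-2]
    u_new = 2.0 * u_cur - u_older + r2 * lap
    u_new[0] = u_new[-1] = 0.0   # hard-wall (Dirichlet) boundaries
    return u_new

u0 = np.exp(-0.1 * (np.arange(nx) - nx // 2) ** 2)   # initial pulse
history = [u0.copy(), u0.copy()]                     # zero initial velocity
for _ in range(nt):
    history.append(step(history[-2], history[-1]))

# Reverse propagation using only the final two snapshots.
u_next, u_cur = history[-1], history[-2]
for n in range(nt, 0, -1):
    u_prev = step(u_next, u_cur)        # same stencil, run backwards
    u_next, u_cur = u_cur, u_prev
print(np.abs(u_cur - history[0]).max())  # reconstruction error near machine precision
```

Storing two snapshots instead of `nt` of them is exactly the trade-off that makes the 3-D case tractable.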

  13. All-Dimensional H2–CO Potential: Validation with Fully Quantum Second Virial Coefficients

    PubMed Central

    Garberoglio, Giovanni; Jankowski, Piotr; Szalewicz, Krzysztof; Harvey, Allan H.

    2017-01-01

    We use a new high-accuracy all-dimensional potential to compute the cross second virial coefficient B12(T) between molecular hydrogen and carbon monoxide. The path-integral method is used to fully account for quantum effects. Values are calculated from 10 K to 2000 K and the uncertainty of the potential is propagated into uncertainties of B12. Our calculated B12(T) are in excellent agreement with most of the limited experimental data available, but cover a much wider range of temperatures and have lower uncertainties. Similar to recently reported findings from scattering calculations, we find that the reduced-dimensionality potential obtained by averaging over the rovibrational motion of the monomers gives results that are a good approximation to those obtained when flexibility is fully taken into account. Also, the four-dimensional approximation with monomers taken at their vibrationally averaged bond lengths works well. This finding is important, since full-dimensional potentials are difficult to develop even for triatomic monomers and are not currently possible to obtain for larger molecules. Likewise, most types of accurate quantum mechanical calculations, e.g., spectral or scattering, are severely limited in the number of dimensions that can be handled. PMID:28178790

  14. All-dimensional H2-CO potential: Validation with fully quantum second virial coefficients.

    PubMed

    Garberoglio, Giovanni; Jankowski, Piotr; Szalewicz, Krzysztof; Harvey, Allan H

    2017-02-07

We use a new high-accuracy all-dimensional potential to compute the cross second virial coefficient B12(T) between molecular hydrogen and carbon monoxide. The path-integral method is used to fully account for quantum effects. Values are calculated from 10 K to 2000 K and the uncertainty of the potential is propagated into uncertainties of B12. Our calculated B12(T) are in excellent agreement with most of the limited experimental data available, but cover a much wider range of temperatures and have lower uncertainties. Similar to recently reported findings from scattering calculations, we find that the reduced-dimensionality potential obtained by averaging over the rovibrational motion of the monomers gives results that are a good approximation to those obtained when flexibility is fully taken into account. Also, the four-dimensional approximation with monomers taken at their vibrationally averaged bond lengths works well. This finding is important, since full-dimensional potentials are difficult to develop even for triatomic monomers and are not currently possible to obtain for larger molecules. Likewise, most types of accurate quantum mechanical calculations, e.g., spectral or scattering, are severely limited in the number of dimensions that can be handled.

  15. High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium

    NASA Astrophysics Data System (ADS)

    Barnett, Alex H.; Nelson, Bradley J.; Mahoney, J. Matthew

    2015-09-01

We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve, the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (Δ + E + x₂)u(x₁, x₂) = 0, where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10² nodes, with an effort that is independent of the frequency parameter E. By combining with a high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.
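The claim that roughly 10² trapezoid nodes give exponential accuracy holds for analytic integrands that decay rapidly, as on a steepest-descent contour. A toy demonstration on a plain Gaussian (not the paper's fundamental solution), whose integral over the real line is exactly √π:

```python
import numpy as np

# Truncated trapezoid rule on [-6, 6] with 100 nodes.
t = np.linspace(-6.0, 6.0, 100)
h = t[1] - t[0]
y = np.exp(-t * t)
val = h * (y.sum() - 0.5 * (y[0] + y[-1]))
print(abs(val - np.sqrt(np.pi)))   # error near machine epsilon
```

Both error sources shrink exponentially here: the truncation error is of order exp(-36) and the discretization error of order exp(-(π/h)²), which is why so few nodes suffice.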

  16. Development of a data-processing method based on Bayesian k-means clustering to discriminate aneugens and clastogens in a high-content micronucleus assay.

    PubMed

    Huang, Z H; Li, N; Rao, K F; Liu, C T; Huang, Y; Ma, M; Wang, Z J

    2018-03-01

Genotoxicants can be identified as aneugens or clastogens through a micronucleus (MN) assay. Current high-content screening-based MN assays usually discriminate an aneugen from a clastogen based on only one parameter, such as MN size, intensity, or morphology, which yields low accuracies (70-84%) because each of these parameters may contribute to the result. The development of an algorithm that can synthesize high-dimensional data into a single comparative result is therefore important. To improve the automation and accuracy of detection beyond the current single-parameter mode of action (MoA) analysis, the MN MoA signatures of 20 chemicals were systematically collected in this study to develop such an algorithm. The results showed very good agreement (93.58%) between prediction and reality, indicating that the proposed algorithm is a validated analytical platform for the rapid and objective acquisition of genotoxic MoA information.
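The clustering core of such a method can be sketched with plain Lloyd's k-means; the record's Bayesian variant additionally places priors on the cluster parameters, which this sketch omits, and the two "MoA signature" blobs below are entirely synthetic:

```python
import numpy as np

def kmeans(X, k, init_idx, iters=50):
    # Plain Lloyd's k-means: assign each point to its nearest center,
    # then recompute centers, and repeat.
    centers = X[list(init_idx)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1),
                           axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
labels, centers = kmeans(X, 2, init_idx=(0, 150))  # one seed point per blob
print(np.round(np.sort(centers.mean(axis=1)), 2))
```

The deterministic seeding (one point from each blob) is for reproducibility of the sketch; a real implementation would use k-means++ or the Bayesian prior to avoid bad initializations.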

  17. Study of high field side/low field side asymmetry in the electron temperature profile with electron cyclotron emission

    NASA Astrophysics Data System (ADS)

    Gugliada, V. R.; Austin, M. E.; Brookman, M. W.

    2017-10-01

    Electron cyclotron emission (ECE) provides high resolution measurements of electron temperature profiles (Te(R , t)) in tokamaks. Calibration accuracy of this data can be improved using a sawtooth averaging technique. This improved calibration will then be utilized to determine the symmetry of Te profiles by comparing low field side (LFS) and high field side (HFS) measurements. Although Te is considered constant on flux surfaces, cases have been observed in which there are pronounced asymmetries about the magnetic axis, particularly with increased pressure. Trends in LFS/HFS overlap are examined as functions of plasma pressure, MHD mode presence, heating techniques, and other discharge conditions. This research will provide information on the accuracy of the current two-dimensional mapping of flux surfaces in the tokamak. Findings can be used to generate higher quality EFITs and inform ECE calibration. Work supported in part by US DoE under the Science Undergraduate Laboratory Internship (SULI) program and under DE-FC02-04ER549698.

  18. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements with a high-resolution digital camera involves processing large volumes of data and is often time consuming. To speed up the ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction by more accurate techniques.
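The projection idea can be sketched as computing ZNCC on the row and column sums of a window instead of its full 2-D intensity field, shrinking an N×N correlation to two length-N ones. The equal weighting of the two projections below is an assumption; the paper's exact combination may differ:

```python
import numpy as np

def zncc(a, b):
    # Zero-normalized cross-correlation of two equal-size arrays.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def zncc_projections(a, b):
    # Cheap screening measure: ZNCC on 1-D projections (column and row
    # sums) instead of the full 2-D window.
    return 0.5 * (zncc(a.sum(axis=0), b.sum(axis=0))
                  + zncc(a.sum(axis=1), b.sum(axis=1)))

rng = np.random.default_rng(3)
win = rng.random((32, 32))
shifted = np.roll(win, 1, axis=1) + 0.01 * rng.random((32, 32))
print(zncc(win, shifted), zncc_projections(win, shifted))
```

For a 32×32 window the projection measure touches 64 values per comparison instead of 1024, which is the source of the reported speed-up.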

  19. Parametric Loop Division for 3D Localization in Wireless Sensor Networks

    PubMed Central

    Ahmad, Tanveer

    2017-01-01

Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve localization accuracy. However, they are either limited to two-dimensional (2D) space or require specific sensor deployments for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known parametric Loop division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and find its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714
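To make the anchor/target setting concrete, here is a generic range-based 3-D localization baseline (Gauss-Newton multilateration), not the paper's PLD scheme; anchor positions and ranges are synthetic and noise-free:

```python
import numpy as np

anchors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
true_pos = np.array([3.0, 4.0, 2.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # measured distances

x = anchors.mean(axis=0)                 # start at the anchor centroid
for _ in range(20):
    diff = x - anchors
    dist = np.linalg.norm(diff, axis=1)
    J = diff / dist[:, None]             # Jacobian of the range model
    r = dist - ranges                    # range residuals
    x = x - np.linalg.lstsq(J, r, rcond=None)[0]
print(np.round(x, 3))
```

With four non-coplanar anchors and exact ranges, the iteration converges to the true position; range noise and anchor geometry then determine the accuracy floor that schemes like PLD try to push down.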

  20. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritsenko, Marina A.; Xu, Zhe; Liu, Tao

Comprehensive, quantitative information on abundances of proteins and their post-translational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labelling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples, and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.

  1. Large-Scale and Deep Quantitative Proteome Profiling Using Isobaric Labeling Coupled with Two-Dimensional LC-MS/MS.

    PubMed

    Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D

    2016-01-01

Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into disease pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.

  2. Direct 3-D morphological measurements of silicone rubber impression using micro-focus X-ray CT.

    PubMed

    Kamegawa, Masayuki; Nakamura, Masayuki; Fukui, Yu; Tsutsumi, Sadami; Hojo, Masaki

    2010-01-01

Three-dimensional computer models of dental arches play a significant role in prosthetic dentistry. The microfocus X-ray CT scanner has the advantage of capturing precise 3D shapes of deep fossae, and we propose a new method of measuring the three-dimensional morphology of a dental impression directly, eliminating the conversion process to dental casts. Measurement precision and accuracy were evaluated using a standard gauge composed of steel balls simulating the dental arch. Measurement accuracy, defined as the standard deviation of the distance distribution between superimposed models, was determined to be ±0.050 mm in comparison with a CAD model. Impressions and casts of an actual dental arch were scanned by microfocus X-ray CT and the three-dimensional models were compared. The impression model had finer morphology, especially around the cervical margins of the teeth. Within the limitations of the current study, direct three-dimensional impression modeling was successfully demonstrated using microfocus X-ray CT.

  3. 3D-printing zirconia implants; a dream or a reality? An in-vitro study evaluating the dimensional accuracy, surface topography and mechanical properties of printed zirconia implant and discs.

    PubMed

    Osman, Reham B; van der Veen, Albert J; Huiberts, Dennis; Wismeijer, Daniel; Alharbi, Nawal

    2017-11-01

The aim of this study was to evaluate the dimensional accuracy and surface topography of a custom-designed, 3D-printed zirconia dental implant and the mechanical properties of printed zirconia discs. A custom-designed implant was 3D-printed in zirconia using the digital light processing technique (DLP). The dimensional accuracy was assessed using the digital-subtraction technique. The mechanical properties were evaluated using the biaxial flexure strength test. Three build angles were used to print the specimens for the mechanical test: 0° (vertical), 45° (oblique) and 90° (horizontal). The surface topography, crystallographic phase structure and surface roughness were evaluated using scanning electron microscopy (SEM), X-ray diffractometry and confocal microscopy, respectively. The printed implant was dimensionally accurate, with a root mean square error (RMSE) of 0.1 mm. The Weibull analysis revealed a statistically significantly higher characteristic strength (1006.6 MPa) for the 0° printed specimens compared to the other two groups, and no significant difference between the 45° (892.2 MPa) and 90° (866.7 MPa) build angles. SEM analysis revealed cracks, micro-porosities and interconnected pores ranging in size from 196 nm to 3.3 µm. A mean Ra (arithmetic mean roughness) of 1.59 µm (±0.41) and Rq (root mean squared roughness) of 1.94 µm (±0.47) were found. A crystallographic phase of primarily tetragonal zirconia, typical of sintered yttria-stabilized tetragonal zirconia (Y-TZP), was detected. DLP proved to be efficient for printing customized zirconia dental implants with sufficient dimensional accuracy. The mechanical properties showed flexure strength close to that of conventionally produced ceramics. Optimization of the 3D-printing process parameters is still needed to improve the microstructure of the printed objects. Copyright © 2017 Elsevier Ltd. All rights reserved.
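Characteristic strength and Weibull modulus, as reported above, are usually extracted with a probability-plot fit: sort the strengths, assign median-rank failure probabilities, and regress ln(-ln(1-F)) against ln σ. The strengths below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

strengths = np.sort(np.array([812., 845., 880., 905., 930., 948.,
                              975., 1002., 1030., 1075.]))  # MPa, synthetic
n = len(strengths)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank estimator

x = np.log(strengths)
y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(x, y, 1)                       # slope = Weibull modulus
sigma0 = np.exp(-c / m)                          # characteristic strength (F = 63.2%)
print(f"modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")
```

The characteristic strength is the stress at which 63.2% of specimens have failed, which is why it sits above the sample median.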

  4. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two dimensional results (which are also computed to test the accuracy of the numerical scheme); however quantitatively the three dimensional results are much different.

  5. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the need for specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as a reanalysis of patient RSA radiographs. The data from the phantom experiment indicate that the accuracy and precision of the elementary geometrical shape model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.

  6. Advances in three-dimensional field analysis and evaluation of performance parameters of electrical machines

    NASA Astrophysics Data System (ADS)

    Sivasubramaniam, Kiruba

This thesis makes advances in three-dimensional finite element analysis of electrical machines and the quantification of their parameters and performance. The principal objectives of the thesis are: (1) the development of a stable and accurate method of nonlinear three-dimensional field computation and its application to electrical machinery and devices; and (2) improvement in the accuracy of determination of performance parameters, particularly forces and torques computed from finite elements. Contributions are made in two general areas: a more efficient formulation for three-dimensional finite element analysis, which saves time and improves accuracy, and new post-processing techniques to calculate flux density values from a given finite element solution. A novel three-dimensional magnetostatic solution based on a modified scalar potential method is implemented. This method has significant advantages over the traditional total scalar, reduced scalar or vector potential methods. The new method is applied to a 3D geometry of an iron core inductor and a permanent magnet motor. The results obtained are compared with those obtained from traditional methods, in terms of accuracy and speed of computation. A technique which has been observed to improve force computation in two-dimensional analysis using a local solution of Laplace's equation in the airgap of machines is investigated, and a similar method is implemented in the three-dimensional analysis of electromagnetic devices. A new integral formulation to improve force calculation from a smoother flux-density profile is also explored and implemented. Comparisons are made and conclusions drawn as to how much improvement is obtained and at what cost. This thesis also demonstrates the use of finite element analysis to analyze torque ripple due to rotor eccentricity in permanent magnet BLDC motors. A new method for analyzing torque harmonics, based on data obtained from a time-stepping finite element analysis of the machine, is explored and implemented.

  7. Self-dual random-plaquette gauge model and the quantum toric code

    NASA Astrophysics Data System (ADS)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-05-01

We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and the phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using the self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, pc = 0.889972…, and leads to several nontrivial results, including bounds on the accuracy threshold in three dimensions.

  8. A binary method for simple and accurate two-dimensional cursor control from EEG with minimal subject training.

    PubMed

    Kayagil, Turan A; Bai, Ou; Henriquez, Craig S; Lin, Peter; Furlani, Stephen J; Vorbach, Sherry; Hallett, Mark

    2009-05-06

    Brain-computer interfaces (BCI) use electroencephalography (EEG) to interpret user intention and control an output device accordingly. We describe a novel BCI method to use a signal from five EEG channels (comprising one primary channel with four additional channels used to calculate its Laplacian derivation) to provide two-dimensional (2-D) control of a cursor on a computer screen, with simple threshold-based binary classification of band power readings taken over pre-defined time windows during subject hand movement. We tested the paradigm with four healthy subjects, none of whom had prior BCI experience. Each subject played a game wherein he or she attempted to move a cursor to a target within a grid while avoiding a trap. We also present supplementary results including one healthy subject using motor imagery, one primary lateral sclerosis (PLS) patient, and one healthy subject using a single EEG channel without Laplacian derivation. For the four healthy subjects using real hand movement, the system provided accurate cursor control with little or no required user training. The average accuracy of the cursor movement was 86.1% (SD 9.8%), which is significantly better than chance (p = 0.0015). The best subject achieved a control accuracy of 96%, with only one incorrect bit classification out of 47. The supplementary results showed that control can be achieved under the respective experimental conditions, but with reduced accuracy. The binary method provides naïve subjects with real-time control of a cursor in 2-D using dichotomous classification of synchronous EEG band power readings from a small number of channels during hand movement. The primary strengths of our method are simplicity of hardware and software, and high accuracy when used by untrained subjects.
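The decision rule described above, thresholding a band-power reading to emit one bit, can be sketched on synthetic data: a 10 Hz mu rhythm over white noise that is attenuated during (simulated) movement, the event-related desynchronization the classifier exploits. The sampling rate, amplitudes and threshold are illustrative assumptions:

```python
import numpy as np

fs = 256                       # sampling rate, Hz (assumed)
t = np.arange(fs * 2) / fs     # one 2-second window

def band_power(sig, lo, hi):
    # Mean spectral power of `sig` in the [lo, hi] Hz band via the FFT.
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

rng = np.random.default_rng(4)
rest = rng.normal(0, 1, t.size) + 3.0 * np.sin(2 * np.pi * 10 * t)  # strong mu
move = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)  # ERD

threshold = 20.0               # tuned per subject in a real system
for name, sig in (("rest", rest), ("move", move)):
    bit = int(band_power(sig, 8, 12) < threshold)   # 1 = movement detected
    print(name, bit)
```

Two such bits, one from each pre-defined time window, are enough to select among four directions, which is how dichotomous classifications become 2-D cursor control.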

  9. The impact of different cone beam computed tomography and multi-slice computed tomography scan parameters on virtual three-dimensional model accuracy using a highly precise ex vivo evaluation method.

    PubMed

    Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian

    2016-05-01

    Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similar accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. Chebyshev collocation spectral method for one-dimensional radiative heat transfer in linearly anisotropic-scattering cylindrical medium

    NASA Astrophysics Data System (ADS)

    Zhou, Rui-Rui; Li, Ben-Wen

    2017-03-01

In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting and linearly anisotropic-scattering cylindrical medium. The general form of quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given. These expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are carried out to improve the accuracy near the origin. In this way, fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas the finite difference method (FDM) achieves only second-order accuracy. Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM efficiently obtains highly accurate results, especially for optically thin media. The solutions, rounded to seven significant digits, are given in tabular form and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solution of radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of RITE solutions.
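The collocation points and differentiation matrix that underpin a CCSM-type solver can be sketched with the standard Chebyshev-Gauss-Lobatto construction (a textbook recipe, not this paper's code; `N` is an assumed polynomial degree):

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points x and differentiation matrix D on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)          # collocation points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T                      # X[i, j] = x[i]
    dX = X - X.T                                      # pairwise differences
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # rows sum to zero
    return D, x

D, x = cheb(8)
```

Because the matrix differentiates polynomials up to degree N exactly, `D @ x**2` reproduces `2*x` to machine precision.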

  11. Accuracy study of computer-assisted drilling: the effect of bone density, drill bit characteristics, and use of a mechanical guide.

    PubMed

    Hüfner, T; Geerling, J; Oldag, G; Richter, M; Kfuri, M; Pohlemann, T; Krettek, C

    2005-01-01

This study was designed to determine the clinically relevant accuracy of CT-based navigation for drilling. Experimental model. Laboratory. Twelve drills of varying lengths and diameters were tested with 2 different set-ups. Group 1 used a free-hand navigated drilling technique with foam blocks equipped with titanium target points. Group 2 (control) used a newly developed 3-dimensional measurement device equipped with titanium target points, with a fixed entry for the navigated drill to minimize bending forces. One examiner performed 690 navigated drillings using solely the monitor screen for control in both groups. The difference between the planned and the actual starting and target points (up to 150 mm distance) was measured in mm. Levene test and a nonpaired t test; the significance level was set at P < 0.05. The core accuracy of the navigation system measured with the 3-dimensional device was 0.5 mm. The mean distance from planned to actual entry points in group 1 was 1.3 mm (range, 0.6-3.4 mm). The mean distance between planned and actual target points was 3.4 mm (range, 1.7-5.8 mm). Free-hand navigated drilling showed increasing deviation with increasing drill bit length, as well as with increasing drilling channel length for the 2.5 and 3.2 mm drill bits but not for the 3.5 and 4.5 mm bits (P < 0.05). The core accuracy of the navigation system is high. Compared with the navigated free-hand technique, the results suggest that drill bit deflection interferes directly with precision. Precision decreases when using smaller-diameter and longer drill bits.

  12. Comparison of laser anemometer measurements and theory in an annular turbine cascade with experimental accuracy determined by parameter estimation

    NASA Technical Reports Server (NTRS)

    Goldman, L. J.; Seasholtz, R. G.

    1982-01-01

Experimental measurements of the velocity components in the blade-to-blade (axial-tangential) plane were obtained in an axial flow turbine stator passage and were compared with calculations from three turbomachinery computer programs. The theoretical results were calculated from a quasi-three-dimensional inviscid code, a three-dimensional inviscid code, and a three-dimensional viscous code. Parameter estimation techniques and a particle dynamics calculation were used to assess the accuracy of the laser measurements, which allows a rational basis for comparison of the experimental and theoretical results. The general agreement of the experimental data with the results from the two inviscid computer codes indicates the usefulness of these calculation procedures for turbomachinery blading. The comparison with the viscous code, while generally reasonable, was not as good as for the inviscid codes.

  13. Improved finite element methodology for integrated thermal structural analysis

    NASA Technical Reports Server (NTRS)

    Dechaumphai, P.; Thornton, E. A.

    1982-01-01

    An integrated thermal-structural finite element approach for efficient coupling of thermal and structural analysis is presented. New thermal finite elements which yield exact nodal and element temperatures for one dimensional linear steady state heat transfer problems are developed. A nodeless variable formulation is used to establish improved thermal finite elements for one dimensional nonlinear transient and two dimensional linear transient heat transfer problems. The thermal finite elements provide detailed temperature distributions without using additional element nodes and permit a common discretization with lower order congruent structural finite elements. The accuracy of the integrated approach is evaluated by comparisons with analytical solutions and conventional finite element thermal structural analyses for a number of academic and more realistic problems. Results indicate that the approach provides a significant improvement in the accuracy and efficiency of thermal stress analysis for structures with complex temperature distributions.

  14. Augmented Reality Using Transurethral Ultrasound for Laparoscopic Radical Prostatectomy: Preclinical Evaluation.

    PubMed

    Lanchon, Cecilia; Custillon, Guillaume; Moreau-Gaudry, Alexandre; Descotes, Jean-Luc; Long, Jean-Alexandre; Fiard, Gaelle; Voros, Sandrine

    2016-07-01

To guide the surgeon during laparoscopic or robot-assisted radical prostatectomy, an innovative laparoscopic/ultrasound fusion platform was developed using a motorized 3-dimensional transurethral ultrasound probe. We present what is to our knowledge the first preclinical evaluation of 3-dimensional prostate visualization using transurethral ultrasound and the preliminary results of this new augmented reality. The transurethral probe and laparoscopic/ultrasound registration were tested on realistic prostate phantoms made of standard polyvinyl chloride. The quality of transurethral ultrasound images and the detection of passive markers placed on the prostate surface were evaluated on 2-dimensional dynamic views and 3-dimensional reconstructions. The feasibility, precision and reproducibility of laparoscopic/transurethral ultrasound registration were then determined using 4, 5, 6 and 7 markers to assess the optimal number needed. The root mean square error was calculated for each registration, and the median root mean square error and IQR were calculated according to the number of markers. The transurethral ultrasound probe was easy to manipulate and the prostatic capsule was well visualized in 2 and 3 dimensions. Passive markers could be precisely localized in the volume. Laparoscopic/transurethral ultrasound registration procedures were performed on 74 phantoms of various sizes and shapes. All were successful. The median root mean square error of 1.1 mm (IQR 0.8-1.4) was significantly associated with the number of landmarks (p = 0.001). The highest accuracy was achieved using 6 markers. However, prostate volume did not affect registration precision. Transurethral ultrasound provided high quality prostate reconstruction and easy marker detection. Laparoscopic/ultrasound registration was successful with acceptable mm-level precision. Further investigations are necessary to achieve sub-mm accuracy and assess feasibility in a human model. 
Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
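The root mean square registration error reported in this record can be computed as in this minimal sketch (illustrative only; the landmark coordinates below are made up):

```python
import numpy as np

def registration_rmse(measured, reference):
    """Root mean square Euclidean error between matched 3-D landmark sets (mm)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    d = np.linalg.norm(measured - reference, axis=1)   # per-landmark error
    return np.sqrt(np.mean(d ** 2))

# Four reference markers and their registered positions, each off by 1 mm:
ref = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
meas = ref + np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
```

Computing the median of such per-registration RMSE values across repeated trials yields the summary statistic (median, IQR) reported above.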

  15. Memory color of natural familiar objects: effects of surface texture and 3-D shape.

    PubMed

    Vurro, Milena; Ling, Yazhu; Hurlbert, Anya C

    2013-06-28

Natural objects typically possess characteristic contours, chromatic surface textures, and three-dimensional shapes. These diagnostic features aid object recognition, as does memory color, the color most associated in memory with a particular object. Here we aim to determine whether polychromatic surface texture, 3-D shape, and contour diagnosticity improve memory color for familiar objects, separately and in combination. We use solid three-dimensional familiar objects rendered with their natural texture, which participants adjust in real time to match their memory color for the object. We analyze the mean, accuracy, and precision of the memory color settings relative to the natural color of the objects under the same conditions. We find that in all conditions, memory colors deviate slightly but significantly in the same direction from the natural color. Surface polychromaticity, shape diagnosticity, and three-dimensionality each improve memory color accuracy, relative to uniformly colored, generic, or two-dimensional shapes, respectively. Shape diagnosticity also improves the precision of memory color, and there is a trend for polychromaticity to do so as well. Unlike other studies, we find that the object contour alone also improves memory color. Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. The results support the hypothesis that memory color representations are polychromatic and are synergistically linked with diagnostic shape representations.

  16. The accuracy of three-dimensional fused deposition modeling (FDM) compared with three-dimensional CT-Scans on the measurement of the mandibular ramus vertical length, gonion-menton length, and gonial angle

    NASA Astrophysics Data System (ADS)

    Savitri, I. T.; Badri, C.; Sulistyani, L. D.

    2017-08-01

Presurgical treatment planning plays an important role in the reconstruction and correction of defects in the craniomaxillofacial region. The advance of solid freeform fabrication techniques has significantly improved the process of preparing a biomodel using computer-aided design and data from medical imaging. Many factors affect the accuracy of a 3D model. The aim of this study was to determine the accuracy of three-dimensional fused deposition modeling (FDM) models compared with three-dimensional CT scans in the measurement of the mandibular ramus vertical length, gonion-menton length, and gonial angle. Eight 3D models were produced from the CT scan data (DICOM files) of eight patients at the Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Indonesia, Cipto Mangunkusumo Hospital. Three measurements were taken three times by two examiners. The measurements of the 3D CT scans were made using OsiriX software, while the measurements of the 3D models were made using a digital caliper and goniometry. The measurement results were then compared. There was no significant difference between the measurements of the mandibular ramus vertical length, gonion-menton length, and gonial angle using 3D CT scans and FDM 3D models. FDM 3D models are considered accurate and are acceptable for clinical applications in dental and craniomaxillofacial surgery.

  17. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

Color is one of the most stable attributes of vehicles and is often used as a valuable cue in important applications. Various complex environmental factors, such as illumination, weather, and noise, cause the visual characteristics of vehicle color to vary considerably, making vehicle color recognition in complex environments a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels, and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features, to obtain the recognition model. The experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over state-of-the-art methods.

  18. EEG channels reduction using PCA to increase XGBoost's accuracy for stroke detection

    NASA Astrophysics Data System (ADS)

    Fitriah, N.; Wijaya, S. K.; Fanany, M. I.; Badri, C.; Rezal, M.

    2017-07-01

In Indonesia, based on the results of the Basic Health Research 2013, the prevalence of stroke had increased from 8.3 ‰ (2007) to 12.1 ‰ (2013). These days, some researchers are using electroencephalography (EEG) results as another option to detect stroke besides CT scan images as the gold standard. A previous study on data from stroke and healthy patients at the National Brain Center Hospital (RS PON) used the Brain Symmetry Index (BSI), Delta-Alpha Ratio (DAR), and Delta-Theta-Alpha-Beta Ratio (DTABR) as features for classification by an Extreme Learning Machine (ELM). The study achieved 85% accuracy with sensitivity above 86% for acute ischemic stroke detection. Using EEG data means dealing with many data dimensions, which can reduce the accuracy of the classifier (the curse of dimensionality). Principal Component Analysis (PCA) can reduce dimensionality and computation cost without decreasing classification accuracy. XGBoost, a scalable tree boosting classifier, can solve real-world scale problems (the Higgs Boson and Allstate datasets) using a minimal amount of resources. This paper reuses the same data from RS PON and the features from the previous research, preprocessed with PCA and classified with XGBoost, to increase the accuracy with fewer electrodes. Specific smaller sets of electrodes improved the accuracy of stroke detection. Our future work will examine algorithms other than PCA to achieve higher accuracy with fewer channels.
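The PCA preprocessing step this record describes can be sketched with an SVD-based projection (a generic illustration, not the study's pipeline; the feature matrix is synthetic, and the subsequent XGBoost training step is omitted):

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA via SVD of the centered data; returns (mean, components)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]          # rows are principal directions

def pca_transform(X, mu, components):
    """Project data onto the retained principal components."""
    return (X - mu) @ components.T

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 64))            # e.g. 100 epochs x 64 EEG-derived features
mu, comps = pca_fit(X, n_components=8)    # keep 8 components (hypothetical choice)
Z = pca_transform(X, mu, comps)           # reduced features fed to the classifier
```

The reduced matrix `Z` would then be passed to a gradient-boosted tree classifier such as XGBoost in place of the raw high-dimensional features.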

  19. Edge detection and localization with edge pattern analysis and inflection characterization

    NASA Astrophysics Data System (ADS)

    Jiang, Bo

    2012-05-01

In general, edges are considered to be abrupt changes or discontinuities in two-dimensional image signal intensity distributions. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research on one-dimensional edge pattern analysis proposes an edge detection algorithm built on three basic edge patterns: ramp, impulse, and step. After mathematical analysis, general rules for edge representation based upon the classification of edge types into these three categories (ramp, impulse, and step, or RIS) are developed to reduce detection and localization errors, especially the "double edge" effect that is one important drawback of derivative methods. However, when applying one-dimensional edge patterns in two-dimensional image processing, a new issue naturally arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory points out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Accordingly, aside from the adoption of edge pattern analysis, inflection and junction characterization is also utilized to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about edge detection and localization accuracy improvements. The results support the idea that these edge detection method improvements are effective in enhancing the accuracy of edge detection and localization.

  20. High-order central ENO finite-volume scheme for hyperbolic conservation laws on three-dimensional cubed-sphere grids

    NASA Astrophysics Data System (ADS)

    Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.

    2015-02-01

    A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. 
Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.

  1. High precise measurement of tiny angle dimensional holes for the unit-holes of the LAMOST Focal Plane Plate

    NASA Astrophysics Data System (ADS)

    Zhou, Zengxiang; Jin, Yi; Zhai, Chao; Xing, Xiaozheng

    2008-07-01

In the LAMOST project, the unit-holes on the Focal Plane Plate are the final installation locations of the optical fiber positioning system. Their precision will influence the observation efficiency of LAMOST. Owing to these unique requirements, the unit-holes on the Focal Plane Plate are composed of a series of tiny-angle dimensional holes whose dimensional angles lie between 16' and 2.5°. According to these requirements, the measurement accuracy of the tiny-angle dimensional holes for the unit-holes needs to be better than 3'. All the unit-holes point to the virtual sphere center of the Focal Plane Plate. To that end, the angular departure of a unit-hole axis is converted to the distance from the virtual sphere center of the Focal Plane Plate to the unit-hole axis, which is a better way to evaluate the technical requirements on the dimensional angle errors. In the measuring process, common measuring methods on a CMM (coordinate measuring machine) do not fit the tiny-angle dimensional holes. An alternative way to solve this problem is to insert a measuring stick carrying a target ball into a unit-hole, measure the low position of the ball center, then pull out the stick and measure the high position of the center. Finally, the two points are used to calculate the unit-hole axis and obtain the angular departure. On the other hand, using this method introduces extra errors from the measuring stick and the target ball. To better analyze this question, a series of experiments are presented in this paper, which testify that the influence of the measuring implement is small, and that increasing the distance between the low-point and high-point positions in the measuring process enhances the accuracy of the dimensional angle measurement.

  2. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

Background Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, on classification accuracy and time consumption. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly, whereas in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and time consumption holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) could achieve better accuracy and take less time. Conclusions/Significance The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data, combined with voxel selection methods. 
Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping some of the principal components as features, is a better choice. PMID:21359184

  3. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, on classification accuracy and time consumption. Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly, whereas in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and time consumption holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) could achieve better accuracy and take less time. The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data, combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping some of the principal components as features, is a better choice.
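The difference between the two kernels compared in this record can be illustrated with their Gram-matrix computations (a generic sketch, not the study's code; `gamma` is an assumed kernel width):

```python
import numpy as np

def linear_kernel(A, B):
    """Linear kernel: K[i, j] = <a_i, b_j>."""
    return A @ B.T

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel: K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    sq = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

X = np.eye(3)                 # three orthogonal unit "voxel patterns"
K_lin = linear_kernel(X, X)   # identity: the patterns are uncorrelated
K_rbf = rbf_kernel(X, X)      # diagonal 1, off-diagonal exp(-2)
```

Either Gram matrix could be handed to a kernel SVM solver; the RBF kernel's nonlinearity is what gives it an edge in the low-dimensional feature spaces described above.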

  4. The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates

    NASA Astrophysics Data System (ADS)

    D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier

    2018-01-01

This paper extends the variable-kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF) to composite plates that include piezoelectric plies. Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate), and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with a domain approximation expressed in an orthogonal basis spanned by Legendre polynomials. The proposed computational tool is capable of representing electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment is proposed to show the extent to which the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the results.

  5. Integrated calibration of multiview phase-measuring profilometry

    NASA Astrophysics Data System (ADS)

    Lee, Yeong Beum; Kim, Min H.

    2017-11-01

Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates. Therefore, a PMP measurement from one view cannot be integrated directly with measurements from other views due to the intrinsic difference of the screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to the principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly, so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes the intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static, without requiring any mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of the measurements and demonstrate the advantages of our multiview PMP.
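The per-pixel phase recovery at the heart of PMP can be sketched with the classic four-step phase-shifting formula (a textbook construction, not this paper's multiview method; the background, modulation, and phase values below are made up):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four intensity images shifted by pi/2 each."""
    # I_k = a + b*cos(phi + k*pi/2), so i4 - i2 = 2b*sin(phi), i1 - i3 = 2b*cos(phi).
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringe intensities at a known phase phi0:
phi0 = 0.7
a, b = 0.5, 0.4                          # background and modulation amplitude
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
imgs = [a + b * np.cos(phi0 + s) for s in shifts]
phase = four_step_phase(*imgs)
```

The recovered wrapped phase, after unwrapping and calibration, is what maps to per-pixel height in a PMP system.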

  6. 3D documentation of footwear impressions and tyre tracks in snow with high resolution optical surface scanning.

    PubMed

    Buck, Ursula; Albertini, Nicola; Naether, Silvio; Thali, Michael J

    2007-09-13

    Three-dimensional documentation of footwear and tyre impressions in snow can capture finer detail for identification than photographs alone. Until now, various casting methods have been used for this purpose, and casting footwear impressions in snow has always been a difficult assignment. This work demonstrates that the non-destructive method of 3D optical surface scanning is well suited to the three-dimensional documentation of impressions in snow. The new method delivers more detailed results of higher accuracy than conventional casting techniques. The results obtained with this easy-to-use, mobile 3D optical surface scanner were very satisfactory under different meteorological and snow conditions. The method is also suitable for impressions in soil, sand, or other materials. In addition to the side-by-side comparison, the automatic comparison of the 3D models and the computation of deviations and data accuracy simplify the examination and deliver objective and reliable results. The results can be visualized efficiently, and data exchange between investigating authorities at a national or international level is easily achieved with electronic data carriers.

  7. Parallel Tensor Compression for Large-Scale Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
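    The Tucker compression idea can be illustrated in a few lines of NumPy via the truncated higher-order SVD, a standard single-node way to compute a Tucker decomposition. This is a sketch only: the paper's distributed-memory implementation and data layouts are not reproduced, and the tensor sizes below are toy values.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_compress(T, ranks):
    """Truncated higher-order SVD: returns the core tensor and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                  # leading r left singular vectors
    core = T
    for mode, U in enumerate(factors):            # project onto each factor basis
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

rng = np.random.default_rng(0)
# Synthetic low-multilinear-rank 3-way tensor plus mild noise
A, B, C = (rng.standard_normal((n, 4)) for n in (40, 50, 60))
T = reconstruct(rng.standard_normal((4, 4, 4)), [A, B, C])
T += 1e-6 * rng.standard_normal(T.shape)

core, factors = hosvd_compress(T, (4, 4, 4))
That = reconstruct(core, factors)
rel_err = np.linalg.norm(That - T) / np.linalg.norm(T)
compression = T.size / (core.size + sum(U.size for U in factors))
```

    For this rank-(4, 4, 4) tensor of shape 40 x 50 x 60, storing the core and factors in place of the full tensor gives a compression ratio of roughly 180 with near-zero reconstruction error, the same trade the paper exploits at vastly larger scale.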

  8. Plastic Surgery Applications Using Three-Dimensional Planning and Computer-Assisted Design and Manufacturing.

    PubMed

    Pfaff, Miles J; Steinbacher, Derek M

    2016-03-01

    Three-dimensional analysis and planning is a powerful tool in plastic and reconstructive surgery, enabling improved diagnosis, patient education and communication, and intraoperative transfer to achieve the best possible results. Three-dimensional planning can increase efficiency and accuracy, and entails five core components: (1) analysis, (2) planning, (3) virtual surgery, (4) three-dimensional printing, and (5) comparison of planned to actual results. The purpose of this article is to provide an overview of three-dimensional virtual planning and a framework for applying these systems to clinical practice. Level of Evidence: Therapeutic, V.

  9. Detecting atrial fibrillation by deep convolutional neural networks.

    PubMed

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix inputs suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method requires neither detection of P or R peaks nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT achieved a sensitivity of 98.34%, a specificity of 98.24%, and an accuracy of 98.29%. The deep convolutional neural network using input generated by SWT achieved a sensitivity of 98.79%, a specificity of 97.87%, and an accuracy of 98.63%. The proposed method shows high sensitivity, specificity, and accuracy, and is therefore a valuable tool for AF detection.
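    As a rough illustration of the first step, the following NumPy-only sketch converts a 5 s one-dimensional segment into a 2-D time-frequency magnitude image of the kind a CNN could consume. The sampling rate, window, and hop length are assumed values for illustration, not the paper's settings, and a synthetic sine mixture stands in for a real ECG.

```python
import numpy as np

def stft_magnitude(x, frame_len=128, hop=32):
    """Short-time Fourier transform magnitude: a 2-D time-frequency image."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

fs = 250                                  # assumed sampling rate, Hz
t = np.arange(5 * fs) / fs                # a 5 s segment, as in the paper
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)  # toy signal
img = stft_magnitude(x)
print(img.shape)                          # → (65, 36)
```

    The resulting matrix is what gets fed to the 2-D convolutional network; the SWT branch of the paper produces an analogous multi-channel 2-D input from wavelet subbands.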

  10. Extra-dimensional Demons: a method for incorporating missing tissue in deformable image registration.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Reh, Douglas D; Gallia, Gary L; Siewerdsen, Jeffrey H

    2012-09-01

    A deformable registration method capable of accounting for missing tissue (e.g., excision) is reported for application in cone-beam CT (CBCT)-guided surgical procedures. Excisions are identified by a segmentation step performed simultaneous to the registration process. Tissue excision is explicitly modeled by increasing the dimensionality of the deformation field to allow motion beyond the dimensionality of the image. The accuracy of the model is tested in phantom, simulations, and cadaver models. A variant of the Demons deformable registration algorithm is modified to include excision segmentation and modeling. Segmentation is performed iteratively during the registration process, with initial implementation using a threshold-based approach to identify voxels corresponding to "tissue" in the moving image and "air" in the fixed image. With each iteration of the Demons process, every voxel is assigned a probability of excision. Excisions are modeled explicitly during registration by increasing the dimensionality of the deformation field so that both deformations and excisions can be accounted for by in- and out-of-volume deformations, respectively. The out-of-volume (i.e., fourth) component of the deformation field at each voxel carries a magnitude proportional to the excision probability computed in the excision segmentation step. The registration accuracy of the proposed "extra-dimensional" Demons (XDD) and conventional Demons methods was tested in the presence of missing tissue in phantom models, simulations investigating the effect of excision size on registration accuracy, and cadaver studies emulating realistic deformations and tissue excisions imparted in CBCT-guided endoscopic skull base surgery. 
Phantom experiments showed the normalized mutual information (NMI) in regions local to the excision to improve from 1.10 for the conventional Demons approach to 1.16 for XDD, and qualitative examination of the resulting images revealed major differences: the conventional Demons approach imparted unrealistic distortions in areas around tissue excision, whereas XDD provided accurate "ejection" of voxels within the excision site and maintained the registration accuracy throughout the rest of the image. Registration accuracy in areas far from the excision site (e.g., > ∼5 mm) was identical for the two approaches. Quantitation of the effect was consistent in analysis of NMI, normalized cross-correlation (NCC), target registration error (TRE), and accuracy of voxels ejected from the volume (true-positive and false-positive analysis). The registration accuracy for conventional Demons was found to degrade steeply as a function of excision size, whereas XDD was robust in this regard. Cadaver studies involving realistic excision of the clivus, vidian canal, and ethmoid sinuses demonstrated similar results, with unrealistic distortion of anatomy imparted by conventional Demons and accurate ejection and deformation for XDD. Adaptation of the Demons deformable registration process to include segmentation (i.e., identification of excised tissue) and an extra dimension in the deformation field provided a means to accurately accommodate missing tissue between image acquisitions. The extra-dimensional approach yielded accurate "ejection" of voxels local to the excision site while preserving the registration accuracy (typically subvoxel) of the conventional Demons approach throughout the rest of the image. The ability to accommodate missing tissue volumes is important to application of CBCT for surgical guidance (e.g., skull base drillout) and may have application in other areas of CBCT guidance.

  11. Skin inspired fractal strain sensors using a copper nanowire and graphite microflake hybrid conductive network.

    PubMed

    Jason, Naveen N; Wang, Stephen J; Bhanushali, Sushrut; Cheng, Wenlong

    2016-09-22

    This work demonstrates a facile "paint-on" approach to fabricate highly stretchable and highly sensitive strain sensors by combining one-dimensional copper nanowire networks with two-dimensional graphite microflakes. This paint-on approach allows for the fabrication of electronic skin (e-skin) patches which can directly replicate with high fidelity the human skin surface they are on, regardless of the topological complexity. This leads to high accuracy for detecting biometric signals for applications in personalised wearable sensors. The copper nanowires contribute to high stretchability and the graphite flakes offer high sensitivity, and their hybrid coating offers the advantages of both. To understand the topological effects on the sensing performance, we utilized fractal shaped elastomeric substrates and systematically compared their stretchability and sensitivity. We could achieve a high stretchability of up to 600% and a maximum gauge factor of 3000. Our simple yet efficient paint-on approach enabled facile fine-tuning of sensitivity/stretchability simply by adjusting ratios of 1D vs. 2D materials in the hybrid coating, and the topological structural designs. This capability leads to a wide range of biomedical sensors demonstrated here, including pulse sensors, prosthetic hands, and a wireless ankle motion sensor.

  12. One-Dimensional Convective Thermal Evolution Calculation Using a Modified Mixing Length Theory: Application to Saturnian Icy Satellites

    NASA Astrophysics Data System (ADS)

    Kamata, Shunichi

    2018-01-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection, for a bottom-heated convective layer. Adopting this new definition of l, I investigate the thermal evolution of Saturnian icy satellites, Dione and Enceladus, under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a thick global subsurface ocean suggested from geophysical analyses. Dynamical tides may be able to account for such an amount of heat, though the reference viscosity of Dione's ice and the ammonia content of Dione's ocean need to be very high. Otherwise, a thick global ocean in Dione cannot be maintained, implying that its shell is not in a minimum stress state.

  13. Computerized tomography with 3-dimensional reconstruction for the evaluation of renal size and arterial anatomy in the living kidney donor.

    PubMed

    Janoff, Daniel M; Davol, Patrick; Hazzard, James; Lemmers, Michael J; Paduch, Darius A; Barry, John M

    2004-01-01

    Computerized tomography (CT) with 3-dimensional (3-D) reconstruction has gained acceptance as an imaging study to evaluate living renal donors. We report our experience with this technique in 199 consecutive patients to validate its predictions of arterial anatomy and kidney volumes. Between January 1997 and March 2002, 199 living donor nephrectomies were performed at our institution using an open technique. During the operation arterial anatomy was recorded as well as kidney weight in 98 patients and displacement volume in 27. Each donor had been evaluated preoperatively by CT angiography with 3-D reconstruction. Arterial anatomy described by a staff radiologist was compared with intraoperative findings. CT estimated volumes were reported. Linear correlation graphs were generated to assess the reliability of CT volume predictions. The accuracy of CT angiography for predicting arterial anatomy was 90.5%. However, as the number of renal arteries increased, predictive accuracy decreased. The ability of CT to predict multiple arteries remained high with a positive predictive value of 95.2%. Calculated CT volume and kidney weight significantly correlated (0.654). However, the coefficient of variation index (how much average CT volume differed from measured intraoperative volume) was 17.8%. CT angiography with 3-D reconstruction accurately predicts arterial vasculature in more than 90% of patients and it can be used to compare renal volumes. However, accuracy decreases with multiple renal arteries and volume comparisons may be inaccurate when the difference in kidney volumes is within 17.8%.

  14. Design on wireless auto-measurement system for lead rail straightness measurement based on PSD

    NASA Astrophysics Data System (ADS)

    Yan, Xiugang; Zhang, Shuqin; Dong, Dengfeng; Cheng, Zhi; Wu, Guanghua; Wang, Jie; Zhou, Weihu

    2016-10-01

    Straightness detection is one of the key technologies for ensuring the product quality and installation accuracy of all types of lead rail, and an important dimensional measurement technology. The straightness measuring devices now available suffer from a low level of automation, limitation by the measuring environment, and low measurement efficiency. In this paper, a wireless measurement system for straightness detection based on a position sensitive detector (PSD) is proposed. The system offers a high level of automation, convenience, high measurement efficiency, and easy porting and extension, and can detect the straightness of a lead rail in real time.
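    Independently of the hardware, a straightness error is commonly evaluated by removing a best-fit reference line from the measured lateral deviations and taking the peak-to-valley residual. A minimal sketch, with hypothetical PSD readings:

```python
import numpy as np

def straightness(z, y):
    """Peak-to-valley straightness error after removing the least-squares reference line."""
    slope, intercept = np.polyfit(z, y, 1)      # best-fit reference line
    residual = y - (slope * z + intercept)
    return residual.max() - residual.min()

z = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # positions along the rail, mm
y = np.array([0.00, 0.02, 0.05, 0.01, 0.03])      # lateral deviations, mm (made up)
err = straightness(z, y)
```

    Because the reference line is fitted, the result is invariant to any overall tilt of the rail relative to the measurement axis, which is what makes the laser-plus-PSD arrangement practical.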

  15. Response assessment in neuro-oncology.

    PubMed

    Quant, Eudocia C; Wen, Patrick Y

    2011-02-01

    Accuracy and reproducibility in determining response to therapy and tumor progression can be difficult to achieve for nervous system tumors. Current response criteria vary depending on the pathology and have several limitations. Until recently, the most widely used criteria for gliomas were "Macdonald criteria," based on two-dimensional tumor measurements on neuroimaging studies. However, the Response Assessment in Neuro-Oncology (RANO) Working Group has published new recommendations in high-grade gliomas and is working on recommendations for other nervous system tumors. This article reviews current response criteria for high-grade glioma, low-grade glioma, brain metastasis, meningioma, and schwannoma.

  16. BRDF-dependent accuracy of array-projection-based 3D sensors.

    PubMed

    Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther

    2017-03-10

    In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.

  17. IR Spectra of (HCOOH)2 and (DCOOH)2: Experiment, VSCF/VCI, and Ab Initio Molecular Dynamics Calculations Using Full-Dimensional Potential and Dipole Moment Surfaces.

    PubMed

    Qu, Chen; Bowman, Joel M

    2018-05-17

    We report quantum VSCF/VCI and ab initio molecular dynamics (AIMD) calculations of the IR spectra of (HCOOH)2 and (DCOOH)2, using full-dimensional, ab initio potential energy and dipole moment surfaces (PES and DMS). These surfaces are fits, using permutationally invariant polynomials, to 13,475 ab initio CCSD(T)-F12a electronic energies and MP2 dipole moments. Here "AIMD" means using these ab initio potential and dipole moment surfaces in the MD calculations. The VSCF/VCI calculations use all (24) normal modes for coupling, with a four-mode representation of the potential. The quantum spectra align well with jet-cooled and room-temperature experimental spectra over the spectral range 600-3600 cm-1. Analyses of the complex O-H and C-H stretch bands are made based on the mixing of the VSCF/VCI basis functions. The comparisons of the AIMD IR spectra with both experimental and VSCF/VCI ones provide tests of the accuracy of the AIMD approach. These indicate good accuracy for simple bands but not for the complex O-H stretch band, which is upshifted from experimental and VSCF/VCI bands by roughly 300 cm-1. In addition to testing the AIMD approach, the PES, DMS, and VSCF/VCI calculations for formic acid dimer provide opportunities for testing other methods to represent high-dimensional data and other methods that perform postharmonic vibrational calculations.

  18. The measurement of an aspherical mirror by three-dimensional nanoprofiler

    NASA Astrophysics Data System (ADS)

    Tokuta, Yusuke; Okita, Kenya; Okuda, Kohei; Kitayama, Takao; Nakano, Motohiro; Nakatani, Shun; Kudo, Ryota; Yamamura, Kazuya; Endo, Katsuyoshi

    2015-09-01

    Aspherical optical elements with high accuracy are important in several fields, such as third-generation synchrotron radiation and extreme-ultraviolet lithography. Demand is therefore rising for measurement methods that handle aspherical or free-form surfaces with nanometer resolution. Our purpose is to develop a non-contact profiler that measures free-form surfaces directly with a figure-error repeatability of less than 1 nm PV. To achieve this we have developed a three-dimensional nanoprofiler that traces the normal vectors of the sample surface. The measurement principle is based on the straightness of laser light and the accuracy of a rotational goniometer. The machine consists of four rotational stages, one translational stage, and an optical head that holds a quadrant photodiode (QPD) and a laser head at optically equal positions. In this measurement method, we align the incident beam with the reflected beam by controlling the five stages, and determine the normal vectors and coordinates of the surface from the signals of the goniometers, the translational stage, and the QPD. A reconstruction algorithm then yields the three-dimensional figure from the normal vectors and coordinates. To evaluate the performance of this machine we measured a concave aspherical mirror ten times, calculated the measurement repeatability from the ten results, and evaluated the measurement uncertainty by comparison with an interferometer measurement. The repeatability was 2.90 nm (σ) and the difference between the two profiles was +/-20 nm. We conclude that the two profiles correspond, considering the systematic errors of each machine.

  19. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called sufficient forecasting, which provides a set of sufficient predictive indices, inferred from the high-dimensional predictors, to deliver additional predictive power. Projected principal component analysis is employed to enhance the accuracy of the inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between sufficient forecasting and the deep learning architecture is explicitly stated. Sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537

  20. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    PubMed

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets.
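    A minimal sketch of the profile-based idea: treat each compound's binary activity profile over known targets as its feature vector and fit a Bernoulli naive Bayes classifier for activity against a new target. The data below are synthetic, with an artificial correlation between the new target and a block of profile targets; this illustrates the modeling principle, not the authors' exact formulation.

```python
import numpy as np

def bernoulli_nb_fit(X, y, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing on binary activity profiles."""
    classes = np.unique(y)
    log_priors = np.log(np.array([(y == c).mean() for c in classes]))
    theta = np.array([(X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
                      for c in classes])            # per-class feature probabilities
    return classes, log_priors, theta

def bernoulli_nb_predict(X, classes, log_priors, theta):
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
    return classes[np.argmax(log_lik + log_priors, axis=1)]

rng = np.random.default_rng(7)
n, t = 300, 50                        # compounds x profile targets (toy sizes)
y = rng.integers(0, 2, n)             # activity against the novel target
# Correlation structure: actives also tend to hit profile targets 0-9
p_active = np.where(np.arange(t) < 10, 0.7, 0.2)
p_inactive = np.full(t, 0.2)
X = (rng.random((n, t)) < np.where(y[:, None] == 1, p_active, p_inactive)).astype(float)

classes, lp, th = bernoulli_nb_fit(X[:200], y[:200])
acc = (bernoulli_nb_predict(X[200:], classes, lp, th) == y[200:]).mean()
```

    The classifier succeeds only because of the cross-target correlation planted in the data, mirroring the paper's finding that such correlation effects are what make structure-free, profile-based prediction work.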

  1. The functional equation truncation method for approximating slow invariant manifolds: a rapid method for computing intrinsic low-dimensional manifolds.

    PubMed

    Roussel, Marc R; Tang, Terry

    2006-12-07

    A slow manifold is a low-dimensional invariant manifold to which trajectories nearby are rapidly attracted on the way to the equilibrium point. The exact computation of the slow manifold simplifies the model without sacrificing accuracy on the slow time scales of the system. The Maas-Pope intrinsic low-dimensional manifold (ILDM) [Combust. Flame 88, 239 (1992)] is frequently used as an approximation to the slow manifold. This approximation is based on a linearized analysis of the differential equations and thus neglects curvature. We present here an efficient way to calculate an approximation equivalent to the ILDM. Our method, called functional equation truncation (FET), first develops a hierarchy of functional equations involving higher derivatives which can then be truncated at second-derivative terms to explicitly neglect the curvature. We prove that the ILDM and FET-approximated (FETA) manifolds are identical for the one-dimensional slow manifold of any planar system. In higher-dimensional spaces, the ILDM and FETA manifolds agree to numerical accuracy almost everywhere. Solution of the FET equations is, however, expected to generally be faster than the ILDM method.

  2. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects.

  3. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds rather than the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data challenges current analysis techniques, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction gives better class separation and yields better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
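    One level of the Haar reduction is just pairwise scaled averaging of neighboring spectral bands, which is why it is so fast; each level halves the number of features per pixel. A minimal sketch on a toy spectral signature (illustrative only; band count and values are made up):

```python
import numpy as np

def haar_reduce(spectrum):
    """One level of the Haar DWT, keeping only the approximation coefficients
    (scaled pairwise averages of neighboring bands)."""
    s = np.asarray(spectrum, dtype=float)
    if len(s) % 2:                        # pad odd-length spectra by edge repetition
        s = np.append(s, s[-1])
    return (s[0::2] + s[1::2]) / np.sqrt(2.0)

pixel = np.linspace(0.1, 0.9, 200)        # a toy 200-band spectral signature
reduced = haar_reduce(pixel)              # 100 coefficients after one level
twice = haar_reduce(reduced)              # 50 coefficients after two levels
```

    A Daubechies filter would replace the two-tap average with a longer filter that also captures polynomial trends, at the extra cost in time the paper reports.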

  4. Spacecraft Attitude Tracking and Maneuver Using Combined Magnetic Actuators

    NASA Technical Reports Server (NTRS)

    Zhou, Zhiqiang

    2010-01-01

    The accuracy of spacecraft attitude control using magnetic actuators alone is low, on the order of 0.4-5 degrees. The key reason is that the magnetic torque is two-dimensional: it lies only in the plane perpendicular to the magnetic field vector. In this paper, novel attitude control algorithms using magnetic actuators combined with Reaction Wheel Assemblies (RWAs) or other actuators, such as thrusters, are presented. Combining magnetic actuators with one or two RWAs aligned with different body axes expands the two-dimensional control torque to three dimensions. The algorithms guarantee that the spacecraft attitude and rates track the commanded attitude precisely. A design example is presented for nadir pointing with pitch and yaw maneuvers. The results show that precise attitude tracking can be reached and that the attitude control accuracy is comparable with RWA-based attitude control. The algorithms are also useful for RWA-based attitude control: when only one or two RWAs are workable due to failures, the attitude control system can switch to the combined magnetic-actuator/RWA algorithms without entering safe mode, and the control accuracy can be maintained.
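    The geometric constraint follows directly from the magnetic torque law tau = m x B: the torque is always perpendicular to the field, so the component of any demanded torque along B must come from another actuator. A small NumPy sketch with assumed field and dipole values (not taken from the paper):

```python
import numpy as np

B = np.array([0.0, 0.0, 3.1e-5])          # local geomagnetic field, T (assumed)
m = np.array([5.0, -2.0, 1.0])            # magnetorquer dipole moment, A*m^2 (assumed)
tau_mag = np.cross(m, B)                  # magnetic control torque, tau = m x B

# The magnetic torque is confined to the plane perpendicular to B:
assert abs(tau_mag @ B) < 1e-18

# A reaction wheel can supply the component along B that magnetics cannot reach,
# expanding the two-dimensional torque authority to full three-axis control.
tau_demand = np.array([1e-4, 2e-4, -5e-5])   # demanded control torque, N*m (assumed)
b_hat = B / np.linalg.norm(B)
tau_wheel = (tau_demand @ b_hat) * b_hat     # along-B part, assigned to the wheel
tau_needed_from_mag = tau_demand - tau_wheel # in-plane part, reachable magnetically
```

    This decomposition is only the geometric idea behind actuator combination; the paper's algorithms additionally handle the time-varying field direction and closed-loop tracking.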

  5. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of stereo information for remote sensing classification, this paper proposes a stereo remote sensing feature selection method based on the artificial bee colony algorithm. Stereo remote sensing information can be described by a digital surface model (DSM) and an optical image, which contain information on three-dimensional structure and optical characteristics, respectively. Firstly, three-dimensional structural characteristics can be analyzed by 3D Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different complexities of three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features may contain a large amount of redundant information, which may not improve classification accuracy and can even cause adverse effects. To reduce information redundancy while maintaining or improving classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method effectively improves both computational efficiency and classification accuracy.
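    The bee-colony search over feature subsets can be sketched with a toy fitness function standing in for classification accuracy. The real method would evaluate a classifier on the selected 3DZD and image features; the sketch below also folds the onlooker phase into the employed-bee loop for brevity, so it is an illustration of the search scheme, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n_feat = 30
informative = np.zeros(n_feat, dtype=bool)
informative[:5] = True                    # pretend features 0-4 carry the class signal

def fitness(mask):
    """Stand-in objective: reward informative features, penalize redundancy (size)."""
    return (mask & informative).sum() - 0.1 * mask.sum()

def abc_select(n_food=10, n_iter=200, limit=50):
    """Minimal artificial-bee-colony search over binary feature masks."""
    foods = rng.random((n_food, n_feat)) < 0.5    # random initial food sources
    trials = np.zeros(n_food, dtype=int)
    for _ in range(n_iter):
        for i in range(n_food):                   # employed bees (onlooker phase merged)
            cand = foods[i].copy()
            j = rng.integers(n_feat)
            cand[j] = ~cand[j]                    # neighborhood move: flip one feature
            if fitness(cand) > fitness(foods[i]):
                foods[i], trials[i] = cand, 0     # greedy selection
            else:
                trials[i] += 1
            if trials[i] > limit:                 # scout bee: abandon a stagnant source
                foods[i] = rng.random(n_feat) < 0.5
                trials[i] = 0
    return max(foods, key=fitness)

best = abc_select()
```

    On this separable toy objective the colony reliably concentrates on the informative block while pruning redundant features, which is the behavior the paper relies on for its stereo feature vectors.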

  6. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, so that the images are correctly separated in the fusion space. The image fusion framework shows strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method achieves favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting conditions are unsatisfactory. PMID:28558064

  7. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level, so that the images are correctly separated in the fusion space. The image fusion framework shows strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method achieves favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting conditions are unsatisfactory.

  8. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
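    The emulation idea, stripped to one dimension, is to run a small number of expensive simulations at design points and interpolate the observable across parameter space with a surrogate model. A minimal Gaussian-process sketch, with a cheap stand-in function playing the role of the simulator (the kernel width, design, and target function are illustrative, not values from the Mira-Titan campaign):

    ```python
    import numpy as np

    # Emulation sketch: a handful of "simulation runs" at design points,
    # interpolated by a noise-free Gaussian-process posterior mean.

    def kernel(a, b, length=0.3):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    def simulate(theta):               # cheap stand-in for an N-body run
        return np.sin(2 * np.pi * theta)

    # "Design": a few simulation runs spanning the parameter range.
    X = np.linspace(0.0, 1.0, 8)
    y = simulate(X)

    K = kernel(X, X) + 1e-10 * np.eye(len(X))   # tiny jitter for stability
    alpha = np.linalg.solve(K, y)

    def emulate(theta):
        return kernel(np.atleast_1d(theta), X) @ alpha

    theta = 0.37
    print(abs(emulate(theta)[0] - simulate(theta)))   # small emulation error
    ```

    The design-refinement idea in the abstract corresponds to adding new rows to `X` where the emulator's predictive uncertainty is largest, then refitting.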

  9. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  10. A biomechanical modeling guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2017-03-01

    Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time and dose of 4D-CBCT are substantially higher than those of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections, reducing the imaging time and dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structural details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared with that of the original SMEIR algorithm. Qualitative and quantitative comparisons show that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications, including treatment outcome analysis.

  11. Electromagnetic navigated positioning of the maxilla after Le Fort I osteotomy in preclinical orthognathic surgery cases.

    PubMed

    Berger, Moritz; Nova, Igor; Kallus, Sebastian; Ristow, Oliver; Eisenmann, Urs; Freudlsperger, Christian; Seeberger, Robin; Hoffmann, Jürgen; Dickhaus, Hartmut

    2017-03-01

    Inaccuracies in orthognathic surgery can be caused during face-bow registration, model surgery on plaster models, and intermaxillary splint manufacturing. Electromagnetic (EM) navigation is a promising method for splintless, digitized maxillary positioning. After Le Fort I osteotomy was performed on 10 plastic skulls, the target position of the maxilla was guided by an EM navigation system. Specially implemented software illustrated the target position with real-time, multistage color-coded three-dimensional imaging. Accuracy was determined using pre- and postoperative cone beam computed tomography, which verified a navigated maxilla position discrepancy of only 0.4 mm, underlining the high accuracy of the EM system. This preclinical study demonstrates a precise digitized approach for splintless maxillary repositioning after Le Fort I osteotomy. The accuracy and intuitive visualization of the introduced EM navigation system are promising for potential daily use in orthognathic surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Normal tissue toxicity after small field hypofractionated stereotactic body radiation.

    PubMed

    Milano, Michael T; Constine, Louis S; Okunieff, Paul

    2008-10-31

    Stereotactic body radiation (SBRT) is an emerging tool in radiation oncology in which targeting accuracy is improved via the detection and processing of a three-dimensional coordinate system aligned to the target. With improved targeting accuracy, SBRT allows both the minimization of the normal tissue volume exposed to high radiation dose and the escalation of the fractional dose delivered. The goal of SBRT is to minimize toxicity while maximizing tumor control. This review discusses the basic principles of SBRT, the radiobiology of hypofractionated radiation, and the outcomes of published clinical trials of SBRT, with a focus on late toxicity after SBRT. While clinical data have shown SBRT to be safe in most circumstances, more data are needed to refine the ideal dose-volume metrics.

  13. Analytical formulation of impulsive collision avoidance dynamics

    NASA Astrophysics Data System (ADS)

    Bombardelli, Claudio

    2014-02-01

    The paper deals with the problem of impulsive collision avoidance between two colliding objects in three dimensions and assuming elliptical Keplerian orbits. Closed-form analytical expressions are provided that accurately predict the relative dynamics of the two bodies in the encounter b-plane following an impulsive delta-V manoeuvre performed by one object at a given orbit location prior to the impact and with a generic three-dimensional orientation. After verifying the accuracy of the analytical expressions for different orbital eccentricities and encounter geometries the manoeuvre direction that maximises the miss distance is obtained numerically as a function of the arc length separation between the manoeuvre point and the predicted collision point. The provided formulas can be used for high-accuracy instantaneous estimation of the outcome of a generic impulsive collision avoidance manoeuvre and its optimisation.

  14. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
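    The core subspace-pursuit step that SPIGH builds on can be sketched in a few lines: iteratively refine a K-element support set by merging the current support with the atoms best correlated with the residual, then pruning back to K via least squares. The dimensions below are toy-sized; the actual MEG source space is vastly larger, and SPIGH adds a hierarchical treatment the sketch omits.

    ```python
    import numpy as np

    # Subspace pursuit for sparse recovery: estimate a K-sparse x from
    # measurements y = A x (noise-free toy problem).

    def subspace_pursuit(A, y, K, n_iter=10):
        support = np.argsort(-np.abs(A.T @ y))[:K]
        for _ in range(n_iter):
            # Residual of the best fit on the current support.
            coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
            r = y - A[:, support] @ coef
            # Merge with the K atoms most correlated with the residual.
            merged = np.union1d(support, np.argsort(-np.abs(A.T @ r))[:K])
            coef = np.linalg.lstsq(A[:, merged], y, rcond=None)[0]
            # Prune back to the K largest coefficients.
            support = merged[np.argsort(-np.abs(coef))[:K]]
        x = np.zeros(A.shape[1])
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)   # measurement matrix
    x_true = np.zeros(100)
    x_true[[3, 27, 61]] = [1.5, -2.0, 1.0]             # K = 3 sparse signal
    y = A @ x_true

    x_hat = subspace_pursuit(A, y, K=3)
    print(np.max(np.abs(x_hat - x_true)))              # near-exact recovery
    ```

    The low per-iteration cost (a few small least-squares solves) is what gives greedy pursuit its computational advantage over convex solvers in large source spaces.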

  15. Homogeneity Pursuit

    PubMed Central

    Ke, Tracy; Fan, Jianqing; Wu, Yichao

    2014-01-01

    This paper explores the homogeneity of coefficients in high-dimensional regression, which extends the sparsity concept and is more general and suitable for many applications. Homogeneity arises when regression coefficients corresponding to neighboring geographical regions or to a similar cluster of covariates are expected to be approximately the same. Sparsity corresponds to a special case of homogeneity with one large cluster at the known atom zero. In this article, we propose a new method called clustering algorithm in regression via data-driven segmentation (CARDS) to explore homogeneity. New mathematical results are provided on the gain that can be achieved by exploring homogeneity. Statistical properties of two versions of CARDS are analyzed. In particular, the asymptotic normality of our proposed CARDS estimator is established, which reveals better estimation accuracy for homogeneous parameters than is attainable without homogeneity exploration. When our methods are combined with sparsity exploration, further efficiency can be achieved beyond the exploration of sparsity alone. This provides additional insights into the power of exploring low-dimensional structures in high-dimensional regression: homogeneity and sparsity. Our results also shed light on the properties of the fused Lasso. The newly developed method is further illustrated by simulation studies and applications to real data. Supplementary materials for this article are available online. PMID:26085701

  16. Enhancing Three-dimensional Movement Control System for Assemblies of Machine-Building Facilities

    NASA Astrophysics Data System (ADS)

    Kuzyakov, O. N.; Andreeva, M. A.

    2018-01-01

    Aspects of enhancing a three-dimensional movement control system are presented in the paper. Such a system is intended for controlling assemblies of machine-building facilities, which is a relevant issue. The system is based on a known three-dimensional movement control device with an optical principle of action, consisting of a multipoint light emitter and a light-receiver matrix. Signal processing is enhanced to increase measurement accuracy by switching from discrete to analog signals. The light-receiver matrix is divided into four areas, and the output value of each light receiver in each matrix area is proportional to its luminance level. Thus, determining the output electric signal value of each light receiver in the corresponding area allows the position of the multipoint light emitter, and hence of the tracked object, to be determined. This is done using case-based reasoning, in which a precedent is described by the integral signal value of each matrix area, the coordinates of the light receivers whose luminance level is high, and the decision to be made in that situation.

  17. Feature extraction based on semi-supervised kernel Marginal Fisher analysis and its application in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Xuan, Jianping; Shi, Tielin

    2013-12-01

    Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
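    The final classification stage of the pipeline is deliberately simple. A self-contained K-nearest-neighbor sketch on synthetic two-class features, standing in for the low-dimensional SSKMFA output (the feature values and class geometry are invented for illustration):

    ```python
    import numpy as np

    # Plain KNN on low-dimensional features: majority vote among the
    # k nearest training samples in Euclidean distance.

    def knn_predict(X_train, y_train, x, k=3):
        d = np.linalg.norm(X_train - x, axis=1)
        votes = y_train[np.argsort(d)[:k]]
        return int(np.bincount(votes).argmax())

    rng = np.random.default_rng(2)
    # Two synthetic fault classes in a 2-D feature space.
    X0 = rng.normal([0, 0], 0.3, size=(20, 2))
    X1 = rng.normal([2, 2], 0.3, size=(20, 2))
    X_train = np.vstack([X0, X1])
    y_train = np.array([0] * 20 + [1] * 20)

    print(knn_predict(X_train, y_train, np.array([1.9, 2.1])))  # → 1
    ```

    Because SSKMFA has already separated the classes in the embedded space, even this simplest classifier suffices, which is the point the abstract makes.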

  18. Metal Oxide Gas Sensor Drift Compensation Using a Two-Dimensional Classifier Ensemble

    PubMed Central

    Liu, Hang; Chu, Renzhi; Tang, Zhenan

    2015-01-01

    Sensor drift is the most challenging problem in gas sensing at present. We propose a novel two-dimensional classifier ensemble strategy to solve the gas discrimination problem, regardless of the gas concentration, with high accuracy over extended periods of time. This strategy is appropriate for multi-class classifiers that consist of combinations of pairwise classifiers, such as support vector machines. We compare the performance of the strategy with those of competing methods in an experiment based on a public dataset that was compiled over a period of three years. The experimental results demonstrate that the two-dimensional ensemble outperforms the other methods considered. Furthermore, we propose a pre-aging process inspired by that applied to the sensors to improve the stability of the classifier ensemble. The experimental results demonstrate that the weight of each multi-class classifier model in the ensemble remains fairly static before and after the addition of new classifier models to the ensemble, when a pre-aging procedure is applied. PMID:25942640

  19. Application of Tandem Two-Dimensional Mass Spectrometry for Top-Down Deep Sequencing of Calmodulin

    NASA Astrophysics Data System (ADS)

    Floris, Federico; Chiron, Lionel; Lynch, Alice M.; Barrow, Mark P.; Delsuc, Marc-André; O'Connor, Peter B.

    2018-06-01

    Two-dimensional mass spectrometry (2DMS) involves simultaneous acquisition of the fragmentation patterns of all the analytes in a mixture by correlating their precursor and fragment ions by modulating precursor ions systematically through a fragmentation zone. Tandem two-dimensional mass spectrometry (MS/2DMS) unites the ultra-high accuracy of Fourier transform ion cyclotron resonance (FT-ICR) MS/MS and the simultaneous data-independent fragmentation of 2DMS to achieve extensive inter-residue fragmentation of entire proteins. 2DMS was recently developed for top-down proteomics (TDP), and applied to the analysis of calmodulin (CaM), reporting a cleavage coverage of about 23% using infrared multiphoton dissociation (IRMPD) as fragmentation technique. The goal of this work is to expand the utility of top-down protein analysis using MS/2DMS in order to extend the cleavage coverage in top-down proteomics further into the interior regions of the protein. In this case, using MS/2DMS, the cleavage coverage of CaM increased from 23% to 42%.

  20. Support Vector Machine-Based Endmember Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippi, Anthony M; Archibald, Richard K

    Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.

  1. Numerical study of low-frequency discharge oscillations in a 5 kW Hall thruster

    NASA Astrophysics Data System (ADS)

    Le, YANG; Tianping, ZHANG; Juanjuan, CHEN; Yanhui, JIA

    2018-07-01

    A two-dimensional particle-in-cell plasma model is built in the R–Z plane to investigate the low-frequency plasma oscillations in the discharge channel of a 5 kW LHT-140 Hall thruster. In addition to the elastic, excitation, and ionization collisions between neutral atoms and electrons, the Coulomb collisions between electrons and electrons and between electrons and ions are analyzed. The sheath characteristic distortion is also corrected. Simulation results indicate the capability of the built model to reproduce the low-frequency oscillation with high accuracy. The oscillations of the discharge current and ion density produced by the model are consistent with the existing conclusions. The model predicts a frequency that is consistent with that calculated by the zero-dimensional theoretical model.

  2. Fuzzy Regression Prediction and Application Based on Multi-Dimensional Factors of Freight Volume

    NASA Astrophysics Data System (ADS)

    Xiao, Mengting; Li, Cheng

    2018-01-01

    Based on the actual development of air cargo, a multi-dimensional fuzzy regression method is used to determine the influencing factors; the three most important are GDP, total fixed-asset investment, and regular flight route mileage. Using a systems viewpoint and analogy methods, fuzzy numbers and multiple regression are combined to predict civil aviation cargo volume. Comparison with the 13th Five-Year Plan for China's Civil Aviation Development (2016-2020) shows that this method can effectively improve forecasting accuracy and reduce forecasting risk, demonstrating that the model is feasible for predicting civil aviation freight volume and has high practical significance and operability.
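    The crisp core of such a forecast is a multiple regression of freight volume on the three retained factors; the fuzzy extension replaces the crisp coefficients with fuzzy (e.g. triangular) numbers. A least-squares sketch on synthetic data (all coefficients and data below are invented for illustration, not the paper's):

    ```python
    import numpy as np

    # Multiple regression of freight volume on three factors.
    # Synthetic data: the "true" coefficients are chosen arbitrarily.

    rng = np.random.default_rng(3)
    n = 30
    gdp = rng.uniform(50, 100, n)          # GDP
    invest = rng.uniform(10, 40, n)        # fixed-asset investment
    mileage = rng.uniform(200, 500, n)     # flight route mileage
    freight = 2.0 * gdp + 1.5 * invest + 0.01 * mileage + rng.normal(0, 0.5, n)

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n), gdp, invest, mileage])
    beta, *_ = np.linalg.lstsq(X, freight, rcond=None)
    print(np.round(beta[1:], 2))           # close to [2.0, 1.5, 0.01]
    ```

    In the fuzzy version each recovered coefficient would carry a spread, so the forecast is an interval rather than a point, which is what reduces the forecasting risk the abstract mentions.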

  3. Three-Dimensional Incompressible Navier-Stokes Flow Computations about Complete Configurations Using a Multiblock Unstructured Grid Approach

    NASA Technical Reports Server (NTRS)

    Sheng, Chunhua; Hyams, Daniel G.; Sreenivas, Kidambi; Gaither, J. Adam; Marcum, David L.; Whitfield, David L.

    2000-01-01

    A multiblock unstructured grid approach is presented for solving three-dimensional incompressible inviscid and viscous turbulent flows about complete configurations. The artificial compressibility form of the governing equations is solved by a node-based, finite volume implicit scheme which uses a backward Euler time discretization. Point Gauss-Seidel relaxations are used to solve the linear system of equations at each time step. This work applies a multiblock strategy to the solution procedure, which greatly improves the efficiency of the algorithm by reducing the memory requirements by a factor of 5 relative to the single-grid algorithm while maintaining similar convergence behavior. The numerical accuracy of the solutions is assessed by comparison with experimental data for a submarine with stern appendages and for a high-lift configuration.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang; Qin, Hong; Liu, Jian

    An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon–matter interactions described by the Schrödinger–Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. Here, this new numerical capability enables us to carry out first-principle based simulation study of important photon–matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.
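    The benefit of structure preservation can be seen already in one degree of freedom: a symplectic (semi-implicit Euler) integrator keeps the energy error of a harmonic oscillator bounded for arbitrarily many time-steps instead of letting it drift, a finite-dimensional cartoon of the property the Schrödinger–Maxwell algorithms provide. The oscillator and step size are illustrative.

    ```python
    import numpy as np

    # Semi-implicit (symplectic) Euler for H = (p^2 + q^2)/2.
    # The energy error stays O(dt) for all time, rather than growing.

    def symplectic_euler(q, p, dt, steps):
        for _ in range(steps):
            p = p - dt * q      # kick:  dp/dt = -dH/dq = -q
            q = q + dt * p      # drift: dq/dt =  dH/dp =  p
        return q, p

    def energy(q, p):
        return 0.5 * (p * p + q * q)

    q0, p0 = 1.0, 0.0
    q, p = symplectic_euler(q0, p0, dt=0.05, steps=100_000)

    # After 100k steps the energy error is still bounded, not accumulated.
    print(abs(energy(q, p) - energy(q0, p0)))
    ```

    Explicit Euler on the same problem multiplies the energy by roughly (1 + dt²) per step, so it diverges over the same horizon; the symplectic update instead conserves a nearby "shadow" Hamiltonian exactly.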

  5. A Discontinuous Galerkin Finite Element Method for Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Hu, Changqing; Shu, Chi-Wang

    1998-01-01

    In this paper, we present a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility to treat complicated geometry using arbitrary triangulations, can achieve high-order accuracy with a local, compact stencil, and is well suited for efficient parallel implementation. One- and two-dimensional numerical examples are given to illustrate the capability of the method.

  6. Multigrid solution of internal flows using unstructured solution adaptive meshes

    NASA Technical Reports Server (NTRS)

    Smith, Wayne A.; Blake, Kenneth R.

    1992-01-01

    This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.

  7. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Unlike traditional analysis methods in one-dimensional space, this study employs computational methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map, forming a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-distributed stochastic neighbor embedding (t-SNE) is adopted to reduce the dimensionality of the feature vector. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. 
PMID:27711246
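    The last stage of the pipeline, the probabilistic neural network, is essentially a Parzen-window classifier: each class score is a Gaussian-kernel density over that class's training features, and the highest density wins. A sketch on synthetic 2-D features, standing in for the reduced t-SNE embedding (the feature values, class geometry, and kernel width are invented for illustration, not from the pump experiments):

    ```python
    import numpy as np

    # Probabilistic neural network (PNN) classification: score each class
    # by a Gaussian-kernel density over its training features.

    def pnn_predict(X_train, y_train, x, sigma=0.5):
        scores = []
        for c in np.unique(y_train):
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
        return int(np.argmax(scores))

    rng = np.random.default_rng(4)
    X_norm = rng.normal([0, 0], 0.4, size=(25, 2))     # healthy pump
    X_fault = rng.normal([3, 1], 0.4, size=(25, 2))    # faulty pump
    X_train = np.vstack([X_norm, X_fault])
    y_train = np.array([0] * 25 + [1] * 25)

    print(pnn_predict(X_train, y_train, np.array([2.8, 1.2])))  # → 1
    ```

    The PNN needs no iterative training, only the stored feature vectors, which matches the pipeline's goal of fast, automatic diagnosis.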

  8. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Unlike traditional analysis methods in one-dimensional space, this study employs computational methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map, forming a high-dimensional feature vector. To highlight the main fault features and reduce subsequent computing resources, t-distributed stochastic neighbor embedding (t-SNE) is adopted to reduce the dimensionality of the feature vector. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  9. Problems and Limitations of Satellite Image Orientation for Determination of Height Models

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2017-05-01

    The usual satellite image orientation is based on bias corrected rational polynomial coefficients (RPC). The RPC are describing the direct sensor orientation of the satellite images. The locations of the projection centres today are without problems, but an accuracy limit is caused by the attitudes. Very high resolution satellites today are very agile, able to change the pointed area over 200km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third order RPC, even if it is recorded by the gyros. Only a correction of the image geometry may help, but usually this will not be done. The first indication of jitter problems is shown by systematic errors of the y-parallaxes (py) for the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence to the ground coordinates, but similar problems can be expected for the x-parallaxes, determining directly the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them have clear jitter effects. In addition linear trends of py can be seen. Linear trends in py and tilts in of computed height models may be caused by limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually does not cause some limitations but the identification of the GCPs in the images may be difficult. With 2-dimensional bias corrected RPC-orientation by affinity transformation tilts of the generated height models may be caused, but due to large affine image deformations some satellites, as Cartosat-1, have to be handled with bias correction by affinity transformation. 
Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, respecting the object height better than the 2-dimensional orientation does. The 3-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of a poor GCP distribution it may also cause negative effects. For some of the used satellites the bias correction by affinity transformation showed advantages, but for some others the bias correction by shift led to a better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the used data sets accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major accuracy influence is caused by tilts of the height models. Some height model undulations reach up to 50 % of the ground sampling distance (GSD); this is not negligible, but it is not strongly reflected in the standard deviations of the height. In any case, an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate for possible negative effects of the type of bias correction or of 2-dimensional orientation versus 3-dimensional handling.
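The bias correction by affinity transformation discussed above amounts to a plain least-squares fit of GCP image observations against the RPC-predicted image coordinates. A minimal sketch, assuming both coordinate sets are already available as arrays (all function names are hypothetical, not from the paper):

```python
import numpy as np

def fit_affine_bias(obs_xy, rpc_xy):
    """Least-squares 2D affine bias correction.
    obs_xy: (n, 2) image coordinates measured at the GCPs.
    rpc_xy: (n, 2) image coordinates predicted by the RPC projection.
    Returns a (2, 3) affine matrix A such that obs ~ A @ [x, y, 1]."""
    n = rpc_xy.shape[0]
    design = np.hstack([rpc_xy, np.ones((n, 1))])   # one row [x, y, 1] per GCP
    coeffs, *_ = np.linalg.lstsq(design, obs_xy, rcond=None)
    return coeffs.T                                  # (2, 3)

def apply_affine(A, xy):
    """Apply a (2, 3) affine matrix to (n, 2) points."""
    return (A @ np.hstack([xy, np.ones((xy.shape[0], 1))]).T).T
```

A bias correction by shift only, as compared in the abstract, would correspond to fixing the linear part of `A` to the identity and fitting just the translation column.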

  10. A robust omnifont open-vocabulary Arabic OCR system using pseudo-2D-HMM

    NASA Astrophysics Data System (ADS)

    Rashwan, Abdullah M.; Rashwan, Mohsen A.; Abdel-Hameed, Ahmed; Abdou, Sherif; Khalil, A. H.

    2012-01-01

Recognizing old documents is highly desirable since the demand for quickly searching millions of archived documents has recently increased. Using Hidden Markov Models (HMMs) has proven to be a good approach to the main problems of recognizing typewritten Arabic characters. Although these attempts achieved remarkable success for omnifont OCR under very favorable conditions, they did not achieve the same performance in practical conditions, i.e. on noisy documents. In this paper we present an omnifont, large-vocabulary Arabic OCR system using a Pseudo Two-Dimensional Hidden Markov Model (P2DHMM), which is a generalization of the HMM. The P2DHMM offers a more efficient way to model Arabic characters; such a model offers both minimal dependency on the font size/style (omnifont) and a high level of robustness against noise. The evaluation results of this system are very promising compared to a baseline HMM system and the best OCRs available on the market (Sakhr and NovoDynamics). The recognition accuracy of the P2DHMM classifier was measured against the classic HMM classifier: the average word accuracy rates for the P2DHMM and HMM classifiers are 79% and 66%, respectively. The overall system accuracy was measured against the Sakhr and NovoDynamics OCR systems: the average word accuracy rates for P2DHMM, NovoDynamics, and Sakhr are 74%, 71%, and 61%, respectively.

  11. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    NASA Astrophysics Data System (ADS)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose that a locally-applied Local Orthogonal Least Squares (LOLS) model be used as an initial feature extraction step before the application of LLE. By constructing least squares regression under orthogonal constraints we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method reduces the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveals the superiority of the proposed method under the SSPP constraint.
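The LLE step applied to the LOLS images can be sketched in plain NumPy following the standard Roweis-Saul formulation; the LOLS extraction itself is omitted here, and the input is treated as generic feature vectors (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding (Roweis & Saul) in plain NumPy.
    X: (n_samples, n_features) array; returns (n_samples, n_components)."""
    n = X.shape[0]
    # pairwise squared distances -> k nearest neighbors (excluding self)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :n_neighbors]
    # reconstruction weights: solve the regularized local Gram system per point
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                              # centred neighbourhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)      # regularisation
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()                        # weights sum to 1
    # embedding: bottom eigenvectors of (I - W)^T (I - W), skipping the trivial one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

The cost the abstract refers to is visible here: the pairwise-distance and eigendecomposition steps scale poorly with sample count, which is why a compact initial representation such as the proposed LOLS images helps.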

  12. Development of a high performance surface slope measuring system for two-dimensional mapping of x-ray optics

    NASA Astrophysics Data System (ADS)

    Lacey, Ian; Adam, Jérôme; Centers, Gary P.; Gevorkyan, Gevork S.; Nikitin, Sergey M.; Smith, Brian V.; Yashchuk, Valeriy V.

    2017-09-01

The research and development work on the Advanced Light Source (ALS) upgrade to a diffraction limited storage ring light source, ALS-U, has brought into focus the need for near-perfect x-ray optics, capable of delivering light to experiments without significant degradation of brightness and coherence. The desired surface quality is characterized by residual (after subtraction of an ideal shape) surface slope and height errors of <50-100 nrad (rms) and <1-2 nm (rms), respectively. The ex-situ metrology that supports the optimal usage of the optics at the beamlines has to offer even higher measurement accuracy. At the ALS X-Ray Optics Laboratory, we are developing a new surface slope profiler, the Optical Surface Measuring System (OSMS), capable of two-dimensional (2D) surface-slope metrology at an absolute accuracy below the above optical specification. In this article we provide the results of a comprehensive characterization of the key elements of the OSMS, a NOM-like high-precision granite gantry system with air-bearing translation and a custom-made precision air-bearing stage for tilting and flipping the surface under test. We show that the high performance of the gantry system allows the implementation of an original scanning mode for 2D mapping. We demonstrate the efficiency of the developed 2D mapping via comparison with 1D slope measurements performed on the same hyperbolic test mirror using the ALS developmental long trace profiler. The details of the OSMS design and the developed measuring techniques are also provided.

  13. Optimizing spectral resolutions for the classification of C3 and C4 grass species, using wavelengths of known absorption features

    NASA Astrophysics Data System (ADS)

    Adjorlolo, Clement; Cho, Moses A.; Mutanga, Onisimo; Ismail, Riyad

    2012-01-01

Hyperspectral remote-sensing approaches are suitable for detecting differences in the phenology and composition of 3-carbon (C3) and 4-carbon (C4) grass species. However, the application of hyperspectral sensors to vegetation has been hampered by high-dimensionality, spectral redundancy, and multicollinearity problems. In this experiment, resampling of hyperspectral data to wider wavelength intervals around a few band-centers sensitive to the biophysical and biochemical properties of C3 or C4 grass species is proposed. The approach accounts for an inherent property of vegetation spectral response: the asymmetrical nature of the inter-band correlations between a waveband and its shorter- and longer-wavelength neighbors. It involves constructing a curve of the weighting threshold of correlation (Pearson's r) between a chosen band-center and its neighbors, as a function of wavelength. In addition, for comparison with the proposed method, the data were resampled to the band configurations of several multispectral sensors (the ASTER, GeoEye-1, IKONOS, QuickBird, RapidEye, SPOT 5, and WorldView-2 satellites). The resulting datasets were analyzed using the random forest algorithm. The proposed resampling method achieved improved classification accuracy (κ=0.82) compared to the resampled multispectral datasets (κ=0.78, 0.65, 0.62, 0.59, 0.65, 0.62, and 0.76, respectively). Overall, the results from this study demonstrate that spectral resolutions for C3 and C4 grasses can be optimized and controlled for high dimensionality and multicollinearity problems while still yielding high classification accuracies. The findings also provide a sound basis for programming wavebands for future sensors.
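The correlation-driven aggregation around a band-center can be sketched as follows. This is an illustrative simplification of the authors' weighting-threshold curve, assuming spectra are stored as a samples-by-bands array; the asymmetry the abstract notes shows up in each side of the center stopping independently (all names hypothetical):

```python
import numpy as np

def resample_around_center(spectra, center, r_threshold=0.9):
    """Aggregate the contiguous neighbours of a band-center whose Pearson
    correlation with the centre band stays above a threshold.
    spectra: (n_samples, n_bands) reflectance array.
    Returns (mean over the selected window, (lo, hi) band indices)."""
    n_bands = spectra.shape[1]
    r = np.corrcoef(spectra, rowvar=False)[center]   # r of every band vs. the centre
    lo = center
    while lo - 1 >= 0 and r[lo - 1] >= r_threshold:  # grow toward shorter wavelengths
        lo -= 1
    hi = center
    while hi + 1 < n_bands and r[hi + 1] >= r_threshold:  # grow toward longer ones
        hi += 1
    return spectra[:, lo:hi + 1].mean(axis=1), (lo, hi)
```

The resulting aggregated bands (one per chosen band-center) would then feed a classifier such as the random forest used in the study.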

  14. Evaluating a Novel 3D Stereoscopic Visual Display for Transanal Endoscopic Surgery: A Randomized Controlled Crossover Study.

    PubMed

    Di Marco, Aimee N; Jeyakumar, Jenifa; Pratt, Philip J; Yang, Guang-Zhong; Darzi, Ara W

    2016-01-01

To compare surgical performance with transanal endoscopic surgery (TES) using a novel 3-dimensional (3D) stereoscopic viewer against the current modalities of a 3D stereoendoscope, and 3D and 2-dimensional (2D) high-definition monitors. TES is accepted as the primary treatment for selected rectal tumors. Current TES systems offer a 2D monitor, or a 3D image viewed directly via a stereoendoscope, necessitating an uncomfortable operating position. To address this and provide a platform for future image augmentation, a 3D stereoscopic display was created. Forty participants, of mixed experience level, completed a simulated TES task using 4 visual displays (the novel stereoscopic viewer and the currently utilized stereoendoscope, 3D, and 2D high-definition monitors) in a randomly allocated order. Primary outcome measures were time taken, path length, and accuracy. Secondary outcomes were task workload and participant questionnaire results. Median time taken and path length were significantly shorter for the novel viewer versus 2D and 3D, and not significantly different from the traditional stereoendoscope. Significant differences were found in accuracy, task workload, and questionnaire assessment in favor of the novel viewer, as compared to all 3 modalities. This novel 3D stereoscopic viewer allows surgical performance in TES equivalent to that achieved using the current stereoendoscope and superior to standard 2D and 3D displays, but with lower physical and mental demands for the surgeon. Participants expressed a preference for this system, ranking it more highly on a questionnaire. Clinical translation of this work has begun, with the novel viewer being used in 5 TES patients.

  15. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2014-01-01

Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-inch chord, 2-D straight wing with a NACA 23012 airfoil section. For six ice accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10^6 and a Mach number of 0.18 with an 18-inch chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For four of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3% with corresponding differences in stall angle of approximately one degree or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil.
For several of the ice-accretion cases tested, the aerodynamics is known to depend upon the small, three dimensional features of the ice. These data show that the laser-scan and rapid-prototype manufacturing approach is capable of replicating these ice features within the reported accuracies of the laser-scan measurement and rapid-prototyping method; thus providing a new capability for high-fidelity ice-accretion documentation and artificial ice-shape fabrication for icing research.

  16. Reappraisal of Pediatric Diastatic Skull Fractures in the 3-Dimensional CT Era: Clinical Characteristics and Comparison of Diagnostic Accuracy of Simple Skull X-Ray, 2-Dimensional CT, and 3-Dimensional CT.

    PubMed

    Sim, Sook Young; Kim, Hyun Gi; Yoon, Soo Han; Choi, Jong Wook; Cho, Sung Min; Choi, Mi Sun

    2017-12-01

Diastatic skull fractures (DSFs) in children are difficult to detect in skull radiographs before they develop into growing skull fractures; therefore, little information is available on this topic. However, recent advances in 3-dimensional (3D) computed tomography (CT) imaging technology have enabled more accurate diagnoses of almost all forms of skull fracture. The present study was undertaken to document the clinical characteristics of DSFs in children and to determine whether 3D CT enhances diagnostic accuracy. Two hundred and ninety-two children younger than 12 years with skull fractures underwent simple skull radiography, 2-dimensional (2D) CT, and 3D CT. Results were compared with respect to fracture type, location, associated lesions, and accuracy of diagnosis. DSFs were diagnosed in 44 (15.7%) of the children with skull fractures. Twenty-two patients had DSFs only, and the other 22 had DSFs combined with compound or mixed skull fractures. The most common fracture locations were the occipitomastoid (25%) and lambdoid (15.9%). Accompanying lesions consisted of subgaleal hemorrhages (42/44), epidural hemorrhages (32/44), pneumocephalus (17/44), and subdural hemorrhages (3/44). A total of 17 surgical procedures were performed on 15 of the 44 patients. Fourteen and 19 patients were confirmed to have DSFs by skull radiography and 2D CT, respectively, but 3D CT detected DSFs in 43 of the 44 children (P < 0.001). 3D CT was found to be markedly superior to skull radiography or 2D CT for detecting DSFs. This finding indicates that 3D CT should be used routinely rather than 2D CT for the assessment of pediatric head trauma. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Positivity-preserving cell-centered Lagrangian schemes for multi-material compressible flows: From first-order to high-orders. Part I: The one-dimensional case

    NASA Astrophysics Data System (ADS)

    Vilar, François; Shu, Chi-Wang; Maire, Pierre-Henri

    2016-05-01

One of the main issues in the field of numerical schemes is to combine robustness with accuracy. Considering gas dynamics, numerical approximations may generate negative density or pressure, which may lead to nonlinear instability and crash of the code. This phenomenon is even more critical in a Lagrangian formalism, since the grid moves and is deformed during the calculation. Furthermore, most of the problems studied in this framework contain very intense rarefaction and shock waves. In this paper, the admissibility of numerical solutions obtained by high-order finite-volume-scheme-based methods, such as the discontinuous Galerkin (DG) method and the essentially non-oscillatory (ENO) and weighted ENO (WENO) finite volume schemes, is addressed in the one-dimensional Lagrangian gas dynamics framework. After briefly recalling how to derive Lagrangian forms of the 1D gas dynamics system of equations, a discussion on positivity-preserving approximate Riemann solvers, ensuring that first-order finite volume schemes are positive, is given. This study is conducted for both ideal gas and non-ideal gas equations of state (EOS), such as the Jones-Wilkins-Lee (JWL) EOS or the Mie-Grüneisen (MG) EOS, and relies on two different techniques: either a particular definition of the local approximation of the acoustic impedances arising from the approximate Riemann solver, or an additional time step constraint relative to the cell volume variation. Then, making use of the work presented in [89,90,22], this positivity study is extended to high orders of accuracy, where new time step constraints are obtained and proper limitation is required. Through this new procedure, scheme robustness is highly improved and hence new problems can be tackled. Numerical results are provided to demonstrate the effectiveness of these methods. This paper is the first part of a series of two.
The whole analysis presented here is extended to the two-dimensional case in [85], and is shown to apply to a wide range of numerical schemes in the literature, such as those presented in [19,64,15,82,84].
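The "proper limitation" required at high orders can be illustrated with the standard linear-scaling (Zhang-Shu type) limiter, which squeezes the in-cell polynomial toward its positive cell average just enough to restore admissibility. This is a generic sketch of that technique, not the authors' exact procedure:

```python
import numpy as np

def squeeze_limiter(node_vals, cell_avg, eps=1e-12):
    """Linear-scaling positivity limiter (Zhang-Shu type) for one cell.
    node_vals: values of the in-cell polynomial (e.g. density) at the
    quadrature points; cell_avg: the cell average, assumed > eps.
    The polynomial is scaled toward the cell average just enough that
    every node value stays >= eps; the cell average is preserved."""
    m = node_vals.min()
    if m >= eps:
        return node_vals                        # already admissible
    theta = (cell_avg - eps) / (cell_avg - m)   # scaling factor in (0, 1]
    return cell_avg + theta * (node_vals - cell_avg)
```

Because the scaling is linear about the cell average, conservation is untouched while negative nodal values are lifted to the floor `eps`; in a full scheme the same scaling would be applied to all conserved variables consistently.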

  18. Distributed Neural Processing Predictors of Multi-dimensional Properties of Affect

    PubMed Central

    Bush, Keith A.; Inman, Cory S.; Hamann, Stephan; Kilts, Clinton D.; James, G. Andrew

    2017-01-01

    Recent evidence suggests that emotions have a distributed neural representation, which has significant implications for our understanding of the mechanisms underlying emotion regulation and dysregulation as well as the potential targets available for neuromodulation-based emotion therapeutics. This work adds to this evidence by testing the distribution of neural representations underlying the affective dimensions of valence and arousal using representational models that vary in both the degree and the nature of their distribution. We used multi-voxel pattern classification (MVPC) to identify whole-brain patterns of functional magnetic resonance imaging (fMRI)-derived neural activations that reliably predicted dimensional properties of affect (valence and arousal) for visual stimuli viewed by a normative sample (n = 32) of demographically diverse, healthy adults. Inter-subject leave-one-out cross-validation showed whole-brain MVPC significantly predicted (p < 0.001) binarized normative ratings of valence (positive vs. negative, 59% accuracy) and arousal (high vs. low, 56% accuracy). We also conducted group-level univariate general linear modeling (GLM) analyses to identify brain regions whose response significantly differed for the contrasts of positive versus negative valence or high versus low arousal. Multivoxel pattern classifiers using voxels drawn from all identified regions of interest (all-ROIs) exhibited mixed performance; arousal was predicted significantly better than chance but worse than the whole-brain classifier, whereas valence was not predicted significantly better than chance. Multivoxel classifiers derived using individual ROIs generally performed no better than chance. Although performance of the all-ROI classifier improved with larger ROIs (generated by relaxing the clustering threshold), performance was still poorer than the whole-brain classifier. 
These findings support a highly distributed model of neural processing for the affective dimensions of valence and arousal. Finally, joint error analyses of the MVPC hyperplanes encoding valence and arousal identified regions within the dimensional affect space where multivoxel classifiers exhibited the greatest difficulty encoding brain states – specifically, stimuli of moderate arousal and high or low valence. In conclusion, we highlight new directions for characterizing affective processing for mechanistic and therapeutic applications in affective neuroscience. PMID:28959198
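The inter-subject leave-one-out cross-validation scheme used above can be sketched generically: train on all subjects but one, test on the held-out one, and repeat. A toy nearest-centroid classifier stands in for the whole-brain MVPC here (both functions are illustrative, not the study's pipeline):

```python
import numpy as np

def loo_accuracy(X, y, classify):
    """Leave-one-out cross-validation: fit on all-but-one sample,
    predict the held-out sample, repeat for every sample."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        hits += classify(X[mask], y[mask], X[i]) == y[i]
    return hits / len(y)

def nearest_centroid(X_tr, y_tr, x):
    """Toy stand-in classifier: assign x to the class with the closest mean."""
    classes = np.unique(y_tr)
    cents = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    return classes[np.argmin(((cents - x) ** 2).sum(axis=1))]
```

Chance level for the binarized valence/arousal ratings is 50%, which is the baseline the reported 59% and 56% accuracies are tested against.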

  19. Dimensional accuracy of resultant casts made by a monophase, one-step and two-step, and a novel two-step putty/light-body impression technique: an in vitro study.

    PubMed

    Caputi, Sergio; Varvara, Giuseppe

    2008-04-01

Dimensional accuracy when making impressions is crucial to the quality of fixed prosthodontic treatment, and the impression technique is a critical factor affecting this accuracy. The purpose of this in vitro study was to compare the dimensional accuracy of a monophase, 1- and 2-step putty/light-body, and a novel 2-step injection impression technique. A stainless steel model with 2 abutment preparations was fabricated, and impressions were made 15 times with each technique. All impressions were made with an addition-reaction silicone impression material (Aquasil) and a stock perforated metal tray. The monophase impressions were made with regular body material. The 1-step putty/light-body impressions were made with simultaneous use of putty and light-body materials. The 2-step putty/light-body impressions were made with 2-mm-thick resin-prefabricated copings. The 2-step injection impressions were made with simultaneous use of putty and light-body materials. In this injection technique, after removing the preliminary impression, a hole was made through the polymerized material at each abutment edge, to coincide with holes present in the stock trays. Extra-light-body material was then added to the preliminary impression and further injected through the hole after reinsertion of the preliminary impression on the stainless steel model. The accuracy of the 4 different impression techniques was assessed by measuring 3 dimensions (intra- and interabutment) (5-μm accuracy) on stone casts poured from the impressions of the stainless steel model. The data were analyzed by 1-way ANOVA and the Student-Newman-Keuls test (α=.05). The stone dies obtained with all the techniques had significantly larger dimensions as compared to those of the stainless steel model (P<.01). The order from highest to lowest deviation from the stainless steel model was: monophase, 1-step putty/light body, 2-step putty/light body, and 2-step injection.
Significant differences among all of the groups for both absolute dimensions of the stone dies, and their percent deviations from the stainless steel model (P<.01), were noted. The 2-step putty/light-body and 2-step injection techniques were the most dimensionally accurate impression methods in terms of resultant casts.

  20. Three-Dimensional High-Order Spectral Volume Method for Solving Maxwell's Equations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.

    2004-01-01

A three-dimensional, high-order, conservative, and efficient discontinuous spectral volume (SV) method for the solution of Maxwell's equations on unstructured grids is presented. The concept of discontinuous, high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) method, but instead of using a Galerkin finite-element formulation, the SV method is based on a finite-volume approach to attain a simpler formulation. Conventional unstructured finite-volume methods require data reconstruction based on the least-squares formulation using neighboring cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In the SV method, one starts with a relatively coarse grid of triangles or tetrahedra, called spectral volumes (SVs), and partitions each SV into a number of structured subcells, called control volumes (CVs), that support a polynomial expansion of a desired degree of precision. The unknowns are cell averages over CVs. If all the SVs are partitioned in a geometrically similar manner, the reconstruction becomes universal as a weighted sum of unknowns, and only a few universal coefficients need to be stored for the surface integrals over CV faces. Since the solution is discontinuous across the SV boundaries, a Riemann solver is necessary to maintain conservation. In the paper, multi-parameter and symmetric SV partitions, up to quartic for the triangle and cubic for the tetrahedron, are first presented. The corresponding weight coefficients for CV face integrals in terms of CV cell averages for each partition are analytically determined. 
These discretization formulas are then applied to the integral form of the Maxwell equations. All numerical procedures for the outer boundary, material interfaces, zonal interfaces, and interior SV faces are unified with a single characteristic formulation. Load balancing in a massively parallel computing environment is therefore easier to achieve. A parameter is introduced in the Riemann solver to control the strength of the smoothing term. Important aspects of the data structure and its effects on communication and the optimum use of cache memory are discussed. Results are presented for plane TE and TM waves incident on a perfectly conducting cylinder for up to fifth order of accuracy, and for a plane wave incident on a perfectly conducting sphere for up to fourth order of accuracy. Comparisons are made with exact solutions for these cases.

  1. Relation between number of component views and accuracy of left ventricular mass determined by three-dimensional echocardiography.

    PubMed

    Chuang, Michael L; Salton, Carol J; Hibberd, Mark G; Manning, Warren J; Douglas, Pamela S

    2007-05-01

Three-dimensional echocardiography (3DE) allows the accurate determination of left ventricular (LV) mass, but the optimal number of component or extracted 2-dimensional (2D) image planes that should be used to calculate LV mass is not known. This study was performed to determine the relation between the number of 2D image planes used for 3DE and the accuracy of LV mass, using cardiovascular magnetic resonance (CMR) imaging as the reference standard. Three-dimensional echocardiography data sets were analyzed using 4, 6, 8, 10 and 20 component 2D planes as well as biplane 2D echocardiography and CMR in 25 subjects with a variety of LV pathologies. Repeated-measures analysis of variance and the Bland-Altman method were used to compare measures of LV mass. To further assess the potential clinical impact of reducing the number of component image planes used for 3DE, the number of discrepancies between CMR and each of the 3DE estimates of LV mass at prespecified levels (i.e., ≥5%, ≥10%, and ≥20% difference from CMR LV mass) was tabulated. The mean LV mass by magnetic resonance imaging was 177 ± 56 g (range 91 to 316). Biplane 2-dimensional echocardiography significantly underestimated CMR LV mass (p <0.05), but LV mass by 3DE was not statistically different from that by CMR regardless of the number of planes used. However, error variability and Bland-Altman 95% confidence intervals decreased with the use of additional image planes. In conclusion, transthoracic 3DE measures LV mass more accurately than biplane 2-dimensional echocardiography when ≥6 component 2D image planes are used. The use of >6 planes further increases the accuracy of 3DE, but at the cost of greater analysis time and potentially increased scanning times.
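The Bland-Altman agreement analysis used in the study reduces to the bias of the paired differences and its 95% limits of agreement; a minimal sketch (function name hypothetical):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between two measurement methods (e.g. a 3DE
    and a CMR estimate of LV mass, in grams): returns the bias (mean of
    the paired differences) and the 95% limits of agreement
    (bias +/- 1.96 SD of the differences)."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)   # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrower limits of agreement with more component planes correspond to the decreasing error variability the abstract reports.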

  2. K-nearest neighbors based methods for identification of different gear crack levels under different motor speeds and loads: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    2016-03-01

Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal-processing-based methods mainly require expertise to explain gear fault signatures, which is usually not easy for ordinary users. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments demonstrates that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads.
Based on the new significant statistical features, some other popular statistical models including linear discriminant analysis, quadratic discriminant analysis, classification and regression tree and naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection of K-nearest neighbors are thoroughly investigated.
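The feature-then-KNN pipeline can be sketched with a handful of generic time-domain statistics standing in for the 620 db44 wavelet-packet features, plus a plain majority-vote KNN classifier; this is an illustrative simplification, not the authors' feature set:

```python
import numpy as np

def stat_features(sig):
    """A few common time-domain statistics of a vibration signal
    (stand-ins for the paper's 620 wavelet-packet features)."""
    rms = np.sqrt((sig ** 2).mean())
    return np.array([
        sig.mean(),
        sig.std(),
        rms,
        np.abs(sig).max() / rms,                                  # crest factor
        ((sig - sig.mean()) ** 4).mean() / sig.var() ** 2,        # kurtosis
    ])

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain majority-vote K-nearest-neighbours classifier."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return np.array([np.bincount(y_train[idx]).argmax() for idx in nearest])
```

In the paper, a dimensionality-reduction step sits between the feature construction and the KNN classifier to strip the redundant, highly correlated features; that step is omitted here for brevity.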

  3. Engineering of charge carriers via a two-dimensional heterostructure to enhance the thermoelectric figure of merit.

    PubMed

    Ding, Guangqian; Wang, Cong; Gao, Guoying; Yao, Kailun; Dun, Chaochao; Feng, Chunbao; Li, Dengfeng; Zhang, Gang

    2018-04-19

    High band degeneracy and glassy phonon transport are two remarkable features of highly efficient thermoelectric (TE) materials. The former promotes the power factor, while the latter aims to break the lower limit of lattice thermal conductivity through phonon scattering. Herein, we use the unique possibility offered by a two-dimensional superlattice-monolayer structure (SLM) to engineer the band degeneracy, charge density and phonon spectrum to maximize the thermoelectric figure of merit (ZT). First-principles calculations with Boltzmann transport equations reveal that the conduction bands of ZrSe2/HfSe2 SLM possess a highly degenerate level which gives a high n-type power factor; at the same time, the stair-like density of states yields a high Seebeck coefficient. These characteristics are absent in the individual monolayers. In addition, the SLM shows a suppressed lattice thermal conductivity along the superlattice period as phonons are effectively scattered by the interfaces. An intrinsic ZT of 5.3 (300 K) is achieved in n-type SLM, and it is 3.2 in the p-type counterpart. Compared with the theoretical predictions calculated with the same level of accuracy, these values are at least four-fold higher than those in the two parent materials, monolayer ZrSe2 and HfSe2. Our results provide a new strategy for the maximum thermoelectric performance, and clearly demonstrate the advantage of two-dimensional material heterostructures in the application of renewable energy.
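The figure of merit maximized above follows the standard definition ZT = S²σT/κ, with the power factor S²σ in the numerator and the total thermal conductivity in the denominator; a one-line sketch with illustrative (not paper-derived) values:

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
    seebeck: Seebeck coefficient S (V/K); sigma: electrical conductivity (S/m);
    kappa: total (lattice + electronic) thermal conductivity (W/(m K));
    temperature: absolute temperature T (K)."""
    return seebeck ** 2 * sigma * temperature / kappa
```

The two levers discussed in the abstract map directly onto this formula: band degeneracy raises the power factor S²σ, while interface phonon scattering lowers κ.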

  4. One-dimensional high-order compact method for solving Euler's equations

    NASA Astrophysics Data System (ADS)

    Mohamad, M. A. H.; Basri, S.; Basuno, B.

    2012-06-01

    In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most famous and relevant are those based on flux vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new shock tube test cases for compressible flows, namely the receding flow and shock wave problems, were not investigated in that work. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. Discretization of the convective flux terms is based on a hybrid flux-vector splitting, known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting with the robustness of flux-vector splitting. The AUSM scheme, applied within the third-order compact finite difference framework, was then analyzed in detail. For the first-order schemes in the one-dimensional problem, an explicit time integration method is adopted. In addition, the developed and modified source code for one-dimensional flow is validated with four test cases, namely unsteady shock tube flow, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results were also used to verify that the corresponding Riemann problems are correctly identified.
Further analysis compared the characteristics of the AUSM scheme against experimental results obtained from previous works, and against computational results generated by the van Leer, KFVS and AUSMPW schemes. There is a remarkable improvement in the resolution of shocks, contact discontinuities and rarefaction waves with the extension of the AUSM scheme from first-order to third-order accuracy.
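    The AUSM idea itself is compact: split the interface Mach number and pressure into left/right contributions, then upwind the convected quantities by the sign of the interface Mach number. A minimal first-order flux sketch for the 1D Euler equations (illustrative only; the paper embeds this flux in a third-order compact framework):

```python
import numpy as np

GAMMA = 1.4

def ausm_flux(left, right):
    """AUSM interface flux for the 1D Euler equations.
    left/right: (rho, u, p) primitive states at a cell interface."""
    def split(rho, u, p):
        a = np.sqrt(GAMMA * p / rho)                        # speed of sound
        M = u / a                                           # Mach number
        H = GAMMA / (GAMMA - 1.0) * p / rho + 0.5 * u * u   # total enthalpy
        phi = np.array([rho, rho * u, rho * H])             # convected vector
        if abs(M) <= 1.0:                                   # subsonic splittings
            Mp = 0.25 * (M + 1.0) ** 2
            Mm = -0.25 * (M - 1.0) ** 2
            pp = 0.25 * p * (M + 1.0) ** 2 * (2.0 - M)
            pm = 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)
        else:                                               # supersonic: one-sided
            Mp = 0.5 * (M + abs(M))
            Mm = 0.5 * (M - abs(M))
            pp = 0.5 * p * (M + abs(M)) / M
            pm = 0.5 * p * (M - abs(M)) / M
        return a, phi, Mp, Mm, pp, pm

    aL, phiL, MpL, _, ppL, _ = split(*left)
    aR, phiR, _, MmR, _, pmR = split(*right)
    M_half = MpL + MmR                                      # interface Mach number
    p_half = ppL + pmR                                      # interface pressure
    conv = M_half * (aL * phiL if M_half >= 0.0 else aR * phiR)
    return conv + np.array([0.0, p_half, 0.0])

# Sanity check state: uniform subsonic flow, rho = 1, u = 100 m/s, p = 1e5 Pa.
flux = ausm_flux((1.0, 100.0, 1.0e5), (1.0, 100.0, 1.0e5))
```

    For a uniform state the split Mach and pressure terms recombine exactly (M⁺+M⁻ = M, p⁺+p⁻ = p), so the scheme reproduces the analytic flux [ρu, ρu²+p, ρuH], which is a convenient sanity check.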

  5. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material.

    PubMed

    Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena

    2013-12-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™, 3M ESPE, and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of the difference in distances between the master model and the stone models. One-way analysis of variance (ANOVA) was used for multiple group comparison, followed by the Bonferroni test for pairwise comparisons. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique.
The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and the second die) between any two groups of stone models obtained from the four impression techniques. Pairwise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two step double mix impression technique and the matrix impression system. The two step double mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should not only compensate for the cement thickness, but also for the increase or decrease in die dimensions.

  6. Performance analysis and evaluation of direct phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhao, Ping; Gao, Nan; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2018-04-01

    Three-dimensional (3D) shape measurement of specular objects plays an important role in intelligent manufacturing applications. Phase measuring deflectometry (PMD)-based methods are widely used to obtain the 3D shapes of specular surfaces because they offer the advantages of a large dynamic range, high measurement accuracy, full-field and noncontact operation, and automatic data processing. To enable measurement of specular objects with discontinuous and/or isolated surfaces, a direct PMD (DPMD) method has been developed to build a direct relationship between phase and depth. In this paper, a new virtual measurement system is presented and is used to optimize the system parameters and evaluate the system's performance in DPMD applications. Four system parameters are analyzed to obtain accurate measurement results. Experiments are performed using simulated and actual data and the results confirm the effects of these four parameters on the measurement results. Researchers can therefore select suitable system parameters for actual DPMD (including PMD) measurement systems to obtain the 3D shapes of specular objects with high accuracy.

  7. Two-dimensional flow nanometry of biological nanoparticles for accurate determination of their size and emission intensity

    NASA Astrophysics Data System (ADS)

    Block, Stephan; Fast, Björn Johansson; Lundgren, Anders; Zhdanov, Vladimir P.; Höök, Fredrik

    2016-09-01

    Biological nanoparticles (BNPs) are of high interest due to their key role in various biological processes and use as biomarkers. BNP size and composition are decisive for their functions, but simultaneous determination of both properties with high accuracy remains challenging. Optical microscopy allows precise determination of fluorescence/scattering intensity, but not the size of individual BNPs. The latter is better determined by tracking their random motion in bulk, but the limited illumination volume for tracking this motion impedes reliable intensity determination. Here, we show that attaching BNPs to a supported lipid bilayer, subjecting them to hydrodynamic flows, and tracking their motion via surface-sensitive optical imaging enables determination of their diffusion coefficients and flow-induced drifts, from which both BNP size and emission intensity can be accurately quantified. For vesicles, the accuracy of this approach is demonstrated by resolving the expected radius-squared dependence of their fluorescence intensity for radii down to 15 nm.
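    Once a diffusion coefficient D has been extracted from the tracked motion, particle size follows from the Stokes-Einstein relation r = k_B·T / (6πηD). A sketch with the viscosity of water and an illustrative D value (assumptions, not the paper's measurements):

```python
import math

def hydrodynamic_radius(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein: r = kB*T / (6*pi*eta*D).
    D in m^2/s, T in K, eta (dynamic viscosity, here water) in Pa*s."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (6.0 * math.pi * eta * D)

# A hypothetical vesicle diffusing at D = 1.0e-11 m^2/s in water at 25 C
# comes out in the tens-of-nanometres range probed by the paper.
r = hydrodynamic_radius(1.0e-11)   # radius in metres
```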

  8. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    PubMed

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.

  9. A Computational/Experimental Platform for Investigating Three-Dimensional Puzzle Solving of Comminuted Articular Fractures

    PubMed Central

    Thomas, Thaddeus P.; Anderson, Donald D.; Willis, Andrew R.; Liu, Pengcheng; Frank, Matthew C.; Marsh, J. Lawrence; Brown, Thomas D.

    2011-01-01

    Reconstructing highly comminuted articular fractures poses a difficult surgical challenge, akin to solving a complicated three-dimensional (3D) puzzle. Pre-operative planning using CT is critically important, given the desirability of less invasive surgical approaches. The goal of this work is to advance 3D puzzle solving methods toward use as a pre-operative tool for reconstructing these complex fractures. Methodology for generating typical fragmentation/dispersal patterns was developed. Five identical replicas of human distal tibia anatomy were machined from blocks of high-density polyetherurethane foam (a bone fragmentation surrogate) and were fractured using an instrumented drop tower. Pre- and post-fracture geometries were obtained using laser scans and CT. A semi-automatic virtual reconstruction computer program aligned fragment native (non-fracture) surfaces to a pre-fracture template. The tibias were precisely reconstructed with alignment accuracies ranging from 0.03-0.4 mm. This novel technology has potential to significantly enhance surgical techniques for reconstructing comminuted intra-articular fractures, as illustrated for a representative clinical case. PMID:20924863
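    Aligning a fragment's native surface to the pre-fracture template is, at its core, a rigid registration problem. A minimal sketch of the SVD-based Kabsch solution for known point correspondences (the paper's semi-automatic program is considerably more involved; points and transform below are synthetic):

```python
import numpy as np

def kabsch_align(P, Q):
    """Least-squares rigid motion (rotation R, translation t) mapping
    point set P onto Q, via SVD of the cross-covariance matrix."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical "fragment" points and a known rigid motion to recover:
rng = np.random.default_rng(1)
P = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 0.1])
R, t = kabsch_align(P, Q)
rmse = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
```

    For noiseless correspondences the recovery is exact to machine precision; real fragment surfaces require correspondence estimation (e.g. ICP-style iteration) on top of this step.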

  10. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching

    PubMed Central

    Wang, Guohua; Liu, Qiong

    2015-01-01

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians’ head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians’ size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only. PMID:26703611

  11. Far-Infrared Based Pedestrian Detection for Driver-Assistance Systems Based on Candidate Filters, Gradient-Based Feature and Multi-Frame Approval Matching.

    PubMed

    Wang, Guohua; Liu, Qiong

    2015-12-21

    Far-infrared pedestrian detection approaches for advanced driver-assistance systems based on high-dimensional features fail to simultaneously achieve robust and real-time detection. We propose a robust and real-time pedestrian detection system characterized by novel candidate filters, novel pedestrian features and multi-frame approval matching in a coarse-to-fine fashion. Firstly, we design two filters based on the pedestrians' head and the road to select the candidates after applying a pedestrian segmentation algorithm to reduce false alarms. Secondly, we propose a novel feature encapsulating both the relationship of oriented gradient distribution and the code of oriented gradient to deal with the enormous variance in pedestrians' size and appearance. Thirdly, we introduce a multi-frame approval matching approach utilizing the spatiotemporal continuity of pedestrians to increase the detection rate. Large-scale experiments indicate that the system works in real time and the accuracy has improved about 9% compared with approaches based on high-dimensional features only.
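    The oriented-gradient ingredient of the proposed feature can be illustrated with a plain gradient-orientation histogram (a simplification; the paper's feature additionally encodes the relationship and the code of the oriented gradients):

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Histogram of gradient orientations (0-180 deg), weighted by
    gradient magnitude -- the core ingredient of HOG-style features."""
    gy, gx = np.gradient(img.astype(float))        # axis0 = rows, axis1 = cols
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist             # L1-normalised

# A vertical step edge: all gradient energy should fall in the 0-degree bin.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = orientation_histogram(img)
```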

  12. Combined two-dimensional velocity and temperature measurements of natural convection using a high-speed camera and temperature-sensitive particles

    NASA Astrophysics Data System (ADS)

    Someya, Satoshi; Li, Yanrong; Ishii, Keiko; Okamoto, Koji

    2011-01-01

    This paper proposes a combined method for two-dimensional temperature and velocity measurements in liquid and gas flows using temperature-sensitive particles (TSPs), a pulsed ultraviolet laser, and a high-speed camera. TSPs respond to temperature changes in the flow and can also serve as tracers for the velocity field. The luminescence from the TSPs was recorded at 15,000 frames per second as sequential images for a lifetime-based temperature analysis. These images were also used for the particle image velocimetry calculations. The temperature field was estimated using several images, based on the lifetime method. The decay curves for various temperature conditions fit well to exponential functions, and from these the decay constants at each temperature were obtained. The proposed technique was applied to measure the temperature and velocity fields in natural convection driven by a Marangoni force and buoyancy in a rectangular tank. The accuracy of the temperature measurement of the proposed technique was ±0.35-0.40°C.
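    The lifetime method reduces to fitting an exponential decay I(t) = I₀·exp(−t/τ) to the sampled luminescence and reading off the decay constant. A sketch on synthetic samples at the paper's 15,000 fps frame spacing, with a hypothetical lifetime (the temperature then follows from the calibrated τ(T) curve):

```python
import numpy as np

def decay_constant(t, intensity):
    """Fit I(t) = I0 * exp(-t / tau) by linear regression on log(I),
    returning the lifetime tau."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

# Synthetic luminescence decay sampled at 15,000 fps with an assumed
# (hypothetical) lifetime of 0.5 ms:
tau_true = 0.5e-3
t = np.arange(20) / 15000.0
I = 100.0 * np.exp(-t / tau_true)
tau = decay_constant(t, I)
```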

  13. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ϵ), where ϵ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.
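    The flavor of such a solver can be seen in a plain (uncalibrated, unsmoothed) proximal gradient iteration for the ordinary multivariate lasso; CMR replaces the squared loss with a calibrated, smoothed L2,1 loss but keeps the same proximal-gradient skeleton. An illustrative sketch on synthetic data:

```python
import numpy as np

def prox_gradient_lasso(X, Y, lam=0.1, step=None, iters=500):
    """Proximal gradient (ISTA) for the multivariate lasso:
    min_B 0.5*||Y - X B||_F^2 + lam*||B||_1.
    This is only the skeleton CMR builds on, not CMR itself."""
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant
    B = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X.T @ (X @ B - Y)                      # gradient of smooth part
        Z = B - step * G
        B = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # prox step
    return B

# Synthetic problem with 2 active rows out of 10, 3 regression tasks:
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
B_true = np.zeros((10, 3))
B_true[:2] = 1.0
Y = X @ B_true + 0.01 * rng.normal(size=(100, 3))
B_hat = prox_gradient_lasso(X, Y, lam=0.1)
err = np.linalg.norm(B_hat - B_true)
```

    The iteration recovers the sparsity pattern here; CMR's calibration additionally rescales the penalty per task by an estimate of that task's noise level.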

  14. Direct reconstruction in CT-analogous pharmacokinetic diffuse fluorescence tomography: two-dimensional simulative and experimental validations

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Zhang, Yanqi; Zhang, Limin; Li, Jiao; Zhou, Zhongxing; Zhao, Huijuan; Gao, Feng

    2016-04-01

    We present a generalized strategy for direct reconstruction in pharmacokinetic diffuse fluorescence tomography (DFT) with CT-analogous scanning mode, which can accomplish one-step reconstruction of the indocyanine-green pharmacokinetic-rate images within in vivo small animals by incorporating the compartmental kinetic model into an adaptive extended Kalman filtering scheme and using an instantaneous sampling dataset. This scheme, compared with the established indirect and direct methods, eliminates the interim error of the DFT inversion and relaxes the expensive requirement of the instrument for obtaining highly time-resolved data sets of complete 360 deg projections. The scheme is validated by two-dimensional simulations for the two-compartment model and pilot phantom experiments for the one-compartment model, suggesting that the proposed method can estimate the compartmental concentrations and the pharmacokinetic-rates simultaneously with a fair quantitative and localization accuracy, and is well suited for cost-effective and dense-sampling instrumentation based on the highly-sensitive photon counting technique.

  15. A sparse structure learning algorithm for Gaussian Bayesian Network identification from high-dimensional data.

    PubMed

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2013-06-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG), a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.
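    The DAG requirement that the second penalty enforces can be stated as a simple graph property. A small sketch of an acyclicity test by iterative sink removal (illustrative; SBN enforces the property through its penalty rather than by explicit checking):

```python
def is_dag(adj):
    """adj[i][j] == 1 iff there is a directed edge i -> j.  Returns True
    iff the graph is acyclic, by repeatedly removing sink nodes: if at
    some point no sink remains, every surviving node has an outgoing
    edge, so a cycle must exist."""
    nodes = set(range(len(adj)))
    while nodes:
        sinks = {i for i in nodes if not any(adj[i][j] for j in nodes)}
        if not sinks:
            return False
        nodes -= sinks
    return True

chain = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # 0 -> 1 -> 2 : a DAG
loop  = [[0, 1], [1, 0]]                    # 0 <-> 1     : a cycle
```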

  16. The Position Control of the Surface Motor with the Poles Distribution of Triangular Lattice

    NASA Astrophysics Data System (ADS)

    Watada, Masaya; Katsuyama, Norikazu; Ebihara, Daiki

    Recently, high performance, accuracy, and related qualities have been demanded of machine tools and industrial robots. Generally, when drive with many degrees of freedom is required in such machines, it is realized by using two or more motors. For example, two-dimensional positioning stages such as the X-Y plotter or the X-Y stage enable two-dimensional drive by using one motor for each of the x and y directions. Using plural motors, however, has the problems that the equipment becomes large and the control system complicated. Motivated by these problems, the Surface Motor (SFM), which can drive in two directions with only one motor, has been researched. The authors have proposed an SFM designed for wide-range movement and for application to curved surfaces. In this paper, the characteristics of micro-step drive under open-loop control are shown. Moreover, the introduction of closed-loop control for highly accurate positioning is examined, and the drive characteristics under each control are compared.

  17. A Sparse Structure Learning Algorithm for Gaussian Bayesian Network Identification from High-Dimensional Data

    PubMed Central

    Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric

    2014-01-01

    Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG)—a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer’s disease (AD) and reveal findings that could lead to advancements in AD research. PMID:22665720

  18. Single-camera displacement field correlation method for centrosymmetric 3D dynamic deformation measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin

    2018-05-01

    Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.

  19. The effects of atmospheric refraction on the accuracy of laser ranging systems

    NASA Technical Reports Server (NTRS)

    Zanter, D. L.; Gardner, C. S.; Rao, N. N.

    1976-01-01

    Correction formulas derived by Saastamoinen and Marini, as well as ray traces through the refractivity profiles, all assume a spherically symmetric refractivity profile. The errors introduced by this assumption were investigated by ray tracing through three-dimensional profiles. The results of this investigation indicate that the difference between ray traces through the spherically symmetric and three-dimensional profiles is approximately three centimeters at 10 deg elevation and decreases to less than one half of a centimeter at 80 deg. If future laser ranging systems are to achieve accuracies of a few centimeters or better, the Saastamoinen and Marini formulas must be altered to account for the fact that the refractivity profile is not spherically symmetric.

  20. A multi-dimensional high-order DG-ALE method based on gas-kinetic theory with application to oscillating bodies

    NASA Astrophysics Data System (ADS)

    Ren, Xiaodong; Xu, Kun; Shyy, Wei

    2016-07-01

    This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving mesh gas kinetic DG method is proposed for both inviscid and viscous flow computations. A flux integration method across a translating and deforming cell interface has been constructed. Unlike the previous ALE-type gas-kinetic method, which used a piecewise constant mesh velocity at each cell interface within each time step, the present method accounts for the mesh velocity variation inside a cell and for mesh movement and rotation at a cell interface in the finite element framework. As a result, the current scheme is applicable to any kind of mesh movement, such as translation, rotation, and deformation. The accuracy and robustness of the scheme have been improved significantly in the oscillating airfoil calculations. All computations are conducted in a physical domain rather than in a reference domain, and the basis functions move with the grid movement. Therefore, the numerical scheme can preserve the uniform flow automatically, and satisfy the geometric conservation law (GCL). The numerical accuracy can be maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.

  1. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE PAGES

    Versino, Daniele; Bronkhorst, Curt Allan

    2018-01-31

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. The results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
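    Chebyshev collocation represents a smooth distribution by its values at Chebyshev nodes, from which an interpolating polynomial is recovered with rapidly decaying error. A sketch approximating a hypothetical Gaussian-shaped nucleation-pressure density on [-1, 1] (illustrative only, not the model's actual distribution or implementation):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_interpolant(f, degree):
    """Interpolate f at the Chebyshev points of the first kind and return
    the Chebyshev coefficients of the interpolating polynomial."""
    x = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))
    return C.chebfit(x, f(x), degree)

# Hypothetical, Gaussian-shaped density of nucleation pressures
# (normalised coordinate p on [-1, 1]):
density = lambda p: np.exp(-8.0 * p ** 2)
coeffs = cheb_interpolant(density, 20)
xs = np.linspace(-1.0, 1.0, 101)
max_err = np.max(np.abs(C.chebval(xs, coeffs) - density(xs)))
```

    The spectral accuracy of this representation is what lets a modest number of collocation points stand in for the full distribution, giving the memory and run-time savings the abstract reports.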

  2. Classification of molecular structure images by using ANN, RF, LBP, HOG, and size reduction methods for early stomach cancer detection

    NASA Astrophysics Data System (ADS)

    Aytaç Korkmaz, Sevcan; Binol, Hamidullah

    2018-03-01

    Patients still die from stomach cancer, and early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection have been developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Local Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps methods are then used to reduce the high-dimensional features to lower dimensions. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these reduced feature sets. New medical systems were developed to measure the effect of feature dimensionality by obtaining features at different dimensions with the reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
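    The LBP feature underlying the best-performing pipelines assigns each pixel an 8-bit code built from comparisons with its 3x3 neighbourhood. A minimal sketch on a toy patch (bit ordering is a convention; implementations vary):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour Local Binary Pattern code for the centre pixel of a
    3x3 patch: each neighbour >= centre contributes one bit, read
    clockwise from the top-left."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbours) if v >= c)

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
code = lbp_code(patch)   # only the three top neighbours are >= 5
```

    A histogram of these codes over an image (or image cell) is the LBP feature vector that the dimensionality-reduction methods then compress.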

  3. Application of Tandem Two-Dimensional Mass Spectrometry for Top-Down Deep Sequencing of Calmodulin.

    PubMed

    Floris, Federico; Chiron, Lionel; Lynch, Alice M; Barrow, Mark P; Delsuc, Marc-André; O'Connor, Peter B

    2018-06-04

    Two-dimensional mass spectrometry (2DMS) involves simultaneous acquisition of the fragmentation patterns of all the analytes in a mixture by correlating their precursor and fragment ions while modulating precursor ions systematically through a fragmentation zone. Tandem two-dimensional mass spectrometry (MS/2DMS) unites the ultra-high accuracy of Fourier transform ion cyclotron resonance (FT-ICR) MS/MS and the simultaneous data-independent fragmentation of 2DMS to achieve extensive inter-residue fragmentation of entire proteins. 2DMS was recently developed for top-down proteomics (TDP) and applied to the analysis of calmodulin (CaM), reporting a cleavage coverage of about 23% using infrared multiphoton dissociation (IRMPD) as the fragmentation technique. The goal of this work is to expand the utility of top-down protein analysis using MS/2DMS in order to extend the cleavage coverage further into the interior regions of the protein. In this case, using MS/2DMS, the cleavage coverage of CaM increased from about 23% to about 42%. Graphical Abstract: Two-dimensional mass spectrometry, when applied to primary fragment ions from the source, allows deep sequencing of the protein calmodulin.

  4. Extinction maps toward the Milky Way bulge: Two-dimensional and three-dimensional tests with apogee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultheis, M.; Zasowski, G.; Allende Prieto, C.

    Galactic interstellar extinction maps are powerful and necessary tools for Milky Way structure and stellar population analyses, particularly toward the heavily reddened bulge and in the midplane. However, due to the difficulty of obtaining reliable extinction measures and distances for a large number of stars that are independent of these maps, tests of their accuracy and systematics have been limited. Our goal is to assess a variety of photometric stellar extinction estimates, including both two-dimensional and three-dimensional extinction maps, using independent extinction measures based on a large spectroscopic sample of stars toward the Milky Way bulge. We employ stellar atmospheric parameters derived from high-resolution H-band Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectra, combined with theoretical stellar isochrones, to calculate line-of-sight extinction and distances for a sample of more than 2400 giants toward the Milky Way bulge. We compare these extinction values to those predicted by individual near-IR and near+mid-IR stellar colors, two-dimensional bulge extinction maps, and three-dimensional extinction maps. The long baseline, near+mid-IR stellar colors are, on average, the most accurate predictors of the APOGEE extinction estimates, and the two-dimensional and three-dimensional extinction maps derived from different stellar populations along different sightlines show varying degrees of reliability. We present the results of all of the comparisons and discuss reasons for the observed discrepancies. We also demonstrate how the particular stellar atmospheric models adopted can have a strong impact on this type of analysis, and discuss related caveats.

  5. Evaluation of 3D metrology potential using a multiple detector CDSEM

    NASA Astrophysics Data System (ADS)

    Hakii, Hidemitsu; Yonekura, Isao; Nishiyama, Yasushi; Tanaka, Keishi; Komoto, Kenji; Murakawa, Tsutomu; Hiroyama, Mitsuo; Shida, Soichi; Kuribara, Masayuki; Iwai, Toshimichi; Matsumoto, Jun; Nakamura, Takayuki

    2012-06-01

    As feature sizes of semiconductor device structures have continuously decreased, the need for metrology tools with high precision and excellent linearity over actual pattern sizes has been growing. It has also become important to measure not only two-dimensional (2D) but also three-dimensional (3D) shapes of patterns at the 22 nm node and beyond. To meet requirements for 3D metrology capabilities, various pattern metrology tools have been developed. Among these, we consider CDSEM metrology the most qualified candidate in light of its non-destructive, high-throughput measurement capabilities, which are expected to be extended to the much-awaited 3D metrology technology. On the basis of this supposition, we have developed a 3D metrology system in which sidewall angles and heights of photomask patterns can be measured with high accuracy by analyzing CDSEM images generated by multi-channel detectors. In this paper, we discuss our attempts to measure 3D shapes of defect patterns on a photomask by using Advantest's "Multi Vision Metrology SEM" E3630 (MVM-SEM E3630).

  6. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands carrying most of the object information. However, because of their high dimensionality and high data volume, it is hard to obtain satisfactory classification performance. To reduce the dimensionality of HSI data in preparation for high classification accuracy, we propose to combine a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes a radial basis function kernel (RBF-K) with a sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of AIS, composed of clonal selection and elite antibody mutation, including an evaluation process with an optional index factor (OIF). Classification experiments on a San Diego Naval Base scene acquired by AVIRIS show that the method efficiently removes band redundancy while outperforming the traditional SVM classifier.
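
    The hybrid kernel itself can be sketched as a weighted mixture of the two base kernels. This is an illustrative form only: the mixing weight and kernel parameters below are placeholders, whereas the paper tunes them (together with the selected bands) via the evolutionary AIS search.

```python
import math

def rbf_kernel(x, y, gamma):
    # Gaussian radial basis function kernel
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sigmoid_kernel(x, y, gamma, coef0):
    # hyperbolic-tangent ("sigmoid") kernel
    return math.tanh(gamma * sum(a * b for a, b in zip(x, y)) + coef0)

def hybrid_kernel(x, y, gamma=0.5, coef0=1.0, w=0.7):
    # convex combination of RBF and sigmoid kernels; w, gamma, coef0 stand
    # in for the evolutionary-search-tuned values, which are not given here
    return w * rbf_kernel(x, y, gamma) + (1 - w) * sigmoid_kernel(x, y, gamma, coef0)

# two toy "pixel spectra" with three bands each
p, q = [0.2, 0.5, 0.1], [0.3, 0.4, 0.2]
print(hybrid_kernel(p, q) == hybrid_kernel(q, p))  # symmetric, → True
```

    The resulting kernel matrix can be passed to any SVM implementation that accepts precomputed kernels.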

  7. Virtual targeting in three-dimensional space with sound and light interference

    NASA Astrophysics Data System (ADS)

    Chua, Florence B.; DeMarco, Robert M.; Bergen, Michael T.; Short, Kenneth R.; Servatius, Richard J.

    2006-05-01

    Law enforcement and the military are critically concerned with the targeting and firing accuracy of opponents. Stimuli which impede opponent targeting and firing accuracy can be incorporated into defense systems. An automated virtual firing range was developed to assess human targeting accuracy under conditions of sound and light interference, while avoiding dangers associated with live fire. This system has the ability to quantify sound and light interference effects on targeting and firing accuracy in three dimensions. This was achieved by development of a hardware and software system that presents the subject with a sound or light target, preceded by sound or light interference. Sony Xplod™ 4-way speakers present sound interference and sound targets. The Martin® MiniMAC™ Profile operates as a source of light interference, while a red laser light serves as a target. A tracking system was created to monitor toy gun movement and firing in three-dimensional space. Data are collected via the Ascension® Flock of Birds™ tracking system and a custom National Instruments® LabVIEW™ 7.0 program to monitor gun movement and firing. A test protocol examined system parameters. Results confirm that the system enables tracking of virtual shots from a fired simulation gun to determine shot accuracy and location in three dimensions.

  8. Design and Analysis of the Measurement Characteristics of a Bidirectional-Decoupling Over-Constrained Six-Dimensional Parallel-Mechanism Force Sensor

    PubMed Central

    Zhao, Tieshi; Zhao, Yanzhi; Hu, Qiangqiang; Ding, Shixing

    2017-01-01

    The measurement of large forces and the presence of errors due to dimensional coupling are significant challenges for multi-dimensional force sensors. To address these challenges, this paper proposes an over-constrained six-dimensional force sensor based on a parallel mechanism of steel ball structures as a measurement module. The steel ball structure is subject to rolling friction instead of sliding friction, thus reducing the influence of friction. However, because the structure can only withstand unidirectional pressure, the application of steel balls in a six-dimensional force sensor is difficult. Accordingly, a new sensor measurement structure was designed in this study. The static equilibrium and displacement compatibility equations of the sensor prototype's over-constrained structure were established to obtain the transformation function, from which the forces in the measurement branches of the proposed sensor were then analytically derived. The sensor's measurement characteristics were then analysed through numerical examples. Finally, these measurement characteristics were confirmed through calibration and application experiments. The measurement accuracy of the proposed sensor was determined to be 1.28%, with a maximum coupling error of 1.98%, indicating that the proposed sensor successfully overcomes the issues related to steel ball structures and provides sufficient accuracy. PMID:28867812

  9. Imaging of tumor hypermetabolism with near-infrared fluorescence contrast agents

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Zheng, Gang; Zhang, Zhihong; Blessington, Dana; Intes, Xavier; Achilefu, Samuel I.; Chance, Britton

    2004-08-01

    We have developed a high sensitivity near-infrared (NIR) optical imaging system for non-invasive cancer detection through molecularly labeled fluorescent contrast agents. NIR imaging can probe tissue deeply and thus possesses the potential for non-invasive detection of breast or lymph node cancer. Recent developments in molecular beacons can selectively label various pre-cancer/cancer signatures and provide high tumor-to-background contrast. To increase the sensitivity in detecting fluorescent photons and the accuracy of localization, a phase cancellation (in- and anti-phase) device is employed. This frequency-domain system utilizes the interference-like pattern of the diffuse photon density wave to achieve high detection sensitivity and localization accuracy for a fluorescent heterogeneity embedded inside scattering media. The opto-electronic system consists of laser sources, fiber optics, an interference filter to select the fluorescent photons, and a high sensitivity photon detector (photomultiplier tube). The source-detector pair scans the tissue surface in multiple directions, and a two-dimensional localization image can be obtained using goniometric reconstruction. In vivo measurements with a tumor-bearing mouse model using the novel Cypate-mono-2-deoxy-glucose (Cypate-2-D-Glucosamide) fluorescent contrast agent, which targets the enhanced tumor glycolysis, demonstrated the feasibility of detecting a 2 cm deep subsurface tumor in a tissue-like medium, with a localization accuracy within 2-3 mm. This instrument has the potential for tumor diagnosis and imaging, and the accuracy of the localization suggests that this system could help to guide clinical fine-needle biopsy. This portable device would be complementary to X-ray mammography and provide add-on information for early diagnosis and localization of early breast tumors.

  10. High Resolution Temperature Measurement of Liquid Stainless Steel Using Hyperspectral Imaging

    PubMed Central

    Devesse, Wim; De Baere, Dieter; Guillaume, Patrick

    2017-01-01

    A contactless temperature measurement system is presented based on a hyperspectral line camera that captures the spectra in the visible and near infrared (VNIR) region of a large set of closely spaced points. The measured spectra are used in a nonlinear least squares optimization routine to calculate a one-dimensional temperature profile with high spatial resolution. Measurements of a liquid melt pool of AISI 316L stainless steel show that the system is able to determine the absolute temperatures with an accuracy of 10%. The measurements are made with a spatial resolution of 12 µm/pixel, justifying its use in applications where high temperature measurements with high spatial detail are desired, such as in the laser material processing and additive manufacturing fields. PMID:28067764
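
    The underlying fit can be illustrated by recovering a temperature from a sampled blackbody spectrum via least squares. This is a toy reconstruction, not the authors' routine: it assumes unit emissivity and replaces their nonlinear least squares optimizer with a brute-force one-dimensional search over temperature.

```python
import math

H = 6.626e-34   # Planck constant (J*s)
C = 3.0e8       # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(lam, T):
    # blackbody spectral radiance at wavelength lam (m) and temperature T (K)
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)

def fit_temperature(lams, spectrum, t_lo=1000.0, t_hi=3500.0, steps=2500):
    # least-squares fit of T by exhaustive 1-D grid search (1 K resolution)
    best_T, best_err = t_lo, float("inf")
    for i in range(steps + 1):
        T = t_lo + (t_hi - t_lo) * i / steps
        err = sum((planck(l, T) - s) ** 2 for l, s in zip(lams, spectrum))
        if err < best_err:
            best_T, best_err = T, err
    return best_T

# synthetic test: spectrum sampled from a 1700 K blackbody in the VNIR band
lams = [(400 + 10 * k) * 1e-9 for k in range(61)]  # 400-1000 nm
spec = [planck(l, 1700.0) for l in lams]
print(fit_temperature(lams, spec))  # → 1700.0
```

    A real melt-pool measurement would additionally fit (or calibrate out) the wavelength-dependent emissivity, which is why the paper reports ~10% absolute accuracy rather than the near-perfect recovery seen on synthetic data.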

  11. Children's Age-Related Speed--Accuracy Strategies in Intercepting Moving Targets in Two Dimensions

    ERIC Educational Resources Information Center

    Rothenberg-Cunningham, Alek; Newell, Karl M.

    2013-01-01

    Purpose: This study investigated the age-related speed--accuracy strategies of children, adolescents, and adults in performing a rapid striking task that allowed the self-selection of the interception position in a virtual, two-dimensional environment. Method: The moving target had curvilinear trajectories that were determined by combinations of…

  12. [Research progress of three-dimensional digital model for repair and reconstruction of knee joint].

    PubMed

    Tong, Lu; Li, Yanlin; Hu, Meng

    2013-01-01

    To review recent advances in the application and research of three-dimensional digital knee models, recent original articles on the topic were extensively reviewed and analyzed. The digital three-dimensional knee model can simulate the complex anatomical structure of the knee very well. On this basis, new software and techniques have been developed, and good clinical results have been achieved. With the continued development of computer techniques and software, the knee repair and reconstruction procedure has been improved; the operation will become simpler and its accuracy will be further improved.

  13. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.
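
    The regularization referred to here replaces the non-equilibrium part of the populations with its projection onto second-order Hermite space before collision. A minimal single-node D2Q9 sketch of that step follows; it shows the standard textbook form of the regularized scheme, and the paper's implementation details may differ.

```python
import math

# D2Q9 lattice velocities and quadrature weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9] * 4 + [1/36] * 4
CS2 = 1/3  # squared lattice speed of sound

def feq(rho, u):
    # second-order truncated Maxwellian equilibrium populations
    ux, uy = u
    uu = ux * ux + uy * uy
    out = []
    for (cx, cy), w in zip(C, W):
        cu = cx * ux + cy * uy
        out.append(w * rho * (1 + cu / CS2 + cu * cu / (2 * CS2 * CS2) - uu / (2 * CS2)))
    return out

def regularize(f):
    # project the non-equilibrium part of f onto second-order Hermite space
    rho = sum(f)
    ux = sum(fi * cx for fi, (cx, cy) in zip(f, C)) / rho
    uy = sum(fi * cy for fi, (cx, cy) in zip(f, C)) / rho
    fe = feq(rho, (ux, uy))
    # non-equilibrium momentum-flux tensor Pi^neq
    pxx = sum((fi - g) * cx * cx for fi, g, (cx, cy) in zip(f, fe, C))
    pyy = sum((fi - g) * cy * cy for fi, g, (cx, cy) in zip(f, fe, C))
    pxy = sum((fi - g) * cx * cy for fi, g, (cx, cy) in zip(f, fe, C))
    out = []
    for (cx, cy), w, g in zip(C, W, fe):
        q = (cx * cx - CS2) * pxx + (cy * cy - CS2) * pyy + 2 * cx * cy * pxy
        out.append(g + w / (2 * CS2 * CS2) * q)  # f_eq + regularized f_neq
    return out
```

    By construction the projection leaves density and momentum unchanged and filters out the higher-order "ghost" modes, which is the source of the stability gain reported in the abstract.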

  14. [Highly quality-controlled radiation therapy].

    PubMed

    Shirato, Hiroki

    2005-04-01

    Advanced radiation therapy for intracranial disease has focused on set-up accuracy for the past 15 years. However, quality control in the prescribed dose is actually as important as the tumor set-up in radiation therapy. Because of the complexity of the three-dimensional radiation treatment planning system in recent years, the highly quality-controlled prescription of the dose has now been reappraised as the mainstream to improve the treatment outcome of radiation therapy for intracranial disease. The Japanese Committee for Quality Control of Radiation Therapy has developed fundamental requirements such as a QC committee in each hospital, a medical physicist, dosimetrists (QC members), and an external audit.

  15. Development of a three-dimensional high-order strand-grids approach

    NASA Astrophysics Data System (ADS)

    Tong, Oisin

    Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more wall time than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. 
This approach is combined with a curvature based strand shortening strategy in order to qualitatively improve strand grid mesh quality.

  16. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    PubMed

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtracted angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures. Twenty-three patients with suspected intracranial arterial lesions were enrolled. The contrast medium-enhanced 3D DSA of target vessels were acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients, and registered with the MRA. The MRA was overlaid afterwards with 2D live fluoroscopy to guide endovascular procedures. The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided a real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced. Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, and hence lowering procedure risks and increasing treatment safety.

  18. Three-dimensional imaging of the craniofacial complex.

    PubMed

    Nguyen, Can X.; Nissanov, Jonathan; Öztürk, Cengizhan; Nuveen, Michiel J.; Tuncay, Orhan C.

    2000-02-01

    Orthodontic treatment requires the rearrangement of craniofacial complex elements in three planes of space, but oddly the diagnosis is done with two-dimensional images. Here we report on a three-dimensional (3D) imaging system that employs the stereoimaging method of structured light to capture the facial image. The images can be subsequently integrated with 3D cephalometric tracings derived from lateral and PA films (www.clinorthodres.com/cor-c-070). The accuracy of the reconstruction obtained with this inexpensive system is approximately 400 µm.

  19. Lightning Mapping With an Array of Fast Antennas

    NASA Astrophysics Data System (ADS)

    Wu, Ting; Wang, Daohong; Takagi, Nobuyuki

    2018-04-01

    Fast Antenna Lightning Mapping Array (FALMA), a low-frequency lightning mapping system comprising an array of fast antennas, was developed and established in Gifu, Japan, during the summer of 2017. Location results of two hybrid flashes and a cloud-to-ground flash comprising 11 return strokes (RSs) are described in detail in this paper. Results show that concurrent branches of stepped leaders can be readily resolved, and K changes and dart leaders with speeds up to 2.4 × 10⁷ m/s are also well imaged. These results demonstrate that FALMA can reconstruct three-dimensional structures of lightning flashes in great detail. Location accuracy of FALMA is estimated by comparing the located striking points of successive RSs in cloud-to-ground flashes. Results show that distances between successive RSs are mainly below 25 m, indicating exceptionally high location accuracy of FALMA.
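
    The accuracy estimate described above amounts to computing distances between the located striking points of consecutive return strokes: since repeated strokes of one ground flash tend to hit (nearly) the same point, the spread of these distances bounds the relative location error. A minimal sketch with hypothetical planar (x, y) coordinates in metres, not actual FALMA data:

```python
import math

def successive_distances(points):
    # distances (m) between consecutive return-stroke striking points;
    # small values indicate high relative location accuracy of the array
    return [math.hypot(p[0] - q[0], p[1] - q[1])
            for p, q in zip(points, points[1:])]

# hypothetical striking points of four successive return strokes (m)
strokes = [(0.0, 0.0), (12.0, 5.0), (8.0, -3.0), (15.0, 4.0)]
print([round(d, 1) for d in successive_distances(strokes)])  # → [13.0, 8.9, 9.9]
```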

  20. Inference of boundaries in causal sets

    NASA Astrophysics Data System (ADS)

    Cunningham, William J.

    2018-05-01

    We investigate the extrinsic geometry of causal sets in (1+1)-dimensional Minkowski spacetime. The properties of boundaries in an embedding space can be used not only to measure observables, but also to supplement the discrete action in the partition function via discretized Gibbons–Hawking–York boundary terms. We define several ways to represent a causal set using overlapping subsets, which then allows us to distinguish between null and non-null bounding hypersurfaces in an embedding space. We discuss algorithms to differentiate between different types of regions, consider when these distinctions are possible, and then apply the algorithms to several spacetime regions. Numerical results indicate the volumes of timelike boundaries can be measured to within 0.5% accuracy for flat boundaries and within 10% accuracy for highly curved boundaries for medium-sized causal sets with N = 2¹⁴ spacetime elements.
