Sample records for dimensionality reduction techniques

  1. Wing download reduction using vortex trapping plates

    NASA Technical Reports Server (NTRS)

    Light, Jeffrey S.; Stremel, Paul M.; Bilanin, Alan J.

    1994-01-01

    A download reduction technique using spanwise plates on the upper and lower wing surfaces has been examined. Experimental and analytical techniques were used to determine the download reduction obtained using this technique. Simple two-dimensional wind tunnel testing confirmed the validity of the technique for reducing two-dimensional airfoil drag. Computations using a two-dimensional Navier-Stokes analysis provided insight into the mechanism causing the drag reduction. Finally, the download reduction technique was tested using a rotor and wing to determine the benefits for a semispan configuration representative of a tilt rotor aircraft.

  2. Exploring the CAESAR database using dimensionality reduction techniques

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Raymer, Michael L.

    2012-06-01

    The Civilian American and European Surface Anthropometry Resource (CAESAR) database, containing over 40 anthropometric measurements on over 4000 humans, has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These include linear projections such as classical Principal Components Analysis (PCA) and non-linear (manifold learning) techniques such as Diffusion Maps and Isomap. This paper briefly describes all three techniques and compares three different classifiers, Naïve Bayes, AdaBoost, and Support Vector Machines (SVM), for gender classification in conjunction with each of the three dimensionality reduction approaches.
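To illustrate the kind of pipeline this record describes (reduce, then classify), here is a minimal numpy sketch on synthetic two-class data standing in for the CAESAR measurements. The nearest-centroid classifier and all variable names are illustrative stand-ins, not the paper's AdaBoost/SVM setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes in 40 dimensions, standing in for CAESAR measurements.
n, d = 200, 40
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(0.5, 1.0, (n, d))])
y = np.repeat([0, 1], n)

# Classical PCA: center, then project onto the top right-singular vectors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                      # 40-D -> 2-D embedding

def nearest_centroid_accuracy(F, y):
    """Fit-on-all nearest-centroid accuracy (illustration only, no test split)."""
    c0, c1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
    pred = np.linalg.norm(F - c1, axis=1) < np.linalg.norm(F - c0, axis=1)
    return (pred.astype(int) == y).mean()

acc_full = nearest_centroid_accuracy(Xc, y)    # all 40 dimensions
acc_pca = nearest_centroid_accuracy(Z, y)      # 2 principal components
```

Because the class-mean offset dominates the variance, the two-component embedding retains most of the discriminative information here.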

  3. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information about the other. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces the data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
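The complementary-eigenvector property at the heart of FKT fits in a few lines of numpy. The two synthetic classes below are stand-ins for desired-class and clutter spectra; names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for "desired class" and "background clutter" spectra.
d = 10
A = rng.normal(size=(500, d)) * np.linspace(2.0, 0.5, d)   # desired class
B = rng.normal(size=(500, d)) * np.linspace(0.5, 2.0, d)   # clutter
S1, S2 = np.cov(A, rowvar=False), np.cov(B, rowvar=False)

# 1) Whiten the summed covariance S1 + S2.
w, U = np.linalg.eigh(S1 + S2)
P = U @ np.diag(1.0 / np.sqrt(w))

# 2) In the whitened space the two class covariances share eigenvectors,
#    and their eigenvalues sum to one (the "complementary" property).
lam, V = np.linalg.eigh(P.T @ S1 @ P)
lam2 = np.sort(np.diag(V.T @ (P.T @ S2 @ P) @ V))
complementary = np.allclose(np.sort(lam) + lam2[::-1], 1.0)

# 3) Keep the eigenvectors with the largest lam: they best represent the
#    desired class while carrying the least clutter information.
k = 3
W = P @ V[:, np.argsort(lam)[::-1][:k]]    # projects a d-band pixel to k features
```

Step 2 is exactly the "complementary eigenvectors" the abstract refers to: a large eigenvalue for the desired class forces a small one for the clutter.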

  4. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics

    NASA Astrophysics Data System (ADS)

    Wehmeyer, Christoph; Noé, Frank

    2018-06-01

    Inspired by the success of deep learning techniques in the physical and chemical sciences, we apply a modification of an autoencoder-type deep neural network to the task of dimension reduction of molecular dynamics data. We show that our time-lagged autoencoder reliably finds low-dimensional embeddings for high-dimensional feature spaces which capture the slow dynamics of the underlying stochastic processes, beyond the capabilities of linear dimension reduction techniques.
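In the linear case the time-lagged autoencoder objective, minimizing ||x(t+tau) - D(E(x(t)))||^2, has a closed-form reduced-rank regression solution, which is enough to sketch the idea without a deep network. The toy trajectory below (one slow mode mixed into fast noisy modes) is an illustrative assumption, not molecular dynamics data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy trajectory: one slow AR(1) mode mixed with fast noise modes,
# a stand-in for molecular dynamics feature trajectories.
T, d, tau = 5000, 5, 10
coeffs = np.array([0.99, 0.1, 0.1, 0.1, 0.1])    # the first mode relaxes slowly
z = np.zeros((T, d))
for t in range(1, T):
    z[t] = coeffs * z[t - 1] + rng.normal(size=d)
M = rng.normal(size=(d, d))
x = z @ M.T                                      # observed (mixed) features

# Linear time-lagged autoencoder: minimize ||x(t+tau) - x(t) W||^2 with rank-1 W,
# solved by reduced-rank regression (SVD of the ordinary least-squares fit).
X, Y = x[:-tau], x[tau:]
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)
W_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
_, _, Vt = np.linalg.svd(X @ W_ols, full_matrices=False)
enc = W_ols @ Vt[0]                              # encoder direction: 5-D -> 1-D
latent = X @ enc

# The 1-D latent coordinate should track the slow mode of the hidden process.
corr = np.corrcoef(latent, z[:-tau, 0])[0, 1]
```

Predicting the time-lagged frame forces the bottleneck to keep the slowly decorrelating direction, which is the point of the paper's deep version.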

  5. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    PubMed

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.

  6. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. A Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming into a new space in which the feature classes have complementary eigenvectors. Analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target-oriented band reduction, since each basis function best represents the target class while carrying the least information about the background class. By selecting the few eigenvectors most relevant to the target class, the dimension of the hyperspectral data can be reduced, which presents significant advantages for near-real-time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. We therefore propose constructing a kernel FKT (KFKT) for target-oriented band reduction. The performance of the proposed KFKT-based target-oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets, and the results are reported.

  7. The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason L. Wright; Milos Manic

    2010-05-01

    This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
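Of the three techniques compared, sorted covariance is the simplest to sketch. One plausible reading, shown below on synthetic data, ranks each feature by the magnitude of its covariance with the class label and keeps the top few; the data and names are illustrative, not object-code features.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: 30 features, only the first 3 carry class information.
n, d = 400, 30
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += 1.5 * y[:, None]

# "Sorted covariance": rank features by |cov(feature, label)|, keep the top k.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
cov = np.abs(Xc.T @ yc) / (n - 1)
order = np.argsort(cov)[::-1]
top = order[:3]                      # should recover the informative features
```

Unlike PCA, this ranking is supervised but keeps the original feature axes, so selected dimensions remain interpretable.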

  8. Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds

    NASA Astrophysics Data System (ADS)

    Abdo, Mohammad Gamal Mohammad Mostafa

    This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to make the use of high-fidelity tools feasible for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques whose reduction credibility can be quantified, with the reduction errors upper-bounded over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases where conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models, such as those associated with modeling nuclear reactor behavior. Second, the discrepancies between the original model and the ROM predictions over the full range of application conditions are upper-bounded, in a probabilistic sense, with high probability. ROM techniques may be classified into two broad categories, surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the surrogate's prediction accuracy. This approach, however, does not generally guarantee that the surrogate's predictions at points not included in the training process will be bounded by the error estimated from the fitting residual.
Dimensionality reduction techniques, however, employ a different philosophy: randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower-dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, applying the ROM involves constraining the variables to the active subspaces. In doing so, the contribution of the variables' discarded components can be estimated using a fundamental theorem from random matrix theory with its roots in Dixon's theory, developed in 1983. That theory was initially presented for linear matrix operators; this thesis extends its results to allow the reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined from a given set of snapshots generated either with the full high-fidelity model or with other, lower-fidelity models, can be assessed. This provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams, such as the cross-section data and the neutron flux. The developed methods are applied to representative assembly-level calculations, where the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.)

  9. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. But such an assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of the matrices in their objective functions is not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semisupervised scenarios.

  10. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing the results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and is supported on MS Windows, Linux and Mac OS. Contact: geyang@andrew.cmu.edu.

  11. Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.

    2011-04-01

    Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
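The unbiased diffusion map this record adapts can be sketched directly: build a Gaussian kernel over snapshots, row-normalize it into a Markov matrix, and take the leading non-trivial eigenvectors as collective coordinates. The noisy-circle data below is an illustrative stand-in for simulation snapshots.

```python
import numpy as np

rng = np.random.default_rng(4)

# Points on a noisy circle: a 1-D manifold embedded in 2-D, standing in
# for configurations sampled from a simulation trajectory.
n = 300
theta = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(n, 2))

# Diffusion map: Gaussian kernel, row-normalized into a Markov matrix P,
# whose leading non-trivial eigenvectors give the embedding.
eps = 0.1
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-D2 / eps)
P = K / K.sum(axis=1, keepdims=True)

vals, vecs = np.linalg.eig(P)
idx = np.argsort(-vals.real)
psi = vecs.real[:, idx]                          # psi[:, 0] is the constant mode
embedding = psi[:, 1:3] * vals.real[idx][1:3]    # two diffusion coordinates
```

For the circle, the two leading non-trivial coordinates recover the angle around the manifold; the paper's contribution is reweighting this construction so biased (umbrella-sampled) data can be used.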

  12. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space by decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework for the methodology, validate its accuracy, and demonstrate its efficiency with numerical experiments.

  13. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.

  14. Optimal dimensionality reduction of complex dynamics: the chess game as diffusion on a free-energy landscape.

    PubMed

    Krivov, Sergei V

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game, the archetype of complex dynamics. A variable that provides a complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
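The "random walk on a free-energy landscape" picture can be made concrete with a minimal example: simulate overdamped dynamics in a double-well potential and recover F(x) = -(1/beta) ln p(x) from the visit histogram. This illustrates the general construction only, not the paper's chess variable.

```python
import numpy as np

rng = np.random.default_rng(5)

# Overdamped dynamics in the double-well potential U(x) = (x^2 - 1)^2.
def force(x):
    return -4.0 * x * (x * x - 1.0)

dt, beta, T = 1e-3, 2.0, 200_000
x = np.empty(T)
x[0] = -1.0
noise = rng.normal(size=T - 1) * np.sqrt(2.0 * dt / beta)
for t in range(T - 1):
    x[t + 1] = x[t] + force(x[t]) * dt + noise[t]

# Free energy along the coordinate: F = -(1/beta) ln p(x), shifted to min 0.
hist, edges = np.histogram(x, bins=40, range=(-1.8, 1.8), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
F = -np.log(hist[mask]) / beta
F -= F.min()
# The barrier near x = 0 should sit roughly one unit above the wells at x = +/-1.
```

A good reduced variable is one for which this histogram-based landscape, plus a diffusion model on it, reproduces the observed transition behavior.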

  15. Optimal dimensionality reduction of complex dynamics: The chess game as diffusion on a free-energy landscape

    NASA Astrophysics Data System (ADS)

    Krivov, Sergei V.

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  16. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, so it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels that are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. Solutions for the proposed framework can be found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark text, image, and sound datasets in supervised, unsupervised, and semi-supervised settings.
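The trace ratio problem named in the title, maximizing tr(V'AV)/tr(V'BV) over orthonormal V, can be solved by a binary search: f(lmb) = max_V tr(V'(A - lmb*B)V) is decreasing in lmb, and its root is the optimal ratio. A minimal numpy sketch with random stand-in scatter matrices (not the paper's kernel-induced ones):

```python
import numpy as np

rng = np.random.default_rng(6)

# Random positive-definite matrices standing in for the between-class (A)
# and within-class (B) scatter matrices of a trace ratio problem.
d, k = 8, 2
Ra, Rb = rng.normal(size=(d, d)), rng.normal(size=(d, d))
A = Ra @ Ra.T
B = Rb @ Rb.T + np.eye(d)

def topk_eigsum(M, k):
    return np.sort(np.linalg.eigvalsh(M))[-k:].sum()

# Binary search for the root of f(lmb) = sum of top-k eigenvalues of A - lmb*B.
lo = 0.0
hi = topk_eigsum(A, k) / np.sort(np.linalg.eigvalsh(B))[:k].sum()  # upper bound
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if topk_eigsum(A - mid * B, k) > 0.0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

# The optimal V consists of the top-k eigenvectors of A - lam*B;
# the achieved trace ratio equals lam.
_, V = np.linalg.eigh(A - lam * B)
Vk = V[:, -k:]
ratio = np.trace(Vk.T @ A @ Vk) / np.trace(Vk.T @ B @ Vk)
```

At the root, the top-k eigenvalue sum of A - lam*B vanishes, so tr(Vk'AVk) = lam * tr(Vk'BVk), i.e. the achieved ratio equals lam.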

  17. Effects of band selection on endmember extraction for forestry applications

    NASA Astrophysics Data System (ADS)

    Karathanassi, Vassilia; Andreou, Charoula; Andronis, Vassilis; Kolokoussis, Polychronis

    2014-10-01

    In spectral unmixing theory, data reduction techniques play an important role, as hyperspectral imagery contains an immense amount of data, posing challenging problems in data storage, computational efficiency, and the so-called "curse of dimensionality". Feature extraction and feature selection are the two main approaches to dimensionality reduction. Feature extraction techniques reduce the dimensionality of hyperspectral data by applying transforms to the data. Feature selection techniques retain the physical meaning of the data by selecting, from the input hyperspectral dataset, a set of bands that contain the information needed for spectral unmixing. Although feature selection techniques are well known for their dimensionality reduction potential, they are rarely used in the unmixing process. The majority of existing state-of-the-art dimensionality reduction methods set criteria on the spectral information derived from the whole wavelength range in order to define the optimum spectral subspace. These criteria are associated not with any particular application but with data statistics, such as correlation and entropy values. However, each application is associated with specific land cover materials, whose spectral characteristics present variations at specific wavelengths. In forestry, for example, many applications focus on tree leaves, in which specific pigments such as chlorophyll, xanthophyll, etc. determine the wavelengths where tree species, diseases, etc. can be detected. For such applications, when the unmixing process is applied, the tree species, diseases, etc. are considered the endmembers of interest. This paper focuses on investigating the effects of band selection on endmember extraction by exploiting the information of the vegetation absorbance spectral zones.
More precisely, it is explored whether endmember extraction can be optimized when specific sets of initial bands related to leaf spectral characteristics are selected. Experiments comprise application of well-known signal subspace estimation and endmember extraction methods on a hyperspectral imagery that presents a forest area. Evaluation of the extracted endmembers showed that more forest species can be extracted as endmembers using selected bands.

  18. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.

    PubMed

    Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P

    2015-01-01

    Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
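Scenario (2), recovering physical structure from a connectivity matrix alone, is essentially a graph-Laplacian embedding. The sketch below hides 1-D positions, keeps only the "who is near whom" adjacency, and recovers the ordering from the Fiedler vector; the data and the neighborhood radius are illustrative assumptions, not the paper's neural-network model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hidden 1-D positions; we observe only which samples are near each other.
n = 120
hidden = rng.uniform(0.0, 1.0, n)
A = (np.abs(hidden[:, None] - hidden[None, :]) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)

# Laplacian eigenmap: the Fiedler vector (eigenvector of the second-smallest
# eigenvalue of L = D - A) orders points along the underlying 1-D structure.
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

# Compare recovered ordering with the hidden one (direction is arbitrary).
def rank(v):
    return np.argsort(np.argsort(v))

rho = np.corrcoef(rank(fiedler), rank(hidden))[0, 1]
```

Up to an overall flip, the Fiedler-vector ranks reproduce the hidden spatial ordering, which is the sense in which "puzzle pieces" can be laid back out from local adjacency alone.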

  19. Complex Osteotomies of Tibial Plateau Malunions Using Computer-Assisted Planning and Patient-Specific Surgical Guides.

    PubMed

    Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P

    2015-08-01

    The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides to improve the reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex intraarticular corrective osteotomies can be accurately performed with this technique. For selected patients with complex malunions around the tibial plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.

  20. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization

    PubMed Central

    Glaser, Joshua I.; Zamft, Bradley M.; Church, George M.; Kording, Konrad P.

    2015-01-01

    Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, “puzzle imaging,” that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples. PMID:26192446

  1. Laser speckle reduction due to spatial and angular diversity introduced by fast scanning micromirror.

    PubMed

    Akram, M Nadeem; Tong, Zhaomin; Ouyang, Guangmin; Chen, Xuyuan; Kartashov, Vladimir

    2010-06-10

    We utilize spatial and angular diversity to achieve speckle reduction in laser illumination. Both free-space and imaging geometry configurations are considered. A fast two-dimensional scanning micromirror is employed to steer the laser beam. A simple experimental setup is built to demonstrate the application of our technique in a two-dimensional laser picture projection. Experimental results show that the speckle contrast factor can be reduced down to 5% within the integration time of the detector.

  2. Nonlinear Analysis and Modeling of Tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1996-01-01

    The objective of the study was to develop efficient modeling techniques and computational strategies for: (1) predicting the nonlinear response of tires subjected to inflation pressure, mechanical and thermal loads; (2) determining the footprint region, and analyzing the tire-pavement contact problem, including the effect of friction; and (3) determining the sensitivity of the tire response (displacements, stresses, strain energy, contact pressures and contact area) to variations in the different material and geometric parameters. Two computational strategies were developed. In the first strategy the tire was modeled using either two-dimensional shear-flexible mixed shell finite elements or a quasi-three-dimensional solid model. The contact conditions were incorporated into the formulation by using a perturbed Lagrangian approach. A number of model reduction techniques were applied to substantially reduce the number of degrees of freedom used in describing the response outside the contact region. The second strategy exploited the axial symmetry of the undeformed tire, and used cylindrical coordinates in the development of three-dimensional elements for modeling each of the different parts of the tire cross section. Model reduction techniques were also used with this strategy.

  3. ODF Maxima Extraction in Spherical Harmonic Representation via Analytical Search Space Reduction

    PubMed Central

    Aganj, Iman; Lenglet, Christophe; Sapiro, Guillermo

    2015-01-01

    By revealing complex fiber structure through the orientation distribution function (ODF), q-ball imaging has recently become a popular reconstruction technique in diffusion-weighted MRI. In this paper, we propose an analytical dimension reduction approach to ODF maxima extraction. We show that by expressing the ODF, or any antipodally symmetric spherical function, in the common fourth order real and symmetric spherical harmonic basis, the maxima of the two-dimensional ODF lie on an analytically derived one-dimensional space, from which we can detect the ODF maxima. This method reduces the computational complexity of the maxima detection, without compromising the accuracy. We demonstrate the performance of our technique on both artificial and human brain data. PMID:20879302

  4. Application of diffusion maps to identify human factors of self-reported anomalies in aviation.

    PubMed

    Andrzejczak, Chris; Karwowski, Waldemar; Mikusinski, Piotr

    2012-01-01

    A study was conducted to investigate which factors lead pilots to submit voluntary anomaly reports regarding their flight performance. Diffusion Maps (DM) were selected as the method of choice for performing dimensionality reduction on the text records in this study. Diffusion Maps have seen successful use in other domains such as image classification and pattern recognition. High-dimensionality data in the form of narrative text reports from the NASA Aviation Safety Reporting System (ASRS) were clustered and categorized by way of dimensionality reduction. Supervised analyses were performed to create a baseline document clustering system. Dimensionality reduction techniques identified concepts or keywords within records, and allowed the creation of a framework for an unsupervised document classification system. Results from the unsupervised clustering algorithm performed similarly to the supervised methods outlined in the study. The dimensionality reduction was performed on 100 of the most commonly occurring words within 126,000 text records describing commercial aviation incidents. This study demonstrates that unsupervised machine clustering and organization of incident reports is possible based on unbiased inputs. Findings from this study reinforced traditional views on what factors contribute to civil aviation anomalies; however, new associations between previously unrelated factors and conditions were also found.

  5. Wall effects in wind tunnels

    NASA Technical Reports Server (NTRS)

    Chevallier, J. P.; Vaucheret, X.

    1986-01-01

    A synthesis of current trends in the reduction and computation of wall effects is presented. Some of the points discussed include: (1) for the two-dimensional, transonic tests, various techniques for controlling the boundary conditions are used, with adaptive walls offering high precision in determining reference conditions and residual corrections; a reduction in the boundary layer effects of the lateral walls is obtained at T2; (2) for the three-dimensional tests, methods for the reduction of wall effects are still seldom applied, due to a lesser need and to their complexity; (3) the supports holding the model or the probes have to be taken into account in the estimation of perturbing effects.

  6. Visual enhancement of images of natural resources: Applications in geology

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Neto, G.; Araujo, E. O.; Mascarenhas, N. D. A.; Desouza, R. C. M.

    1980-01-01

    The principal components technique for use in multispectral scanner (MSS) LANDSAT data processing results in optimum dimensionality reduction. A powerful tool for MSS image enhancement, the method provides a maximum impression of terrain ruggedness; this fact makes the technique well suited for geological analysis.

  7. Nano-yttria dispersed stainless steel composites composed by the 3 dimensional fiber deposition technique

    NASA Astrophysics Data System (ADS)

    Verhiest, K.; Mullens, S.; De Wispelaere, N.; Claessens, S.; DeBremaecker, A.; Verbeken, K.

    2012-09-01

    In this study, oxide dispersion strengthened (ODS) 316L steel samples were manufactured by the 3 dimensional fiber deposition (3DFD) technique. The performance of 3DFD as a colloidal consolidation technique for obtaining porous green bodies based on yttria (Y2O3) nano-slurries or pastes is discussed within this experimental work. The influence of the sintering temperature and time on sample densification and grain growth was investigated. Hot consolidation was performed to obtain final product quality in terms of residual porosity reduction and final dispersion homogeneity.

  8. Neural networks for dimensionality reduction of fluorescence spectra and prediction of drinking water disinfection by-products.

    PubMed

    Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C

    2018-06-01

    The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
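
    The autoencoder-based dimensionality reduction described in the record above can be illustrated with a minimal linear autoencoder trained by plain gradient descent. This is a sketch under stated assumptions, not the authors' network: the 200 x 64 matrix stands in for flattened fluorescence EEM spectra, and the single linear layer (no nonlinearity, no regularization) is the simplest case.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for flattened fluorescence EEM spectra: 200 samples x 64 features.
X = rng.normal(size=(200, 64))
X -= X.mean(axis=0)

d = 3  # size of the latent (encoded) representation
W_enc = rng.normal(scale=0.1, size=(64, d))
W_dec = rng.normal(scale=0.1, size=(d, 64))

def recon_error(W_enc, W_dec):
    return float(((X @ W_enc @ W_dec - X) ** 2).mean())

mse_before = recon_error(W_enc, W_dec)
lr = 0.01
for _ in range(500):            # plain gradient descent on reconstruction error
    Z = X @ W_enc               # encode to the low-dimensional representation
    err = Z @ W_dec - X         # decode and compare with the input
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse_after = recon_error(W_enc, W_dec)
print(mse_after < mse_before)   # reconstruction improves during training
```

    A linear autoencoder of this kind learns the same subspace as PCA; the nonlinear activations used in practice are what let the latent representation depart from PCA and, as the abstract notes, mitigate overfitting.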

  9. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  10. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofit with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  11. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap

    NASA Astrophysics Data System (ADS)

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-01

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
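
    The Isomap algorithm used in the record above (neighborhood graph, geodesic distances, classical MDS) can be sketched compactly in numpy. This is an illustrative re-implementation on a toy curve, not the authors' 72D molecular data; the neighbor count and Floyd-Warshall geodesics are simple choices for a small point set.

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    n = len(X)
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    # k-nearest-neighbor graph: keep only edges to each point's neighbors.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    nn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        G[i, nn[i]] = D[i, nn[i]]
        G[nn[i], i] = D[i, nn[i]]
    # Geodesic distances along the graph by Floyd-Warshall.
    for k in range(n):
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    # Classical MDS on the geodesic distance matrix.
    H = np.eye(n) - 1.0 / n
    B = -0.5 * H @ (G ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Points along a noisy-free 1D helix in 3D, as a sanity check.
t = np.linspace(0, 3 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap(X, n_neighbors=4, n_components=2)
print(Y.shape)  # (60, 2)
```

    The geodesic step is what distinguishes Isomap from plain MDS: distances are measured along the manifold rather than through the ambient space.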

  12. Light-cone reduction vs. TsT transformations: a fluid dynamics perspective

    NASA Astrophysics Data System (ADS)

    Dutta, Suvankar; Krishna, Hare

    2018-05-01

    We compute constitutive relations for a charged (2+1) dimensional Schrödinger fluid up to first order in derivative expansion, using holographic techniques. Starting with a locally boosted, asymptotically AdS, 4 + 1 dimensional charged black brane geometry, we uplift that to ten dimensions and perform TsT transformations to obtain an effective five dimensional local black brane solution with asymptotically Schrödinger isometries. By suitably implementing the holographic techniques, we compute the constitutive relations for the effective fluid living on the boundary of this space-time and extract first order transport coefficients from these relations. Schrödinger fluid can also be obtained by reducing a charged relativistic conformal fluid over light-cone. It turns out that both the approaches result the same system at the end. Fluid obtained by light-cone reduction satisfies a restricted class of thermodynamics. Here, we see that the charged fluid obtained holographically also belongs to the same restricted class.

  13. Sentinel Lymph Node Biopsy: Quantification of Lymphedema Risk Reduction

    DTIC Science & Technology

    2006-10-01

    dimensional internal mammary lymphoscintigraphy: implications for radiation therapy treatment planning for breast carcinoma. Int J Radiat Oncol Biol Phys...techniques based on conventional photon beams, intensity modulated photon beams and proton beams for therapy of intact breast. Radiother Oncol. Feb...Harris JR. Three-dimensional internal mammary lymphoscintigraphy: implications for radiation therapy treatment planning for breast carcinoma. Int J

  14. A data reduction package for multiple object spectroscopy

    NASA Technical Reports Server (NTRS)

    Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.

    1986-01-01

    Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects, which must in turn be matched by increases in data reduction capability. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.

  15. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and a Gaussian mixture model (GMM) clustering technique for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
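
    The basic stochastic ensemble Kalman filter analysis step underlying the schemes above can be sketched as follows. This is a textbook sketch, not the authors' localized/GMM variant; the three-parameter "conductivity" state and the identity observation operator are hypothetical stand-ins.

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """One stochastic EnKF analysis step. ensemble: (n_ens, n_state)."""
    n_ens = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)          # state anomalies
    HX = ensemble @ H.T                           # predicted observations
    HA = HX - HX.mean(axis=0)
    P_xy = A.T @ HA / (n_ens - 1)                 # state-observation covariance
    P_yy = HA.T @ HA / (n_ens - 1) + R            # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
    # Perturbed observations preserve the correct posterior spread.
    obs_pert = obs + rng.multivariate_normal(np.zeros(len(obs)), R, n_ens)
    return ensemble + (obs_pert - HX) @ K.T

rng = np.random.default_rng(2)
truth = np.array([1.0, -2.0, 0.5])                # hypothetical parameter values
prior = rng.normal(size=(100, 3)) * 2.0           # broad, uninformed prior ensemble
H, R = np.eye(3), 0.01 * np.eye(3)
obs = truth + rng.multivariate_normal(np.zeros(3), R)
posterior = enkf_update(prior, obs, H, R, rng)
print(np.round(posterior.mean(axis=0), 1))
```

    The Gaussian assumption enters through the sample covariances P_xy and P_yy; the paper's GMM clustering replaces this single-Gaussian update with one per mode of the parameter distribution.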

  16. Black Hole Entropy from Bondi-Metzner-Sachs Symmetry at the Horizon.

    PubMed

    Carlip, S

    2018-03-09

    Near the horizon, the obvious symmetries of a black hole spacetime-the horizon-preserving diffeomorphisms-are enhanced to a larger symmetry group with a three-dimensional Bondi-Metzner-Sachs algebra. Using dimensional reduction and covariant phase space techniques, I investigate this augmented symmetry and show that it is strong enough to determine the black hole entropy in any dimension.

  17. Chaos in plasma simulation and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, C.; Newman, D.E.; Sprott, J.C.

    1993-09-01

    We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.

  18. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing it. Conventional classification methods may not be useful without dimension reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition could be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields more separable classes and better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time compared to Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
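
    The Haar side of the comparison above is easy to sketch: each decomposition level halves the number of spectral bands by keeping pairwise averages (approximation coefficients) and discarding the detail coefficients. The pixel spectra below are synthetic stand-ins, and keeping only approximations is one simple reduction choice among several.

```python
import numpy as np

def haar_step(x):
    # One level of the Haar transform: pairwise averages and differences.
    avg = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    diff = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return avg, diff

# Hypothetical pixel spectra: 100 pixels x 128 spectral bands.
rng = np.random.default_rng(3)
spectra = rng.normal(size=(100, 128)).cumsum(axis=1)   # smooth-ish signatures

# Two decomposition levels keep only the approximation coefficients,
# reducing 128 bands to 32 while preserving the broad spectral shape.
approx, _ = haar_step(spectra)
approx, _ = haar_step(approx)
print(approx.shape)  # (100, 32)
```

    A Daubechies filter would replace the two-tap average/difference pair with longer filters, which is why it tracks polynomial trends better at a higher computational cost.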

  19. Principal Component Analysis with Incomplete Data: A simulation of R pcaMethods package in Constructing an Environmental Quality Index with Missing Data

    EPA Science Inventory

    Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...
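
    The impute-then-reduce approach mentioned above can be sketched with column-mean imputation followed by PCA via the SVD. This is the simplest imputation baseline (the kind the abstract argues is less valid as missingness grows), with synthetic data standing in for environmental indicators.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6))
X[:, 1] += X[:, 0]                    # introduce correlation between indicators
mask = rng.random(X.shape) < 0.1      # ~10% of entries missing at random
X[mask] = np.nan

# Column-mean imputation, then PCA via SVD of the centered data.
col_mean = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_mean, X)
Xc = X_imp - X_imp.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                # first two principal component scores
print(scores.shape)  # (200, 2)
```

    Mean imputation shrinks imputed rows toward the center and thus biases the component loadings; methods such as probabilistic PCA (as in pcaMethods) estimate the missing entries and the components jointly.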

  20. Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.

    PubMed

    Spiwok, Vojtěch; Králová, Blanka

    2011-12-14

    Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics.

  1. Manifold Embedding and Semantic Segmentation for Intraoperative Guidance With Hyperspectral Brain Imaging.

    PubMed

    Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong

    2017-09-01

    Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward as the high dimensionality of the data makes real-time processing challenging. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. However, existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. The proposed framework aims to overcome these problems through a process divided into two steps: dimensionality reduction based on an extension of the t-distributed stochastic neighbor embedding approach is first performed, and then a semantic segmentation technique is applied to the embedded results by using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
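
    The t-SNE family of embeddings extended in the record above can be illustrated with a stripped-down gradient-descent implementation. This sketch fixes the Gaussian bandwidth (full t-SNE tunes it per point via a perplexity search) and uses toy two-cluster data; it is not the paper's extension.

```python
import numpy as np

def tsne(X, n_components=2, sigma=1.0, lr=100.0, n_iter=300, seed=0):
    n = len(X)
    # High-dimensional affinities: symmetrized Gaussian kernel
    # (fixed bandwidth here; real t-SNE tunes sigma per point).
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    P = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(P, 0)
    P = P / P.sum()
    P = (P + P.T) / 2
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(n, n_components))
    for _ in range(n_iter):
        d2 = ((Y[:, None] - Y[None]) ** 2).sum(-1)
        num = 1.0 / (1.0 + d2)          # Student-t kernel in the embedding
        np.fill_diagonal(num, 0)
        Q = num / num.sum()
        # Gradient of KL(P || Q) with respect to the embedding coordinates.
        W = (P - Q) * num
        grad = 4 * ((np.diag(W.sum(1)) - W) @ Y)
        Y -= lr * grad
    return Y

rng = np.random.default_rng(5)
X = np.r_[rng.normal(0, 0.5, (30, 10)), rng.normal(4, 0.5, (30, 10))]
Y = tsne(X)
print(Y.shape)  # (60, 2)
```

    The heavy-tailed Student-t kernel in the low-dimensional space is what relieves the crowding problem that plain stochastic neighbor embedding suffers from.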

  2. Drug-target interaction prediction using ensemble learning and dimensionality reduction.

    PubMed

    Ezzat, Ali; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-10-01

    Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction. Copyright © 2017 Elsevier Inc. All rights reserved.
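
    The three-step pipeline described above (feature subspacing, dimensionality reduction, homogeneous base learners with aggregated scores) can be sketched end to end. All data here are synthetic stand-ins for drug-target feature vectors, and ridge regression replaces the paper's Decision Tree / Kernel Ridge Regression learners for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical drug-target pairs: 300 samples x 40 features, binary labels.
X = rng.normal(size=(300, 40))
y = (X @ rng.normal(size=40) + 0.5 * rng.normal(size=300) > 0).astype(float)
train, test = np.arange(200), np.arange(200, 300)

def fit_ridge(Z, t, lam=1.0):
    # Ridge regression base learner (a stand-in for the paper's learners).
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ t)

scores = np.zeros(len(test))
n_learners = 10
for _ in range(n_learners):
    # 1) feature subspacing injects diversity into the ensemble;
    # 2) PCA reduces the subspaced features; 3) a base learner is trained.
    feats = rng.choice(40, size=20, replace=False)
    Xs = X[:, feats]
    Xc = Xs - Xs[train].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc[train], full_matrices=False)
    Z = Xc @ Vt[:10].T                 # project onto the top 10 components
    w = fit_ridge(Z[train], y[train])
    scores += Z[test] @ w              # aggregate scores across learners
scores /= n_learners

# AUC: probability that a random positive outscores a random negative.
pos, neg = scores[y[test] == 1], scores[y[test] == 0]
auc = float((pos[:, None] > neg[None, :]).mean())
print(round(auc, 2))
```

    Fitting the PCA basis on the training rows only, then applying it to the test rows, mirrors the evaluation discipline of the paper's cross-validation.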

  3. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

    In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
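
    The correlation dimension mentioned among the analysis tools above estimates attractor dimensionality from the scaling of the pair-count statistic with radius (the Grassberger-Procaccia approach). Below is a minimal sketch validated on a known 1-dimensional set, not on plasma data; the radius range is a hand-picked assumption.

```python
import numpy as np

def correlation_sum(X, r):
    # Fraction of point pairs closer than radius r.
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    return (D[iu] < r).mean()

def correlation_dimension(X, radii):
    # Slope of log C(r) versus log r estimates the set's dimension.
    C = np.array([correlation_sum(X, r) for r in radii])
    keep = C > 0
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(C[keep]), 1)
    return slope

rng = np.random.default_rng(7)
# Points on a circle (a 1-dimensional set) embedded in 3D, as a sanity check.
t = rng.uniform(0, 2 * np.pi, 1000)
X = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
radii = np.logspace(-1.5, -0.5, 10)
print(round(float(correlation_dimension(X, radii)), 1))
```

    A low, non-integer slope from a measured time series (after delay embedding) is the signature of low dimensional chaos; a slope that grows with embedding dimension, as reported for the experimental RFP signals, points to noise-like high dimensionality.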

  4. Dimensional Stabilization of Wood In Use

    Treesearch

    R. M. Rowell; R. L. Youngs

    1981-01-01

    Many techniques have been devised to reduce the tendency of wood to change dimensions in contact with moisture. Treatments such as cross-lamination, water-resistant coatings, hygroscopicity reduction, crosslinking, and bulking are reviewed and recommendations for future research are given.

  5. Comparison of Neck Screw and Conventional Fixation Techniques in Mandibular Condyle Fractures Using 3-Dimensional Finite Element Analysis.

    PubMed

    Conci, Ricardo Augusto; Tomazi, Flavio Henrique Silveira; Noritomi, Pedro Yoshito; da Silva, Jorge Vicente Lopes; Fritscher, Guilherme Genehr; Heitz, Claiton

    2015-07-01

    To compare the mechanical stress on the mandibular condyle after the reduction and fixation of mandibular condylar fractures using the neck screw and 2 other conventional techniques according to 3-dimensional finite element analysis. A 3-dimensional finite element model of a mandible was created and graphically simulated on a computer screen. The model was fixed with 3 different techniques: a 2.0-mm plate with 4 screws, 2 plates (1 1.5-mm plate and 1 2.0-mm plate) with 4 screws, and a neck screw. Loads were applied that simulated muscular action, with restrictions of the upper movements of the mandible, differentiation of the cortical and medullary bone, and the virtual "folds" of the plates and screws so that they could adjust to the condylar surface. Afterward, the data were exported for graphic visualization of the results and quantitative analysis was performed. The 2-plate technique exhibited better stability in regard to displacement of fractures, deformity of the synthesis materials, and minimum and maximum tension values. The results with the neck screw were satisfactory and were similar to those found when a miniplate was used. Although the study shows that 2 isolated plates yielded better results compared with the other groups using other fixation systems and methods, the neck screw could be an option for condylar fracture reduction. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Application of computer-aided three-dimensional skull model with rapid prototyping technique in repair of zygomatico-orbito-maxillary complex fracture.

    PubMed

    Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu

    2009-06-01

    With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma was promoted to benefit treatment, repair of maxillofacial fractures and reconstruction of maxillofacial defects. For a patient with zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed by using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging and the 3D skull model was reproduced using the RP technique. In line with the design by Mimics, presurgery was performed on the 3D skull model and the semi-coronal incision was taken for reduction of ZOMC fracture, based on the outcome from the presurgery. Postoperative CT and images revealed significantly modified zygomatic collapse and zygomatic arch rise and well-modified facial symmetry. The CAD/CAM and RP technique is a relatively useful tool that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fracture.

  7. Exploring nonlinear feature space dimension reduction and data representation in breast Cadx with Laplacian eigenmaps and t-SNE.

    PubMed

    Jamieson, Andrew R; Giger, Maryellen L; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput. 15, 1373-1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579-2605 (2008)]. These methods attempt to map originally high dimensional feature spaces to more human interpretable lower dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination (ARD) and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier's AUC performance. In the large U.S. data set, sample high-performance results include AUC0.632+ = 0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD selected features and AUC0.632+ = 0.87 with interval [0.817;0.906] for four LSW selected features, compared to 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+ = 0.90 with interval [0.847;0.919], all using the MCMC-BANN. Preliminary results appear to indicate capability for the new methods to match or exceed classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement of feature selection in CADx problems, DR techniques offer a complementary approach, which can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower dimensional representations for visual interpretation, revealing intricate data structure of the feature space.
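
    The Laplacian eigenmaps method cited in the record above reduces to a small eigenproblem on the graph Laplacian of a neighborhood graph. Below is a minimal sketch with 0/1 edge weights and synthetic stand-in lesion features; the published method also supports heat-kernel weights.

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=8, n_components=2):
    n = len(X)
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    # Symmetric k-nearest-neighbor adjacency with simple 0/1 weights.
    W = np.zeros((n, n))
    nn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        W[i, nn[i]] = 1.0
    W = np.maximum(W, W.T)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                     # graph Laplacian
    # Solve the generalized problem L v = lambda D v via symmetric normalization.
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    L_sym = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)
    # Skip the trivial zero-eigenvalue vector; map back to the generalized problem.
    return (vecs[:, 1:n_components + 1].T * d_inv_sqrt).T

rng = np.random.default_rng(8)
# Stand-in for computer-extracted lesion features: 120 cases x 30 features.
X = rng.normal(size=(120, 30))
Y = laplacian_eigenmaps(X)
print(Y.shape)  # (120, 2)
```

    Because only the smallest nontrivial eigenvectors are needed, the embedding cost is dominated by the neighborhood search, which is one reason it scales well to the case counts quoted in the abstract.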

  8. Three-dimensional mapping of the lateral ventricles in autism

    PubMed Central

    Vidal, Christine N.; Nicolson, Rob; Boire, Jean-Yves; Barra, Vincent; DeVito, Timothy J.; Hayashi, Kiralee M.; Geaga, Jennifer A.; Drost, Dick J.; Williamson, Peter C.; Rajakumar, Nagalingam; Toga, Arthur W.; Thompson, Paul M.

    2009-01-01

    In this study, a computational mapping technique was used to examine the three-dimensional profile of the lateral ventricles in autism. T1-weighted three-dimensional magnetic resonance images of the brain were acquired from 20 males with autism (age: 10.1 ± 3.5 years) and 22 male control subjects (age: 10.7 ± 2.5 years). The lateral ventricles were delineated manually and ventricular volumes were compared between the two groups. Ventricular traces were also converted into statistical three-dimensional maps, based on anatomical surface meshes. These maps were used to visualize regional morphological differences in the thickness of the lateral ventricles between patients and controls. Although ventricular volumes measured using traditional methods did not differ significantly between groups, statistical surface maps revealed subtle, highly localized reductions in ventricular size in patients with autism in the left frontal and occipital horns. These localized reductions in the lateral ventricles may result from exaggerated brain growth early in life. PMID:18502618

  9. Biased normalized cuts for target detection in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.

    2016-05-01

    The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
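
The key point above, encoding target information after dimensionality reduction, can be illustrated with a small numpy sketch. The data, the Gaussian affinity, and the seed indices below are invented assumptions, not the SHARE 2012 pipeline:

```python
import numpy as np

# Toy sketch: embed "pixels" once with the normalized graph Laplacian, then
# bias the result toward user-supplied target seeds afterwards, so choosing
# a new target needs no re-embedding. All sizes and data are toy assumptions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 3)),
               rng.normal(5, 1, (10, 3))])        # 20 "pixels", 3 "bands"

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)                             # Gaussian affinity
D = W.sum(1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(D, D))  # normalized Laplacian

vals, vecs = np.linalg.eigh(L)                    # ascending eigenvalues
U = vecs[:, 1:4]                                  # skip the trivial eigenvector

seeds = [0, 1]                                    # analyst-labelled target pixels
s = U[seeds].mean(0)                              # bias vector from the seeds
score = U @ s                                     # high score = "like the seeds"
```

Swapping `seeds` re-ranks the scene interactively, since `U` is computed once.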

  10. Assessment of metal artifact reduction methods in pelvic CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdoli, Mehrsima; Mehranian, Abolfazl; Ailianou, Angeliki

    2016-04-15

    Purpose: Metal artifact reduction (MAR) produces images with improved quality, potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI fell behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
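
The one-dimensional LI step is simple enough to sketch: bins inside the metal trace are replaced, view by view, by linear interpolation from their uncorrupted neighbours. The sinogram size and metal mask here are toy assumptions:

```python
import numpy as np

# Toy sinogram: 4 views x 16 detector bins, with a smooth underlying signal.
sino = np.tile(np.linspace(0.0, 1.0, 16), (4, 1))
metal = np.zeros_like(sino, dtype=bool)
metal[:, 6:10] = True                  # bins shadowed by the implant
sino_corrupt = sino.copy()
sino_corrupt[metal] = 10.0             # bright metal trace

# LI-style MAR: interpolate the corrupted bins from uncorrupted ones, per view.
bins = np.arange(sino.shape[1])
mar = sino_corrupt.copy()
for i in range(sino.shape[0]):
    m = metal[i]
    mar[i, m] = np.interp(bins[m], bins[~m], sino_corrupt[i, ~m])
```

Because the toy signal is linear across the gap, interpolation recovers it exactly; real anatomy is not linear there, which is exactly why LI trails the other methods.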

  11. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standards samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
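
A minimal sketch of the SVD-BC idea, assuming the background basis is taken from a blank run's leading singular vectors (matrix shapes and the rank are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
background = np.outer(np.exp(-t), np.ones(32))              # smooth drift, rank 1
peak = np.outer(np.exp(-((t - 0.5) ** 2) / 0.001), rng.random(32))
sample = background + peak                                  # chromatogram + background

# Background subspace from the blank's leading singular vectors
U, s, Vt = np.linalg.svd(background, full_matrices=False)
k = 1
B = U[:, :k]

# Project the background subspace out of the sample data
corrected = sample - B @ (B.T @ sample)
```

The projection removes everything in the blank's subspace; any signal overlapping that subspace is attenuated too, which is the intensity-loss trade-off the abstract discusses.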

  12. Intraoperative fluoroscopic evaluation of screw placement during pelvic and acetabular surgery.

    PubMed

    Yi, Chengla; Burns, Sean; Hak, David J

    2014-01-01

    The surgical treatment of pelvic and acetabular fractures can be technically challenging. Various techniques are available for the reconstruction of pelvic and acetabular fractures. Less invasive percutaneous fracture stabilization techniques, with closed reduction or limited open reduction, have been developed and are gaining popularity in the management of pelvic and acetabular fractures. These techniques require knowledge and interpretation of various fluoroscopic images to ensure appropriate and safe screw placement. Given the anatomic complexity of the intrapelvic structures and the 2-dimensional nature of standard fluoroscopy, multiple images oriented in different planes are needed to assess the accuracy of guide wire and screw placement. This article reviews the fluoroscopic imaging of common screw orientations during pelvic and acetabular surgery.

  13. Techniques for increasing the efficiency of Earth gravity calculations for precision orbit determination

    NASA Technical Reports Server (NTRS)

    Smith, R. L.; Lyubomirsky, A. S.

    1981-01-01

    Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
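
The trade described above, cheap polynomial evaluation in place of a costly force model within each cell, can be sketched in one dimension; the function and degree are toy stand-ins for the geopotential, not the actual gravity model:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit the "expensive" function once on a cell, then evaluate the cheap
# Chebyshev expansion instead of the function itself.
f = lambda x: np.sin(3 * x) + 0.5 * x ** 2
xs = np.cos(np.pi * (np.arange(33) + 0.5) / 33)   # Chebyshev nodes on [-1, 1]
coef = C.chebfit(xs, f(xs), deg=16)

x = np.linspace(-1, 1, 100)
err = np.max(np.abs(C.chebval(x, coef) - f(x)))   # approximation error on the cell
```

For smooth functions the coefficients decay rapidly, which is why a modest-degree expansion per cell can replace the full model at the cost of storing the coefficients, mirroring the CPU-time-for-storage trade in the abstract.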

  14. Near-field acoustical holography of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Wall, Alan T.; Gee, Kent L.; Neilsen, Tracianne; Krueger, David W.; Sommerfeldt, Scott D.; James, Michael M.

    2010-10-01

    Noise radiated from high-performance military jet aircraft poses a hearing-loss risk to personnel. Accurate characterization of jet noise can assist in noise prediction and noise reduction techniques. In this work, sound pressure measurements were made in the near field of an F-22 Raptor. With more than 6000 measurement points, this is the most extensive near-field measurement of a high-performance jet to date. A technique called near-field acoustical holography has been used to propagate the complex pressure from a two-dimensional plane to a three-dimensional region in the jet vicinity. Results will be shown and what they reveal about jet noise characteristics will be discussed.

  15. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    NASA Astrophysics Data System (ADS)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy; they can generate a two- or three-dimensional map that represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC map and perfusion map. It was also observed that the embedded scattergram of abnormal (infarcted or at risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.

  16. Detection of Epistasis for Flowering Time Using Bayesian Multilocus Estimation in a Barley MAGIC Population

    PubMed Central

    Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J.

    2018-01-01

    Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and needs the application of dimensionality reduction techniques. Flowering Time (FT) in crops is a complex trait that is known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using the Bayesian multilocus model. The MAGIC barley population was generated from intercrossing among eight parental lines and thus, offered greater genetic diversity to detect higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach to detect high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with known candidate genes that have been already reported in barley and closely related species for the FT trait. PMID:29254994
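
The screening step itself is simple in spirit: rank predictors by marginal correlation with the trait and keep only the top few before fitting the multilocus model. A hedged numpy sketch with invented genotype-like data:

```python
import numpy as np

# Sure Independence Screening (SIS) sketch: marginal-correlation ranking
# shrinks a huge predictor set before any joint (epistasis) modelling.
# Sample sizes, effect sizes, and noise level are toy assumptions.
rng = np.random.default_rng(2)
n, p = 100, 1000
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[3, 7, 42]] = 2.0                              # three truly active markers
y = X @ beta + 0.1 * rng.standard_normal(n)

corr = np.abs((X - X.mean(0)).T @ (y - y.mean())) / (X.std(0) * y.std() * n)
keep = np.argsort(corr)[::-1][:20]                  # screened feature set
```

Only the 20 screened markers (and their interactions) would then enter the Bayesian multilocus model, instead of all 1000.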

  17. Coarse-grained mechanics of viral shells

    NASA Astrophysics Data System (ADS)

    Klug, William S.; Gibbons, Melissa M.

    2008-03-01

    We present an approach for creating three-dimensional finite element models of viral capsids from atomic-level structural data (X-ray or cryo-EM). The models capture heterogeneous geometric features and are used in conjunction with three-dimensional nonlinear continuum elasticity to simulate nanoindentation experiments as performed using atomic force microscopy. The method is extremely flexible, able to capture varying levels of detail in the three-dimensional structure. Nanoindentation simulations are presented for several viruses: Hepatitis B, CCMV, HK97, and φ29. In addition to purely continuum elastic models, a multiscale technique is developed that combines finite-element kinematics with MD energetics such that large-scale deformations are facilitated by a reduction in degrees of freedom. Simulations of these capsid deformation experiments provide a testing ground for the techniques, as well as insight into the strength-determining mechanisms of capsid deformation. These methods can be extended as a framework for modeling other proteins and macromolecular structures in cell biology.

  18. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  19. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  20. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE

    PubMed Central

    Jamieson, Andrew R.; Giger, Maryellen L.; Drukker, Karen; Li, Hui; Yuan, Yading; Bhooshan, Neha

    2010-01-01

    Purpose: In this preliminary study, recently developed unsupervised nonlinear dimension reduction (DR) and data representation techniques were applied to computer-extracted breast lesion feature spaces across three separate imaging modalities: Ultrasound (U.S.) with 1126 cases, dynamic contrast enhanced magnetic resonance imaging with 356 cases, and full-field digital mammography with 245 cases. Two methods for nonlinear DR were explored: Laplacian eigenmaps [M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Comput. 15, 1373–1396 (2003)] and t-distributed stochastic neighbor embedding (t-SNE) [L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res. 9, 2579–2605 (2008)]. Methods: These methods attempt to map originally high-dimensional feature spaces to more human-interpretable lower-dimensional spaces while preserving both local and global information. The properties of these methods as applied to breast computer-aided diagnosis (CADx) were evaluated in the context of malignancy classification performance as well as in the visual inspection of the sparseness within the two-dimensional and three-dimensional mappings. Classification performance was estimated by using the reduced dimension mapped feature output as input into both linear and nonlinear classifiers: Markov chain Monte Carlo based Bayesian artificial neural network (MCMC-BANN) and linear discriminant analysis. The new techniques were compared to previously developed breast CADx methodologies, including automatic relevance determination and linear stepwise (LSW) feature selection, as well as a linear DR method based on principal component analysis. Using ROC analysis and 0.632+ bootstrap validation, 95% empirical confidence intervals were computed for each classifier’s AUC performance. Results: In the large U.S. 
data set, sample high-performance results include AUC0.632+=0.88 with 95% empirical bootstrap interval [0.787;0.895] for 13 ARD-selected features and AUC0.632+=0.87 with interval [0.817;0.906] for four LSW-selected features, compared to a 4D t-SNE mapping (from the original 81D feature space) giving AUC0.632+=0.90 with interval [0.847;0.919], all using the MCMC-BANN. Conclusions: Preliminary results appear to indicate that the new methods can match or exceed the classification performance of current advanced breast lesion CADx algorithms. While not appropriate as a complete replacement for feature selection in CADx problems, DR techniques offer a complementary approach that can aid elucidation of additional properties associated with the data. Specifically, the new techniques were shown to possess the added benefit of delivering sparse lower-dimensional representations for visual interpretation, revealing the intricate data structure of the feature space. PMID:20175497
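
The AUC figures quoted above reduce to the Mann-Whitney statistic on classifier scores; a minimal implementation is sketched below (the 0.632+ bootstrap wrapper around it is not reproduced):

```python
import numpy as np

def auc(scores, labels):
    # Mann-Whitney formulation of the ROC area: the probability that a
    # randomly chosen positive case outscores a randomly chosen negative one,
    # with ties counted as one half.
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

For example, `auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])` gives 0.75, since three of the four positive-negative pairs are correctly ordered.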

  1. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks capable of computing a principal component extraction has been observed. Despite this interest, the…
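
For reference, the PCA extraction the record refers to is the classical covariance eigen-decomposition; a small numpy sketch on invented two-variable data:

```python
import numpy as np

# Classical PCA: centre the data, eigen-decompose the sample covariance,
# and project onto the leading eigenvector. Data are a toy assumption.
rng = np.random.default_rng(5)
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])

Xc = X - X.mean(0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]
scores = Xc @ vecs[:, order][:, :1]        # first principal component scores
explained = vals[order][0] / vals.sum()    # variance-explained ratio
```

The neural-network formulations the record discusses (e.g. Oja-type rules) converge toward this same leading eigenvector.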

  2. Use of a real-size 3D-printed model as a preoperative and intraoperative tool for minimally invasive plating of comminuted midshaft clavicle fractures.

    PubMed

    Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol

    2015-06-10

    Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, sometimes it is difficult to achieve anatomic reduction by open reduction technique in cases with comminution. We describe a novel technique using a real-size, three-dimensionally (3D) printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed as real-size clavicle models. Using the mirror imaging technique, the uninjured-side clavicle is 3D printed as a mirrored, opposite-side model to produce a suitable replica of the fractured-side clavicle pre-injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate accurate anatomical replicas of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by open reduction technique. Level of evidence V.

  3. Network embedding-based representation learning for single cell RNA-seq data.

    PubMed

    Li, Xiangyu; Chen, Weizheng; Chen, Yang; Zhang, Xuegong; Gu, Jin; Zhang, Michael Q

    2017-11-02

    Single cell RNA-seq (scRNA-seq) techniques can reveal valuable insights into cell-to-cell heterogeneity. Projection of high-dimensional data into a low-dimensional subspace is a powerful strategy in general for mining such big data. However, scRNA-seq suffers from higher noise and lower coverage than traditional bulk RNA-seq, hence bringing in new computational difficulties. One major challenge is how to deal with the frequent drop-out events. These events, usually caused by the stochastic burst effect in gene transcription and the technical failure of RNA transcript capture, often render traditional dimension reduction methods inefficient. To overcome this problem, we have developed a novel Single Cell Representation Learning (SCRL) method based on network embedding. This method can efficiently implement data-driven non-linear projection and incorporate prior biological knowledge (such as pathway information) to learn more meaningful low-dimensional representations for both cells and genes. Benchmark results show that SCRL outperforms other dimensional reduction methods on several recent scRNA-seq datasets. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.

  5. AHIMSA - Ad hoc histogram information measure sensing algorithm for feature selection in the context of histogram inspired clustering techniques

    NASA Technical Reports Server (NTRS)

    Dasarathy, B. V.

    1976-01-01

    An algorithm is proposed for dimensionality reduction in the context of clustering techniques based on histogram analysis. The approach is based on an evaluation of the hills and valleys in the unidimensional histograms along the different features and provides an economical means of assessing the significance of the features in a nonparametric unsupervised data environment. The method has relevance to remote sensing applications.
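
The hills-and-valleys idea can be caricatured by counting strict local maxima in a feature's 1D histogram: a feature whose histogram shows distinct hills separated by a valley is more informative for clustering than a unimodal one. The data and the crude peak rule below are assumptions for illustration, not the paper's information measure:

```python
import numpy as np

rng = np.random.default_rng(3)
bimodal = np.concatenate([rng.normal(-3, 0.5, 500),
                          rng.normal(3, 0.5, 500)])   # a discriminative feature
unimodal = rng.normal(0, 1, 1000)                     # an uninformative one

def n_hills(x, bins=20):
    h, _ = np.histogram(x, bins=bins)
    # count interior bins strictly higher than both neighbours
    return int(np.sum((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])))
```

Features would then be ranked by such a histogram-structure score before clustering, all without labels, matching the unsupervised setting of the abstract.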

  6. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting: both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
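
The gist of likelihood-informed parameter reduction can be shown on a toy model: eigen-decompose the prior-averaged outer product of log-likelihood gradients and keep only the dominant directions. The 2D model below, in which the data constrain only x0 + x1, is an invented assumption, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(6)

def grad_loglik(x):
    # gradient of -(x0 + x1 - 1)^2 / 2: informative along (1, 1) only
    return np.array([1.0, 1.0]) * (1.0 - (x[0] + x[1]))

samples = rng.standard_normal((500, 2))            # draws from the prior
G = np.mean([np.outer(g, g) for g in map(grad_loglik, samples)], axis=0)
vals, vecs = np.linalg.eigh(G)                     # ascending eigenvalues
informed = vecs[:, -1]                             # spans the 1D informed subspace
```

Sampling then proceeds in the low-dimensional informed subspace while the complement keeps its prior, mirroring the two-factor posterior approximation in the abstract.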

  7. Supervised linear dimensionality reduction with robust margins for object recognition

    NASA Astrophysics Data System (ADS)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques relied on the use of local margins in order to get a good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explored the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.

  8. New bandwidth selection criterion for Kernel PCA: approach to dimensionality reduction and classification problems.

    PubMed

    Thomas, Minta; De Brabanter, Kris; De Moor, Bart

    2014-05-10

    DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main inputs to these class predictors are high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well-tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well-tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy with an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. 
Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier that predicts the classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively low time complexity.
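
A bare-bones RBF kernel PCA, for orientation only; the paper's data-driven bandwidth criterion is not reproduced, and the sigma below is a hand-picked assumption:

```python
import numpy as np

# Two concentric rings: a case where linear PCA cannot separate the classes
# but kernel PCA with an RBF kernel can. Data and sigma are toy assumptions.
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 100)
ring = np.c_[np.cos(theta), np.sin(theta)]
X = np.vstack([1.0 * ring + 0.05 * rng.standard_normal((100, 2)),
               3.0 * ring + 0.05 * rng.standard_normal((100, 2))])

sigma = 1.0
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * sigma ** 2))                # RBF kernel matrix
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                    # centre in feature space
vals, vecs = np.linalg.eigh(Kc)
Z = vecs[:, ::-1][:, :2] * np.sqrt(np.clip(vals[::-1][:2], 0, None))
```

`Z` would then feed a downstream classifier (an LS-SVM in the paper); the bandwidth `sigma` is precisely the parameter the proposed criterion tunes.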

  9. Data Mining Methods for Recommender Systems

    NASA Astrophysics Data System (ADS)

    Amatriain, Xavier; Jaimes, Alejandro; Oliver, Nuria; Pujol, Josep M.

    In this chapter, we give an overview of the main Data Mining techniques used in the context of Recommender Systems. We first describe common preprocessing methods such as sampling or dimensionality reduction. Next, we review the most important classification techniques, including Bayesian Networks and Support Vector Machines. We describe the k-means clustering algorithm and discuss several alternatives. We also present association rules and related algorithms for an efficient training process. In addition to introducing these techniques, we survey their uses in Recommender Systems and present cases where they have been successfully applied.
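
Of the techniques surveyed, k-means is the quickest to sketch (Lloyd's algorithm). The data and the deliberately simple one-seed-per-blob initialization are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(4, 0.3, (50, 2))])     # two well-separated blobs
centers = X[[0, 50]].copy()                      # one starting point per blob

for _ in range(10):
    # assign each point to its nearest center, then recompute the means
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(0) for j in range(2)])
```

In a recommender, the rows of `X` would be user or item profiles, and cluster assignments restrict neighbourhood search to within-cluster candidates.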

  10. Clinical Application of a Hybrid RapidArc Radiotherapy Technique for Locally Advanced Lung Cancer.

    PubMed

    Silva, Scott R; Surucu, Murat; Steber, Jennifer; Harkenrider, Matthew M; Choi, Mehee

    2017-04-01

    Radiation treatment planning for locally advanced lung cancer can be technically challenging, as delivery of ≥60 Gy to large volumes with concurrent chemotherapy is often associated with significant risk of normal tissue toxicity. We clinically implemented a novel hybrid RapidArc technique in patients with lung cancer and compared these plans with 3-dimensional conformal radiotherapy and RapidArc-only plans. Hybrid RapidArc was used to treat 11 patients with locally advanced lung cancer having bulky mediastinal adenopathy. All 11 patients received concurrent chemotherapy. All underwent a 4-dimensional computed tomography planning scan. Hybrid RapidArc plans concurrently combined static (60%) and RapidArc (40%) beams. All cases were replanned using 3- to 5-field 3-dimensional conformal radiotherapy and RapidArc technique as controls. Significant reductions in dose were observed in hybrid RapidArc plans compared to 3-dimensional conformal radiotherapy plans for total lung V20 and mean dose (-2% and -0.6 Gy); contralateral lung mean dose (-2.92 Gy); and esophagus V60 and mean dose (-16.0% and -2.2 Gy; all P < .05). Contralateral lung doses were significantly lower for hybrid RapidArc plans compared to RapidArc-only plans (all P < .05). Compared to 3-dimensional conformal radiotherapy, heart V60 and mean dose were significantly improved with hybrid RapidArc (3% vs 5%, P = .04 and 16.32 Gy vs 16.65 Gy, P = .03). However, heart V40 and V45 and maximum spinal cord dose were significantly lower with RapidArc plans compared to hybrid RapidArc plans. Conformity and homogeneity were significantly better with hybrid RapidArc plans compared to 3-dimensional conformal radiotherapy plans (P < .05). Treatment was well tolerated, with no grade 3+ toxicities. To our knowledge, this is the first report on the clinical application of hybrid RapidArc in patients with locally advanced lung cancer. 
Hybrid RapidArc permitted safe delivery of 60 to 66 Gy to large lung tumors with concurrent chemotherapy and demonstrated advantages for reduction in low-dose lung volumes, esophageal dose, and mean heart dose.

  11. Generalizations of the Toda molecule

    NASA Astrophysics Data System (ADS)

    Van Velthoven, W. P. G.; Bais, F. A.

    1986-12-01

    Finite-energy monopole solutions are constructed for the self-dual equations with spherical symmetry in an arbitrary integer graded Lie algebra. The constraint of spherical symmetry in a complex noncoordinate basis leads to a dimensional reduction. The resulting two-dimensional ( r, t) equations are of second order and furnish new generalizations of the Toda molecule equations. These are then solved by a technique which is due to Leznov and Saveliev. For time-independent solutions a further reduction is made, leading to an ansatz for all SU(2) embeddings of the Lie algebra. The regularity condition at the origin for the solutions, needed to ensure finite energy, is also solved for a special class of nonmaximal embeddings. Explicit solutions are given for the groups SU(2), SO(4), Sp(4) and SU(4).

  12. Toward On-Demand Deep Brain Stimulation Using Online Parkinson's Disease Prediction Driven by Dynamic Detection.

    PubMed

    Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas

    2017-12-01

    In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
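
    The pipeline described above can be caricatured in a few lines. This sketch is illustrative only: the maximum ratio method (MRM) is the paper's novel contribution and is not reproduced here; instead a one-level Haar DWT feature extractor is paired with a plain k-nearest-neighbour vote on synthetic "rest" and "tremor" signals.

```python
import numpy as np

def haar_dwt_features(signal):
    """One level of the Haar discrete wavelet transform.

    Returns summary statistics (mean absolute value, standard deviation)
    of the approximation and detail coefficients as a feature vector.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.array([np.mean(np.abs(approx)), np.std(approx),
                     np.mean(np.abs(detail)), np.std(detail)])

def knn_classify(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Toy data: low-frequency "rest" signals vs. noisy high-frequency "tremor".
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
rest = [np.sin(2*np.pi*2*t) + 0.05*rng.standard_normal(256) for _ in range(10)]
tremor = [np.sin(2*np.pi*40*t) + 0.05*rng.standard_normal(256) for _ in range(10)]
X = np.array([haar_dwt_features(s) for s in rest + tremor])
y = np.array([0]*10 + [1]*10)

test = np.sin(2*np.pi*40*t) + 0.05*rng.standard_normal(256)
print(knn_classify(X, y, haar_dwt_features(test)))   # 1 (tremor class)
```

    The detail-coefficient statistics separate the two classes because the high-frequency component survives the Haar differencing while the slow component largely cancels.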

  13. A vector scanning processing technique for pulsed laser velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1989-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct), data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined. The direct space domain processing technique was found to be far superior to any other techniques known, in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.

  14. Effective dimensional reduction algorithm for eigenvalue problems for thin elastic structures: A paradigm in three dimensions

    PubMed Central

    Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.

    2000-01-01

    We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure, when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e. the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469

  15. Marine geodetic control for geoidal profile mapping across the Puerto Rican Trench

    NASA Technical Reports Server (NTRS)

    Fubara, D. M.; Mourad, A. G.

    1975-01-01

    A marine geodetic control was established for the northern end of the geoidal profile mapping experiment across the Puerto Rican Trench by determining the three-dimensional geodetic coordinates of the four ocean-bottom mounted acoustic transponders. The data reduction techniques employed and analytical processes involved are described. Before being applied to the field data, the analytical techniques were tested with simulated data and proven effective in theory as well as in practice.

  16. Active Subspaces for Wind Plant Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N; Quick, Julian; Dykes, Katherine L

    Understanding the uncertainty in wind plant performance is crucial to cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
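
    The active-subspace construction can be sketched in a few lines: estimate C = E[∇f ∇fᵀ] by Monte Carlo, eigendecompose it, and look for a spectral gap. The toy model and its analytic gradient below are invented for illustration; they stand in for the wind-plant simulator, whose gradients would come from the model itself.

```python
import numpy as np

# Toy stand-in for the plant model: a 5-input function that varies only
# along one direction w, with an analytic gradient.
rng = np.random.default_rng(1)
w = np.array([0.5, 0.5, 0.5, 0.3, 0.4])          # unit-norm direction
f = lambda x: np.sin(x @ w)
grad_f = lambda x: np.cos(x @ w) * w

# Active subspace: eigendecomposition of C = E[grad f grad f^T],
# estimated by Monte Carlo over the input distribution.
X = rng.uniform(-1, 1, size=(500, 5))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(G)
eigval, eigvec = np.linalg.eigh(C)               # ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # make it descending

# A large spectral gap after the first eigenvalue signals a 1-D active
# subspace; here the gap is essentially exact by construction.
print(eigval[1] / eigval[0])

# Quadratic surrogate in the single active variable s = x . u1.
u1 = eigvec[:, 0]
s = X @ u1
coeffs = np.polyfit(s, [f(x) for x in X], deg=2)
```

    With the single active direction in hand, the surrogate is a polynomial in one variable instead of five, which is the dimension reduction the abstract describes.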

  17. Classification enhancement for post-stroke dementia using fuzzy neighborhood preserving analysis with QR-decomposition.

    PubMed

    Al-Qazzaz, Noor Kamal; Ali, Sawal; Ahmad, Siti Anom; Escudero, Javier

    2017-07-01

    The aim of the present study was to discriminate the electroencephalogram (EEG) of 5 patients with vascular dementia (VaD), 15 patients with stroke-related mild cognitive impairment (MCI), and 15 control normal subjects during a working memory (WM) task. We used independent component analysis (ICA) and wavelet transform (WT) as a hybrid preprocessing approach for EEG artifact removal. Three different features were extracted from the cleaned EEG signals: spectral entropy (SpecEn), permutation entropy (PerEn) and Tsallis entropy (TsEn). Two classification schemes were applied - support vector machine (SVM) and k-nearest neighbors (kNN) - with fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR) as a dimensionality reduction technique. The FNPAQR dimensionality reduction technique increased the SVM classification accuracy from 82.22% to 90.37% and from 82.6% to 86.67% for kNN. These results suggest that FNPAQR consistently improves the discrimination of VaD, MCI patients and control normal subjects and it could be a useful feature selection to help the identification of patients with VaD and MCI.
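
    Of the entropy features listed, permutation entropy is easy to show concretely. A minimal sketch using the standard Bandt-Pompe definition (the `order` and `delay` values here are the usual defaults, not parameters from the study):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (standard Bandt-Pompe definition).

    Counts ordinal patterns of length `order` in the series and returns
    their Shannon entropy normalized by log(order!) to lie in [0, 1].
    """
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i : i + order * delay : delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / log(factorial(order)))

# A monotone ramp has a single ordinal pattern (entropy 0); white noise
# visits all patterns roughly equally (entropy close to 1).
print(permutation_entropy(np.arange(100)))
print(permutation_entropy(np.random.default_rng(2).standard_normal(5000)))
```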

  18. Machine Learning Based Dimensionality Reduction Facilitates Ligand Diffusion Paths Assessment: A Case of Cytochrome P450cam.

    PubMed

    Rydzewski, J; Nowak, W

    2016-04-12

    In this work we propose an application of a nonlinear dimensionality reduction method to represent the high-dimensional configuration space of the ligand-protein dissociation process in a manner facilitating interpretation. Rugged ligand expulsion paths are mapped into 2-dimensional space. The mapping retains the main structural changes occurring during the dissociation. The topological similarity of the reduced paths may be easily studied using the Fréchet distances, and we show that this measure facilitates machine learning classification of the diffusion pathways. Further, low-dimensional configuration space allows for identification of residues active in transport during the ligand diffusion from a protein. The utility of this approach is illustrated by examination of the configuration space of cytochrome P450cam involved in expulsing camphor by means of enhanced all-atom molecular dynamics simulations. The expulsion trajectories are sampled and constructed on-the-fly during molecular dynamics simulations using the recently developed memetic algorithms [ Rydzewski, J.; Nowak, W. J. Chem. Phys. 2015 , 143 ( 12 ), 124101 ]. We show that the memetic algorithms are effective for enforcing the ligand diffusion and cavity exploration in the P450cam-camphor complex. Furthermore, we demonstrate that machine learning techniques are helpful in inspecting ligand diffusion landscapes and provide useful tools to examine structural changes accompanying rare events.
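
    The topological comparison via Fréchet distances can be illustrated with the standard discrete Fréchet dynamic program of Eiter and Mannila; the 2-D toy paths below are invented for illustration and are not the paper's reduced ligand paths.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines (arrays of points).

    ca[i, j] is the coupling distance between the prefixes P[:i+1] and
    Q[:j+1]; the recurrence is the classic Eiter-Mannila dynamic program.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

# Two 2-D paths that start and end together but bulge apart by 1.0:
path_a = np.array([[0, 0], [1, 0], [2, 0], [3, 0]])
path_b = np.array([[0, 0], [1, 1], [2, 1], [3, 0]])
print(discrete_frechet(path_a, path_b))   # 1.0
```

    A pairwise matrix of such distances between dissociation paths is exactly the kind of similarity input a downstream classifier can consume.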

  19. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Reduction technique for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1995-01-01

    A reduction technique and a computational procedure are presented for predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of the reduction technique, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface.

  1. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
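
    The DEIM ingredient of POD-DEIM has a compact standard form: greedily select interpolation indices from a POD basis of nonlinear-term snapshots, so the nonlinear term only ever needs evaluating at those few points. A sketch under toy assumptions (the snapshot family and 1-D grid are invented; they stand in for the solver's nonlinear terms):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection (Chaturantabut & Sorensen).

    U: (n, m) basis of nonlinear-term snapshots (e.g. POD modes).
    Returns m grid indices at which the nonlinear term is sampled.
    """
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the next basis vector at the chosen points, then
        # place the new point where the interpolation residual peaks.
        c = np.linalg.solve(U[idx, :l], U[idx, l])
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Invented snapshot family on a 1-D grid of 200 points.
x = np.linspace(0, 1, 200)
snaps = np.array([np.exp(-mu * x) * np.sin(5 * x)
                  for mu in np.linspace(1, 5, 30)]).T
U = np.linalg.svd(snaps, full_matrices=False)[0][:, :5]   # 5 POD modes
p = deim_indices(U)

# Reconstruct a snapshot from its values at only the 5 DEIM points.
f = snaps[:, 0]
approx = U @ np.linalg.solve(U[p], f[p])
print(sorted(p.tolist()), float(np.max(np.abs(approx - f))))
```

    The paper's nested-DEIM extension chains several such approximations with multiple expansion bases; the greedy index selection above is the common building block.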

  2. Thermally induced rarefied gas flow in a three-dimensional enclosure with square cross-section

    NASA Astrophysics Data System (ADS)

    Zhu, Lianhua; Yang, Xiaofan; Guo, Zhaoli

    2017-12-01

    Rarefied gas flow in a three-dimensional enclosure induced by nonuniform temperature distribution is numerically investigated. The enclosure has a square channel-like geometry with alternatively heated closed ends and lateral walls with a linear temperature distribution. A recently proposed implicit discrete velocity method with a memory reduction technique is used to numerically simulate the problem based on the nonlinear Shakhov kinetic equation. The Knudsen number dependencies of the vortices pattern, slip velocity at the planar walls and edges, and heat transfer are investigated. The influences of the temperature ratio imposed at the ends of the enclosure and the geometric aspect ratio are also evaluated. The overall flow pattern shows similarities with those observed in two-dimensional configurations in literature. However, features due to the three-dimensionality are observed with vortices that are not identified in previous studies on similar two-dimensional enclosures at high Knudsen and small aspect ratios.

  3. Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca

    2017-11-01

    Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
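
    The Buckingham Pi machinery underlying this approach can be made concrete: dimensionless groups correspond to null-space vectors of the dimension matrix. A sketch for the pipe-flow example (the variable list and base dimensions are the textbook ones, not the paper's restated theorem):

```python
import numpy as np

# Dimension matrix for turbulent pipe flow.  Columns are the variables
# (pressure gradient dp/dx, density rho, velocity v, diameter D,
# viscosity mu); rows are exponents of the base dimensions M, L, T:
# dp/dx ~ M L^-2 T^-2, rho ~ M L^-3, v ~ L T^-1, D ~ L, mu ~ M L^-1 T^-1.
A = np.array([
    [ 1,  1,  0, 0,  1],   # M
    [-2, -3,  1, 1, -1],   # L
    [-2,  0, -1, 0, -1],   # T
], dtype=float)

# Buckingham Pi: every dimensionless product is a null-space vector of A.
# With 5 variables and rank 3 there are 5 - 3 = 2 independent Pi groups
# (e.g. a friction factor and the Reynolds number).
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-10)
null_basis = Vt[rank:]          # each row: exponents of one Pi group
print(A.shape[1] - rank)        # 2

# Check: the Reynolds number rho^1 v^1 D^1 mu^-1 is dimensionless.
re_exps = np.array([0, 1, 1, 1, -1])
print(A @ re_exps)              # [0. 0. 0.]
```

    A hidden parameter shows up in this picture as a variable missing from the columns of A, which is what makes the resulting "dimensionless law" fail to collapse the data.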

  4. A solution to the Navier-Stokes equations based upon the Newton Kantorovich method

    NASA Technical Reports Server (NTRS)

    Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.

    1977-01-01

    An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order-time accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. Results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicates a potential for significant reduction in computation time over current iterative techniques.
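
    The Newton-Kantorovich idea, linearize the nonlinear operator about the current iterate and solve a linear problem for the correction, can be shown on a far simpler problem than the Navier-Stokes equations. The 1-D boundary-value problem below is an invented illustration, not the paper's scheme:

```python
import numpy as np

# Solve u''(x) = exp(u(x)), u(0) = u(1) = 0 by finite differences with
# Newton-Kantorovich linearization: at each step, solve a *linear*
# tridiagonal system for the correction delta.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)                               # initial guess

for iteration in range(20):
    # Residual F(u) at the interior points.
    res = (u[:-2] - 2*u[1:-1] + u[2:]) / h**2 - np.exp(u[1:-1])
    # Jacobian of F: tridiagonal Laplacian minus diag(exp(u)).
    main = -2.0/h**2 - np.exp(u[1:-1])
    off = np.full(n - 3, 1.0/h**2)
    J = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    delta = np.linalg.solve(J, -res)
    u[1:-1] += delta
    if np.max(np.abs(delta)) < 1e-12:
        break

print(iteration, float(u.min()))   # converges in a handful of iterations
```

    The quadratic convergence visible here (the correction norm collapses after a few iterations) is the property that made the scheme in the paper noniterative in the nonlinear terms after the earliest time steps.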

  5. Specific surface to evaluate the efficiencies of milling and pretreatment of wood for enzymatic saccharification

    Treesearch

    Junyong Zhu; G.S. Wang; X.J. Pan; Roland Gleisner

    2009-01-01

    Sieving methods have been almost exclusively used for feedstock size-reduction characterization in the biomass refining literature. This study demonstrates a methodology to properly characterize specific surface of biomass substrates through two dimensional measurement of each fiber of the substrate using a wet imaging technique. The methodology provides more...

  6. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. 
One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.

  7. Partial Discharge Spectral Characterization in HF, VHF and UHF Bands Using Particle Swarm Optimization.

    PubMed

    Robles, Guillermo; Fresno, José Manuel; Martínez-Tarifa, Juan Manuel; Ardila-Rey, Jorge Alfredo; Parrado-Hernández, Emilio

    2018-03-01

    The measurement of partial discharge (PD) signals in the radio frequency (RF) range has gained popularity among utilities and specialized monitoring companies in recent years. Unfortunately, on most occasions the data are hidden by noise and coupled interference that hinder their interpretation and render them useless, especially in acquisition systems in the ultra high frequency (UHF) band where the signals of interest are weak. This paper is focused on a method that uses a selective spectral signal characterization to feature each signal, type of partial discharge or interferences/noise, with the power contained in the most representative frequency bands. The technique can be considered as a dimensionality reduction problem where all the energy information contained in the frequency components is condensed in a reduced number of UHF or high frequency (HF) and very high frequency (VHF) bands. In general, dimensionality reduction methods make the interpretation of results a difficult task because the inherent physical nature of the signal is lost in the process. The proposed selective spectral characterization is a preprocessing tool that facilitates further main processing. The starting point is a clustering of signals that could form the core of a PD monitoring system. Therefore, the dimensionality reduction technique should discover the best frequency bands to enhance the affinity between signals in the same cluster and the differences between signals in different clusters. This is done by maximizing the minimum Mahalanobis distance between clusters using particle swarm optimization (PSO). The tool is tested with three sets of experimental signals to demonstrate its capabilities in separating noise and PDs with low signal-to-noise ratio and separating different types of partial discharges measured in the UHF and HF/VHF bands.
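
    The optimization step, choosing band edges that maximize the Mahalanobis separation between signal clusters, can be sketched in a toy one-parameter version. Everything below is an illustrative assumption: two synthetic pulse classes at different normalized carrier frequencies and a single split frequency tuned by a minimal particle swarm, whereas the paper optimizes several HF/VHF/UHF band edges over measured signals.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic pulse classes with energy at 0.15 vs 0.40 cycles/sample.
def make_pulse(f0):
    t = np.arange(256)
    return np.sin(2*np.pi*f0*t) * np.exp(-t/64) + 0.1*rng.standard_normal(256)

class_a = [make_pulse(0.15) for _ in range(40)]   # "PD-like"
class_b = [make_pulse(0.40) for _ in range(40)]   # "interference-like"

def band_powers(sig, split):
    """Signal power below and above the split frequency (cycles/sample)."""
    spec = np.abs(np.fft.rfft(sig))**2
    freqs = np.fft.rfftfreq(len(sig))             # 0 .. 0.5
    lo = spec[freqs < split].sum()
    return np.array([lo, spec.sum() - lo])

def fitness(split):
    """Mahalanobis distance between the two clusters of band powers."""
    A = np.array([band_powers(s, split) for s in class_a])
    B = np.array([band_powers(s, split) for s in class_b])
    pooled = (np.cov(A.T) + np.cov(B.T)) / 2
    diff = A.mean(0) - B.mean(0)
    return float(diff @ np.linalg.solve(pooled, diff))

# Minimal particle swarm over the split frequency.
pos = rng.uniform(0.05, 0.45, 12)
vel = np.zeros(12)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(30):
    gbest = pbest[pbest_f.argmax()]
    vel = (0.7*vel + 1.5*rng.random(12)*(pbest - pos)
                   + 1.5*rng.random(12)*(gbest - pos))
    pos = np.clip(pos + vel, 0.05, 0.45)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_f
    pbest[better], pbest_f[better] = pos[better], fit[better]

best_split = pbest[pbest_f.argmax()]
print(round(float(best_split), 3))   # lands between the two carriers
```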

  8. Aerodynamic investigations into various low speed L/D improvement devices on the 140A/B space shuttle orbiter configuration in the Rockwell International low speed wind tunnel (OA86)

    NASA Technical Reports Server (NTRS)

    Mennell, R. C.

    1974-01-01

    Tests were conducted to investigate various base drag reduction techniques in an attempt to improve Orbiter lift-to-drag ratios and to calculate sting interference effects on the Orbiter aerodynamic characteristics. Test conditions and facilites, and model dimensional data are presented along with the data reduction guidelines and data set/run number collation used for the studies. Aerodynamic force and moment data and the results of stability and control tests are also given.

  9. SPHARA--a generalized spatial Fourier analysis for multi-sensor systems with non-uniformly arranged sensors: application to EEG.

    PubMed

    Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens

    2015-01-01

    Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
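
    The eigenanalysis at the heart of SPHARA can be sketched with the simplest of the discretizations the paper compares, a combinatorial graph Laplacian on the sensor mesh (the paper's preferred FEM discretization and the five-node "cap" below are not reproduced; the layout is an invented toy):

```python
import numpy as np

def laplacian_from_triangles(n_nodes, triangles):
    """Combinatorial graph Laplacian of a triangulated sensor layout."""
    L = np.zeros((n_nodes, n_nodes))
    for a, b, c in triangles:
        for i, j in ((a, b), (b, c), (a, c)):
            L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))       # node degree on the diagonal
    return L

# Toy "electrode cap": centre node 0 surrounded by a fan of 4 triangles.
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
L = laplacian_from_triangles(5, tris)
eigval, basis = np.linalg.eigh(L)             # columns: spatial harmonics
# eigval[0] is ~0 with a constant eigenvector; higher harmonics oscillate.

# Dimensionality reduction: keep only the first k harmonic coefficients.
sample = np.array([1.0, 0.9, 1.1, 0.95, 1.05])   # one time sample, 5 channels
k = 2
coeffs = basis[:, :k].T @ sample
recon = basis[:, :k] @ coeffs
print(np.round(recon, 2))
```

    Truncating the harmonic expansion is the spatial analogue of low-pass filtering, which is why the same basis serves both dimensionality reduction and noise suppression.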

  10. Bending of solitons in weak and slowly varying inhomogeneous plasma

    NASA Astrophysics Data System (ADS)

    Mukherjee, Abhik; Janaki, M. S.; Kundu, Anjan

    2015-12-01

    The bending of solitons in two dimensional plane is presented in the presence of weak and slowly varying inhomogeneous ion density for the propagation of ion acoustic soliton in unmagnetized cold plasma with isothermal electrons. Using reductive perturbation technique, a modified Kadomtsev-Petviashvili equation is obtained with a chosen unperturbed ion density profile. The exact solution of the equation shows that the phase of the solitary wave gets modified by a function related to the unperturbed inhomogeneous ion density causing the soliton to bend in the two dimensional plane, while the amplitude of the soliton remains constant.

  11. Bending of solitons in weak and slowly varying inhomogeneous plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Abhik, E-mail: abhik.mukherjee@saha.ac.in; Janaki, M. S., E-mail: ms.janaki@saha.ac.in; Kundu, Anjan, E-mail: anjan.kundu@saha.ac.in

    2015-12-15

    The bending of solitons in two dimensional plane is presented in the presence of weak and slowly varying inhomogeneous ion density for the propagation of ion acoustic soliton in unmagnetized cold plasma with isothermal electrons. Using reductive perturbation technique, a modified Kadomtsev-Petviashvili equation is obtained with a chosen unperturbed ion density profile. The exact solution of the equation shows that the phase of the solitary wave gets modified by a function related to the unperturbed inhomogeneous ion density causing the soliton to bend in the two dimensional plane, while the amplitude of the soliton remains constant.

  12. Surface defects and chiral algebras

    NASA Astrophysics Data System (ADS)

    Córdova, Clay; Gaiotto, Davide; Shao, Shu-Heng

    2017-05-01

    We investigate superconformal surface defects in four-dimensional N=2 superconformal theories. Each such defect gives rise to a module of the associated chiral algebra and the surface defect Schur index is the character of this module. Various natural chiral algebra operations such as Drinfeld-Sokolov reduction and spectral flow can be interpreted as constructions involving four-dimensional surface defects. We compute the index of these defects in the free hypermultiplet theory and Argyres-Douglas theories, using both infrared techniques involving BPS states, as well as renormalization group flows onto Higgs branches. In each case we find perfect agreement with the predicted characters.

  13. The Emerging Role of 3-Dimensional Printing in Rhinology.

    PubMed

    Stokken, Janalee K; Pallanch, John F

    2017-06-01

    Nasal septal perforations, particularly those that are large and irregular in shape, often present as challenging surgical dilemmas. New technology has allowed us to develop techniques using computed tomography imaging and 3-dimensional (3D) printers to design custom polymeric silicone septal buttons. These buttons offer patients an option that avoids a surgical intervention when standard buttons do not fit well or are not tolerated. Preliminary data suggest that buttons designed by 3D printer technology provide more comfort than standard commercially available or hand-carved buttons with equivalent reduction of symptoms. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
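
    The discrete Karhunen-Loeve expansion evaluated above reduces to an eigendecomposition of the band covariance matrix. A sketch on synthetic six-band data (the latent-factor model is an assumption standing in for TM imagery, chosen so that three components capture nearly all the variance, mirroring the finding in the abstract):

```python
import numpy as np

# Synthetic six-band data: noisy mixtures of three latent factors.
rng = np.random.default_rng(5)
latent = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 6))
bands = latent @ mixing + 0.05 * rng.standard_normal((1000, 6))

# Discrete KL transform: eigendecomposition of the band covariance,
# components ordered by decreasing eigenvalue.
cov = np.cov(bands, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

cumulative = np.cumsum(eigval) / eigval.sum()
print(np.round(cumulative, 4))      # third entry should be close to 1

features = (bands - bands.mean(0)) @ eigvec[:, :3]   # 3-D feature space
```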

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sang Beom; Dsilva, Carmeline J.; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu

    Understanding the mechanisms by which proteins fold from disordered amino-acid chains to spatially ordered structures remains an area of active inquiry. Molecular simulations can provide atomistic details of the folding dynamics which complement experimental findings. Conventional order parameters, such as root-mean-square deviation and radius of gyration, provide structural information but fail to capture the underlying dynamics of the protein folding process. It is therefore advantageous to adopt a method that can systematically analyze simulation data to extract relevant structural as well as dynamical information. The nonlinear dimensionality reduction technique known as diffusion maps automatically embeds the high-dimensional folding trajectories in a lower-dimensional space from which one can more easily visualize folding pathways, assuming the data lie approximately on a lower-dimensional manifold. The eigenvectors that parametrize the low-dimensional space, furthermore, are determined systematically, rather than chosen heuristically, as is done with phenomenological order parameters. We demonstrate that diffusion maps can effectively characterize the folding process of a Trp-cage miniprotein. By embedding molecular dynamics simulation trajectories of Trp-cage folding in diffusion maps space, we identify two folding pathways and intermediate structures that are consistent with the previous studies, demonstrating that this technique can be employed as an effective way of analyzing and constructing protein folding pathways from molecular simulations.
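
    A minimal diffusion-maps sketch, using a synthetic noisy helix in place of real folding trajectories (all data here are assumptions for illustration): a Gaussian kernel is row-normalized into a Markov matrix whose leading nontrivial eigenvectors give the embedding.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for simulation snapshots: a noisy helix in 10 dimensions,
# i.e. data lying near a one-dimensional manifold.
t = np.sort(rng.uniform(0.0, 3.0 * np.pi, 300))
X = np.zeros((300, 10))
X[:, 0], X[:, 1], X[:, 2] = np.cos(t), np.sin(t), t / np.pi
X += 0.01 * rng.normal(size=X.shape)

# Gaussian kernel on pairwise squared distances.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
eps = np.median(d2) / 4.0
K = np.exp(-d2 / eps)

# Row-normalize to a Markov matrix; its leading nontrivial eigenvectors
# are the diffusion-map coordinates.
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
embedding = eigvecs[:, 1:3]       # skip the trivial constant eigenvector
print(embedding.shape)
```

    The top eigenvalue is 1 (the constant eigenvector); the next eigenvectors parametrize progress along the underlying curve, which is the role the diffusion coordinates play for folding trajectories in the study.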

  16. Locating landmarks on high-dimensional free energy surfaces

    PubMed Central

    Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.

    2015-01-01

    Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545

  17. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, as each basis function is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insights into the main process dynamics.
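
    The SPCA idea can be sketched with a truncated power iteration that soft-thresholds the loadings. This is a simplified stand-in for the method in the abstract (not the authors' algorithm), run on synthetic block-structured data where a sparse loading vector should single out the dominant block of variables:

```python
import numpy as np

def leading_sparse_pc(X, l1=0.5, iters=200):
    """Leading sparse principal component via truncated power iteration:
    alternate a covariance multiply with soft-thresholding of the loadings."""
    C = np.cov(X, rowvar=False)
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(iters):
        w = C @ v
        w = np.sign(w) * np.maximum(np.abs(w) - l1, 0.0)   # soft threshold
        norm = np.linalg.norm(w)
        if norm == 0.0:
            break
        v = w / norm
    return v

rng = np.random.default_rng(2)
# Two independent latent processes, each driving a disjoint block of variables.
z = rng.normal(size=(500, 2))
X = np.zeros((500, 8))
X[:, :4] = z[:, [0]] * np.array([3.0, 2.5, 2.0, 1.5])
X[:, 4:] = z[:, [1]] * np.array([1.0, 0.8, 0.6, 0.4])
X += 0.1 * rng.normal(size=X.shape)

v = leading_sparse_pc(X)
print(np.round(v, 2))   # loadings concentrate on the first four variables
```

    Unlike a dense POD/PCA basis, the thresholded loadings zero out the weak block, which is exactly the interpretability gain the abstract describes.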

  18. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections, or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
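
    The BoW quantization step can be sketched roughly as follows, with synthetic random-walk series standing in for patient records and a tiny k-means codebook (all names and sizes here are illustrative assumptions, not the paper's setup): each sliding window is assigned to its nearest codeword, and the record becomes a fixed-length codeword histogram.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Tiny k-means to learn a codebook of window shapes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bow_histogram(series, codebook, win=5):
    """Assign each sliding window to its nearest codeword and return the
    normalized codeword counts (the BoW feature vector)."""
    windows = np.array([series[i:i + win] for i in range(len(series) - win + 1)])
    labels = ((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
    return np.bincount(labels, minlength=len(codebook)) / len(labels)

rng = np.random.default_rng(3)
# Toy "patient records": 20 univariate lab-value series of length 60
# (random walks standing in for real time series).
records = [np.cumsum(rng.normal(size=60)) for _ in range(20)]

all_windows = np.vstack([[r[i:i + 5] for i in range(56)] for r in records])
codebook = kmeans(all_windows, k=8)
features = np.array([bow_histogram(r, codebook) for r in records])
print(features.shape)   # one fixed-length BoW histogram per record
```

    These fixed-length histograms are what a supervised dimensionality reduction step or a multi-label classifier would then consume.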

  19. Weighted Distance Functions Improve Analysis of High-Dimensional Data: Application to Molecular Dynamics Simulations.

    PubMed

    Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas

    2015-11-10

    Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.

  20. Hyperspectral face recognition with spatiospectral information fusion and PLS regression.

    PubMed

    Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal

    2015-03-01

    Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but also limited to ad hoc dimensionality reduction techniques and lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least squares regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques on hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near-infrared response spectrum.

  1. Clustering cancer gene expression data by projective clustering ensemble

    PubMed Central

    Yu, Xianxue; Yu, Guoxian

    2017-01-01

    Gene expression data analysis has paramount implications for gene treatments, cancer diagnosis and other domains. Clustering is an important and promising tool to analyze gene expression data. Gene expression data are often characterized by a large number of genes but limited samples, so various projective clustering techniques and ensemble techniques have been suggested to combat these challenges. However, it is rather challenging to synergize these two kinds of techniques to avoid the curse of dimensionality and to boost the performance of gene expression data clustering. In this paper, we employ a projective clustering ensemble (PCE) to integrate the advantages of projective clustering and ensemble clustering, and to avoid the dilemma of combining multiple projective clusterings. Our experimental results on publicly available cancer gene expression data show that PCE improves the quality of clustering gene expression data by at least 4.5% on average compared with other related techniques, including dimensionality reduction based single clustering and ensemble approaches. The empirical study demonstrates that, to further boost the performance of clustering cancer gene expression data, it is necessary and promising to synergize projective clustering with ensemble clustering. PCE can serve as an effective alternative technique for clustering gene expression data. PMID:28234920

  2. Modeling and control of flexible structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Mingori, D. L.

    1988-01-01

    This monograph presents integrated modeling and controller design methods for flexible structures. The controllers, or compensators, developed are optimal in the linear-quadratic-Gaussian sense. The performance objectives, sensor and actuator locations and external disturbances influence both the construction of the model and the design of the finite dimensional compensator. The modeling and controller design procedures are carried out in parallel to ensure compatibility of these two aspects of the design problem. Model reduction techniques are introduced to keep both the model order and the controller order as small as possible. A linear distributed, or infinite dimensional, model is the theoretical basis for most of the text, but finite dimensional models arising from both lumped-mass and finite element approximations also play an important role. A central purpose of the approach here is to approximate an optimal infinite dimensional controller with an implementable finite dimensional compensator. Both convergence theory and numerical approximation methods are given. Simple examples are used to illustrate the theory.

  3. Integrand Reduction Reloaded: Algebraic Geometry and Finite Fields

    NASA Astrophysics Data System (ADS)

    Sameshima, Ray D.; Ferroglia, Andrea; Ossola, Giovanni

    2017-01-01

    The evaluation of scattering amplitudes in quantum field theory allows us to compare the phenomenological prediction of particle theory with the measurement at collider experiments. The study of scattering amplitudes, in terms of their symmetries and analytic properties, provides a theoretical framework to develop techniques and efficient algorithms for the evaluation of physical cross sections and differential distributions. Tree-level calculations have been known for a long time. Loop amplitudes, which are needed to reduce the theoretical uncertainty, are more challenging since they involve a large number of Feynman diagrams, expressed as integrals of rational functions. At one-loop, the problem has been solved thanks to the combined effect of integrand reduction, such as the OPP method, and unitarity. However, plenty of work is still needed at higher orders, starting with the two-loop case. Recently, integrand reduction has been revisited using algebraic geometry. In this presentation, we review the salient features of integrand reduction for dimensionally regulated Feynman integrals, and describe an interesting technique for their reduction based on multivariate polynomial division. We also show a novel approach to improve its efficiency by introducing finite fields. Supported in part by the National Science Foundation under Grant PHY-1417354.

  4. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    PubMed

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  5. [New techniques in the operative treatment of calcaneal fractures].

    PubMed

    Rammelt, S; Amlang, M; Sands, A K; Swords, M

    2016-03-01

    The ideal treatment of displaced intra-articular calcaneal fractures is still a matter of controversy. Because of the variable fracture patterns and the vulnerable soft tissue coverage, an individual treatment concept is advisable. In order to minimize the wound edge necrosis associated with extended lateral approaches, selected fractures may be treated percutaneously or in a less invasive manner while controlling joint reduction via a sinus tarsi approach. Fixation in these cases is achieved with screws, intramedullary locking nails or modified plates that are slid in subcutaneously. A thorough knowledge of the three-dimensional calcaneal anatomy and of open reduction maneuvers is a prerequisite for good results with less invasive techniques. Early functional follow-up treatment aims at early rehabilitation independent of the kind of fixation. Peripheral fractures of the talus and calcaneus frequently result from subluxation and dislocation at the subtalar and Chopart joints. They are still regularly overlooked and result in painful arthritis if left untreated. If an exact anatomical reduction of these intra-articular fractures is impossible, resection of small fragments is indicated.

  6. Numerical study of shock-induced combustion in methane-air mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Rabinowitz, Martin J.

    1993-01-01

    The shock-induced combustion of methane-air mixtures in hypersonic flows is investigated using a new reaction mechanism consisting of 19 reacting species and 52 elementary reactions. This reduced model is derived from a full kinetic mechanism via the Detailed Reduction technique. Zero-dimensional computations of several shock-tube experiments are presented first. The reaction mechanism is then combined with a fully implicit Navier-Stokes computational fluid dynamics (CFD) code to conduct numerical simulations of two-dimensional and axisymmetric shock-induced combustion experiments of stoichiometric methane-air mixtures at a Mach number of M = 6.61. Applications to the ram accelerator concept are also presented.

  7. Coarse analysis of collective behaviors: Bifurcation analysis of the optimal velocity model for traffic jam formation

    NASA Astrophysics Data System (ADS)

    Miura, Yasunari; Sugiyama, Yuki

    2017-12-01

    We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, a dimensionality-reduction technique, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of a traffic model, the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.

  8. Surface defects and chiral algebras

    DOE PAGES

    Córdova, Clay; Gaiotto, Davide; Shao, Shu-Heng

    2017-05-26

    Here, we investigate superconformal surface defects in four-dimensional N = 2 superconformal theories. Each such defect gives rise to a module of the associated chiral algebra, and the surface defect Schur index is the character of this module. Various natural chiral algebra operations, such as Drinfeld-Sokolov reduction and spectral flow, can be interpreted as constructions involving four-dimensional surface defects. We compute the index of these defects in the free hypermultiplet theory and Argyres-Douglas theories, using both infrared techniques involving BPS states, as well as renormalization group flows onto Higgs branches. We find perfect agreement with the predicted characters in each case.

  10. SPHARA - A Generalized Spatial Fourier Analysis for Multi-Sensor Systems with Non-Uniformly Arranged Sensors: Application to EEG

    PubMed Central

    Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens

    2015-01-01

    Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
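
    A rough sketch of the underlying idea, substituting a simple combinatorial graph Laplacian for the FEM Laplace-Beltrami discretizations used in the paper (the sensor positions and signal below are synthetic assumptions): the Laplacian eigenvectors, ordered by eigenvalue, generalize spatial Fourier modes on an irregular sensor layout, and projecting a signal onto the first few modes performs the dimensionality reduction.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical layout: 40 irregularly placed sensors; connect pairs closer
# than a radius to form a neighborhood graph.
pos = rng.uniform(size=(40, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
A = ((dist < 0.35) & (dist > 0)).astype(float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# Eigenvectors of L play the role of spatial harmonics: small eigenvalues
# correspond to spatially smooth basis functions.
eigvals, basis = np.linalg.eigh(L)

# Dimensionality reduction: project a sensor signal onto the first m harmonics.
signal = np.sin(2.0 * np.pi * pos[:, 0]) + 0.1 * rng.normal(size=40)
m = 10
coeffs = basis[:, :m].T @ signal
smooth = basis[:, :m] @ coeffs           # low-dimensional reconstruction
print(round(float(np.linalg.norm(signal - smooth) / np.linalg.norm(signal)), 3))
```

    Keeping only the low-order harmonics retains the smooth spatial structure while discarding high-frequency components, which is also how the method achieves noise suppression.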

  11. Evaluation of several two-dimensional gel electrophoresis techniques in cardiac proteomics.

    PubMed

    Li, Zhao Bo; Flint, Paul W; Boluyt, Marvin O

    2005-09-01

    Two-dimensional gel electrophoresis (2-DE) is currently the best method for separating complex mixtures of proteins, and its use is gradually becoming more common in cardiac proteome analysis. A number of variations in basic 2-DE have emerged, but their usefulness in analyzing cardiac tissue has not been evaluated. The purpose of the present study was to systematically evaluate the capabilities and limitations of several 2-DE techniques for separating proteins from rat heart tissue. Immobilized pH gradient strips of various pH ranges, parameters of protein loading and staining, subcellular fractionation, and detection of phosphorylated proteins were studied. The results provide guidance for proteome analysis of cardiac and other tissues in terms of selection of the isoelectric point separating window for cardiac proteins, accurate quantitation of cardiac protein abundance, stabilization of technical variation, reduction of sample complexity, enrichment of low-abundant proteins, and detection of phosphorylated proteins.

  12. Incremental online learning in high dimensions.

    PubMed

    Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan

    2005-12-01

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of possibly redundant inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
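
    The partial-least-squares flavor of the projection step can be sketched as a single input-output covariance direction followed by a univariate regression. This is a minimal one-direction illustration on synthetic data, not the LWPR implementation itself:

```python
import numpy as np

rng = np.random.default_rng(5)
# 20 input dimensions, but the target depends on a single direction of input
# space, so one projected univariate regression suffices.
X = rng.normal(size=(400, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=400)

# One partial-least-squares step: project inputs onto the direction of maximal
# input-output covariance, then fit a univariate regression on that score.
u = X.T @ y
u /= np.linalg.norm(u)
score = X @ u                              # 1-D latent variable
beta = (score @ y) / (score @ score)       # univariate least squares
y_hat = beta * score

r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 3))   # most variance recovered from one projected direction
```

    In LWPR this projection-and-regress step is repeated per local model and per direction, with further directions added only while they reduce the residual.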

  13. High dimensional feature reduction via projection pursuit

    NASA Technical Reports Server (NTRS)

    Jimenez, Luis; Landgrebe, David

    1994-01-01

    The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of that technology is the AVIRIS system, which collects image data in 220 bands. As a result, new algorithms must be developed in order to analyze the more complex data effectively. Data in a high-dimensional space present a substantial challenge, since intuitive concepts valid in two- or three-dimensional spaces do not necessarily apply in higher-dimensional spaces. For example, high-dimensional space is mostly empty, a consequence of the concentration of data in the corners of hypercubes. Other examples may be cited. Such observations suggest the need to project data to a subspace of a much lower dimension, on a problem-specific basis, in such a manner that information is not lost. Projection Pursuit is a technique that will accomplish such a goal. Since it processes data in lower dimensions, it should avoid many of the difficulties of high-dimensional spaces. In this paper, we begin the investigation of some of the properties of Projection Pursuit for this purpose.
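
    The "mostly empty" observation can be checked numerically: the fraction of uniform samples falling inside the hypersphere inscribed in the unit hypercube collapses as the dimension grows, leaving nearly all of the volume in the corners.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
fractions = {}
for dim in (2, 5, 10, 20):
    # Uniform samples in the unit hypercube centered at the origin.
    pts = rng.uniform(-0.5, 0.5, size=(n, dim))
    # Fraction of samples inside the inscribed hypersphere of radius 0.5.
    fractions[dim] = float((np.linalg.norm(pts, axis=1) <= 0.5).mean())
    print(dim, fractions[dim])
```

    In two dimensions the fraction is about pi/4; by twenty dimensions essentially no samples land in the central sphere, which is the geometric motivation for projecting to a low-dimensional subspace before analysis.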

  14. Analysis of Information Content in High-Spectral Resolution Sounders using Subset Selection Analysis

    NASA Technical Reports Server (NTRS)

    Velez-Reyes, Miguel; Joiner, Joanna

    1998-01-01

    In this paper, we summarize the results of the sensitivity analysis and data reduction carried out to determine the information content of AIRS and IASI channels. The analysis and data reduction were based on the use of subset selection techniques developed in the linear algebra and statistics communities to study linear dependencies in high-dimensional data sets. We applied the subset selection method to study dependency among channels by studying the dependency among their weighting functions. We also applied the technique to study the information provided by the different levels into which the atmosphere is discretized for retrievals and analysis. Results from the method correlate well with intuition in many respects and point to possible modifications for band selection in sensor design and for the number and location of levels in the analysis process.

  15. Comparison of cryoablation with 3D mapping versus conventional mapping for the treatment of atrioventricular re-entrant tachycardia and right-sided paraseptal accessory pathways.

    PubMed

    Russo, Mario S; Drago, Fabrizio; Silvetti, Massimo S; Righi, Daniela; Di Mambro, Corrado; Placidi, Silvia; Prosperi, Monica; Ciani, Michele; Naso Onofrio, Maria T; Cannatà, Vittorio

    2016-06-01

    Aim: Transcatheter cryoablation is a well-established technique for the treatment of atrioventricular nodal re-entry tachycardia and atrioventricular re-entry tachycardia in children. Fluoroscopy or three-dimensional mapping systems can be used to perform the ablation procedure. The aim of this study was to compare the success rate of cryoablation procedures for the treatment of right septal accessory pathways and atrioventricular nodal re-entry circuits in children using conventional or three-dimensional mapping, and to evaluate whether three-dimensional mapping was associated with a reduced patient radiation dose compared with traditional mapping. In 2013, 81 children underwent transcatheter cryoablation at our institution, using conventional mapping in 41 children - 32 atrioventricular nodal re-entry tachycardia and nine atrioventricular re-entry tachycardia - and three-dimensional mapping in 40 children - 24 atrioventricular nodal re-entry tachycardia and 16 atrioventricular re-entry tachycardia. Using conventional mapping, the overall success rate was 78.1 and 66.7% in patients with atrioventricular nodal re-entry tachycardia or atrioventricular re-entry tachycardia, respectively. Using three-dimensional mapping, the overall success rate was 91.6 and 75%, respectively (p=ns). The use of three-dimensional mapping was associated with a reduction in cumulative air kerma and cumulative air kerma-area product of 76.4 and 67.3%, respectively (p<0.05). The use of three-dimensional mapping compared with the conventional fluoroscopy-guided method for cryoablation of right septal accessory pathways and atrioventricular nodal re-entry circuits in children was thus associated with a significant reduction in patient radiation dose, with no significant difference in success rate.

  16. Seismic Data Analysis through Multi-Class Classification.

    NASA Astrophysics Data System (ADS)

    Anderson, P.; Kappedal, R. D.; Magana-Zook, S. A.

    2017-12-01

    In this research, we conducted twenty experiments of varying time and frequency bands on 5000 seismic signals with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We used a multi-class approach by clustering the data through various techniques. Dimensionality reduction was examined through the use of wavelet transforms with the coiflet mother wavelet and various coefficients to explore possible trade-offs between computational time and accuracy. Three and four classes were generated from the clustering techniques and examined, with the three-class approach producing the most accurate and realistic results.
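
    The wavelet route to dimensionality reduction can be sketched with a Haar transform standing in for the coiflet wavelet used in the study (the trace below is synthetic; the study's actual bands and coefficient choices are not reproduced here): a few decomposition levels concentrate most of the signal energy into a short approximation vector.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar discrete wavelet transform (a simple stand-in
    for the coiflet wavelet mentioned in the abstract)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 1024)
# Toy trace: a low-frequency arrival plus broadband noise.
x = np.sin(2.0 * np.pi * 5.0 * t) + 0.2 * rng.normal(size=t.size)

details = []
approx = x
for _ in range(4):                  # four decomposition levels: 1024 -> 64
    approx, detail = haar_step(approx)
    details.append(detail)

energy_kept = (approx ** 2).sum() / (x ** 2).sum()
print(len(approx), round(float(energy_kept), 3))
```

    The transform is orthonormal, so total energy is conserved across the approximation and detail coefficients; keeping only the 64 approximation coefficients is the dimensionality reduction step that feeds the clustering.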

  17. Wire EDM for Refractory Materials

    NASA Technical Reports Server (NTRS)

    Zellars, G. R.; Harris, F. E.; Lowell, C. E.; Pollman, W. M.; Rys, V. J.; Wills, R. J.

    1982-01-01

    In an attempt to reduce fabrication time and costs, the Wire Electrical Discharge Machining (Wire EDM) method was investigated as a tool for fabricating matched blade roots and disk slots. Eight high-strength nickel-base superalloys were used. The computer-controlled Wire EDM technique provided high-quality surfaces with excellent dimensional tolerances. The Wire EDM method offers potential for substantial reductions in fabrication costs for "hard to machine" alloys and electrically conductive materials in specific high-precision applications.

  18. Marginal semi-supervised sub-manifold projections with informative constraints for dimensionality reduction and recognition.

    PubMed

    Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S

    2012-12-01

    In this work, sub-manifold projections based semi-supervised dimensionality reduction (DR) problem learning from partial constrained data is discussed. Two semi-supervised DR algorithms termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP) are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC guided methods, exploring and selecting the informative constraints is challenging and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms are evaluated by benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Failure analysis of fuel cell electrodes using three-dimensional multi-length scale X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.

    2016-10-01

    X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.

  20. [Application of three-dimensional printing personalized acetabular wing-plate in treatment of complex acetabular fractures via lateral-rectus approach].

    PubMed

    Mai, J G; Gu, C; Lin, X Z; Li, T; Huang, W Q; Wang, H; Tan, X Y; Lin, H; Wang, Y M; Yang, Y Q; Jin, D D; Fan, S C

    2017-03-01

    Objective: To investigate reduction and fixation of complex acetabular fractures using three-dimensional (3D) printing and a personalized acetabular wing-plate via the lateral-rectus approach. Methods: From March to July 2016, 8 patients with complex acetabular fractures were surgically managed with a 3D-printed personalized acetabular wing-plate via the lateral-rectus approach at the Department of Orthopedics, the Third Affiliated Hospital of Southern Medical University. There were 4 male and 4 female patients, with an average age of 57 years (range, 31 to 76 years). According to the Letournel-Judet classification, there were 2 anterior plus posterior hemitransverse fractures and 6 both-column fractures, without posterior wall fracture or contralateral pelvic fracture. The CT data of each acetabular fracture were imported into the computer, and 3D printing was used to produce a model of the fracture after virtual reduction by digital orthopedic techniques. The acetabular wing-plate was then designed and printed in titanium. All fractures were treated via the lateral-rectus approach with the patient in a horizontal position under general anesthesia. The anterior column and quadrilateral-surface fractures were fixed with the 3D-printed personalized acetabular wing-plate, and the posterior column fractures were reduced and fixed with antegrade lag screws under direct vision. Results: All 8 operations were completed successfully. Postoperative X-ray and CT examination showed excellent or good reduction of the anterior and posterior columns, without operative complications. Screw loosening in the pubic bone was found in only 1 patient, a 75-year-old with osteoporosis, at 1 month of follow-up; no treatment was given because the patient had no discomfort. According to the Matta radiological criteria, reduction of the acetabular fracture was rated as excellent in 3 cases, good in 4 cases and fair in 1 case. All patients were followed up for 3 to 6 months, and all achieved bone union. According to the modified Merle d'Aubigné and Postel scoring system, 5 cases were rated excellent, 2 good and 1 fair. Conclusions: Surgical management of complex acetabular fractures via the lateral-rectus approach combined with a 3D-printed personalized acetabular wing-plate can effectively improve reduction quality and fixation, making treatment more accurate, personalized and minimally invasive.

  1. Hypergraph Based Feature Selection Technique for Medical Diagnosis.

    PubMed

    Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar

    2016-11-01

    The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. Data mining and knowledge discovery techniques that extract the information contained in these multidimensional datasets play a significant role in exploiting the full benefit they provide. The presence of a large number of features in high-dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection is commonly used to build robust machine learning models by selecting a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representation, is designed to identify the optimal feature subset, or reduct, for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository prove the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT was validated using the WEKA tool, showing that RSKHT is computationally attractive and flexible over massive datasets.
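The record's RSKHT algorithm is specialized (rough sets plus a hypergraph K-Helly property), but the general goal, keeping a small subset of features that carries the label information, can be illustrated with a much simpler filter-style stand-in. The sketch below ranks features by absolute correlation with a binary label; it is not the paper's method, and the planted-feature data are invented for the example.

```python
import numpy as np

def select_features(X, y, k):
    # Rank features by absolute correlation with the binary label and
    # keep the k strongest: a crude filter-style "reduct", not RSKHT.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 30))
X[:, 3] += 2.0 * y          # plant one informative feature
X[:, 17] -= 1.5 * y         # and another
keep = select_features(X, y, k=2)
```

With 200 samples and effect sizes this large, the two planted columns dominate the correlation ranking and are the ones retained.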

  2. Assessing clutter reduction in parallel coordinates using image processing techniques

    NASA Astrophysics Data System (ADS)

    Alhamaydh, Heba; Alzoubi, Hussein; Almasaeid, Hisham

    2018-01-01

    Information visualization has emerged as an important research field for multidimensional data and correlation analysis in recent years. Parallel coordinates (PCs) are one of the popular techniques for visualizing high-dimensional data. A problem with the PC technique is that it suffers from crowding, a clutter which hides important data and obfuscates information. Earlier research has been conducted to reduce clutter without loss of data content. We introduce the use of image processing techniques as an approach for assessing the performance of clutter reduction techniques in PCs. We use histogram analysis as our first measure, where the mean feature of the color histograms of the possible alternative orderings of coordinates for the PC images is calculated and compared. The second measure is the contrast feature extracted from the texture of PC images based on gray-level co-occurrence matrices. The results show that the best PC image is the one that has the minimal mean value of the color histogram feature and the maximal contrast value of the texture feature. In addition to its simplicity, the proposed assessment method has the advantage of objectively assessing alternative orderings of PC visualization.
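The two image measures named in this record, a histogram mean and a gray-level co-occurrence matrix (GLCM) contrast, are standard and easy to compute directly. The sketch below implements both from scratch on two toy grayscale "images" (a uniform patch and a checkerboard); the quantization level and the horizontal (0, 1) offset are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def histogram_mean(img):
    # Mean intensity, standing in for the paper's color-histogram mean feature.
    return float(img.mean())

def glcm_contrast(img, levels=8):
    # Contrast feature from a gray-level co-occurrence matrix built over
    # horizontal neighbor pairs: sum_ij P(i, j) * (i - j)^2.
    q = (img.astype(int) * levels // 256).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    P /= P.sum()
    i, j = np.indices((levels, levels))
    return float((P * (i - j) ** 2).sum())

flat = np.full((32, 32), 128)                      # uniform: no texture contrast
checker = (np.indices((32, 32)).sum(axis=0) % 2) * 255  # alternating pixels
contrast_flat = glcm_contrast(flat)
contrast_checker = glcm_contrast(checker)
mean_flat = histogram_mean(flat)
```

The uniform image gives zero contrast, while every horizontal pair in the checkerboard spans the full quantized range, giving the maximal contrast of (levels - 1)^2.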

  3. Numerical study of anomalous dynamic scaling behaviour of (1+1)-dimensional Das Sarma-Tamborenea model

    NASA Astrophysics Data System (ADS)

    Xun, Zhi-Peng; Tang, Gang; Han, Kui; Hao, Da-Peng; Xia, Hui; Zhou, Wei; Yang, Xi-Quan; Wen, Rong-Ji; Chen, Yu-Ling

    2010-07-01

    In order to discuss the finite-size effect and the anomalous dynamic scaling behaviour of the Das Sarma-Tamborenea growth model, the (1+1)-dimensional Das Sarma-Tamborenea model is simulated on large length scales using the kinetic Monte Carlo method. In the simulation, a noise reduction technique is used to eliminate the crossover effect. Our results show that, due to the finite-size effect, the effective global roughness exponent of the (1+1)-dimensional Das Sarma-Tamborenea model systematically decreases with increasing system size L when L > 256. This finding confirms the conjecture by Aarao Reis [Aarao Reis F D A 2004 Phys. Rev. E 70 031607]. In addition, our simulation results also show that the Das Sarma-Tamborenea model in 1+1 dimensions indeed exhibits intrinsic anomalous scaling behaviour.

  4. Spillover, nonlinearity, and flexible structures

    NASA Technical Reports Server (NTRS)

    Bass, Robert W.; Zes, Dean

    1991-01-01

    Many systems whose evolution in time is governed by Partial Differential Equations (PDEs) are linearized around a known equilibrium before Computer Aided Control Engineering (CACE) is considered. In this case, there are infinitely many independent vibrational modes, and it is intuitively evident on physical grounds that infinitely many actuators would be needed in order to control all modes. A more precise, general formulation of this grave difficulty (spillover problem) is due to A.V. Balakrishnan. A possible route to circumvention of this difficulty lies in leaving the PDE in its original nonlinear form, and adding the essentially finite dimensional control action prior to linearization. One possibly applicable technique is the Liapunov Schmidt rigorous reduction of singular infinite dimensional implicit function problems to finite dimensional implicit function problems. Omitting details of Banach space rigor, the formalities of this approach are given.

  5. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
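The idea of choosing a kernel from a candidate pool by scoring the resulting embeddings can be sketched with off-the-shelf tools. Below, kernel PCA embeddings under three candidate kernels are ranked by neighborhood trustworthiness; this scoring rule and the swiss-roll data are stand-ins for the selection measures and medical images of the paper, not a reproduction of its method.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import KernelPCA
from sklearn.manifold import trustworthiness

# Synthetic nonlinear manifold data in place of medical images.
X, _ = make_swiss_roll(n_samples=400, random_state=0)

def embedding_score(kernel):
    # Score a 2-D kernel PCA embedding by how well it preserves
    # local neighborhoods (trustworthiness in [0, 1]).
    emb = KernelPCA(n_components=2, kernel=kernel).fit_transform(X)
    return trustworthiness(X, emb, n_neighbors=10)

# Pick the best kernel from a small candidate pool.
pool = ["linear", "rbf", "cosine"]
best = max(pool, key=embedding_score)
```

Any embedding-quality measure could be substituted for trustworthiness here; the paper's contribution is precisely in designing such selection measures (and extending them with spatial information).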

  6. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
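A one-dimensional analogue of the spline-based principal manifold described here can be sketched in a few lines: parameterize noisy curve data by a leading principal-component coordinate, then smooth each ambient coordinate with a cubic smoothing spline. This is a simplified stand-in (1-D curve, PCA parameterization, arbitrary smoothing factor) for the paper's 2-D principal manifold with geodesic coordinates.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2.0 * np.pi, 300))
# Noisy samples along a sine curve in the plane.
pts = np.column_stack([t, np.sin(t)]) + 0.05 * rng.standard_normal((300, 2))

# Parameterize points by their leading principal-component coordinate.
centered = pts - pts.mean(axis=0)
u = centered @ np.linalg.svd(centered, full_matrices=False)[2][0]
order = np.argsort(u)

# Smooth each ambient coordinate against the parameter with cubic splines.
sx = UnivariateSpline(u[order], pts[order, 0], s=1.0)
sy = UnivariateSpline(u[order], pts[order, 1], s=1.0)
curve = np.column_stack([sx(u[order]), sy(u[order])])
```

The fitted curve lies closer to the underlying sine than the raw noisy points do, which is the sense in which the principal curve "summarizes" the data.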

  7. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
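The PCEV criterion, maximizing the proportion of outcome variance explained by a covariate, reduces to a generalized eigenvalue problem between the model and residual covariance matrices. The sketch below implements that core computation for a single covariate on simulated data; it omits the paper's high-dimensional strategy and testing procedures, and the simulated effect is invented for the example.

```python
import numpy as np
from scipy.linalg import eigh

def pcev(Y, x):
    # Regress each outcome on the covariate, split covariance into
    # model and residual parts, and return the leading generalized
    # eigenvector: w maximizing (w' Sm w) / (w' Sr w).
    X = np.column_stack([np.ones_like(x), x])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    fitted = X @ B
    R = Y - fitted
    Fc = fitted - fitted.mean(axis=0)
    Sm = Fc.T @ Fc          # variance explained by the covariate
    Sr = R.T @ R            # residual variance
    vals, vecs = eigh(Sm, Sr)
    return vecs[:, -1]      # component with maximal explained-variance ratio

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
Y = rng.standard_normal((500, 5))
Y[:, 2] += 0.8 * x          # only outcome 2 is associated with x
w = pcev(Y, x)
```

Because only the third outcome carries signal, the PCEV loading vector concentrates on that coordinate.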

  8. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
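The mapping-to-two-dimensions step described above can be illustrated with ordinary PCA, whose component loadings also hint at which inputs drive the structure. This is a generic sketch on synthetic data, not the project's actual workflow; the dominant-input setup is invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 300
driver = rng.standard_normal(n)         # one input with much larger variance
X = 0.1 * rng.standard_normal((n, 10))  # nine low-variance inputs
X[:, 0] = driver

# Reduce the 10-dimensional dataset to two plottable coordinates.
pca = PCA(n_components=2).fit(X)
coords = pca.transform(X)

# Loadings on the first component indicate which inputs dominate it.
loadings = np.abs(pca.components_[0])
```

Plotting `coords` gives the 2-D view discussed in the abstract, and inspecting `loadings` identifies input 0 as the variable shaping the first axis.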

  9. Evaluation of Orthopedic Metal Artifact Reduction Application in Three-Dimensional Computed Tomography Reconstruction of Spinal Instrumentation: A Single Saudi Center Experience.

    PubMed

    Ali, Amir Monir

    2018-01-01

    The aim of the study was to evaluate the commercially available orthopedic metal artifact reduction (OMAR) technique in postoperative three-dimensional computed tomography (3DCT) reconstruction studies after spinal instrumentation and to investigate its clinical application. One hundred and twenty (120) patients with spinal metallic implants were included in the study. All had 3DCT reconstruction examinations using the OMAR software after obtaining informed consent and the approval of the institutional ethics committee. The degree of the artifacts, the related muscular density, the clearness of intermuscular fat planes, and the definition of the adjacent vertebrae were qualitatively evaluated. The diagnostic satisfaction and quality of the 3D reconstruction images were thoroughly assessed. The majority (96.7%) of the 3DCT reconstruction images were considered satisfactory to excellent for diagnosis. Only 3.3% of the reconstructed images were of unacceptable diagnostic quality. OMAR can effectively reduce metallic artifacts in patients with spinal instrumentation while yielding highly diagnostic 3DCT reconstruction images.

  10. A Simple and Computationally Efficient Sampling Approach to Covariate Adjustment for Multifactor Dimensionality Reduction Analysis of Epistasis

    PubMed Central

    Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.

    2010-01-01

    Epistasis, or gene-gene interaction, is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
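The core MDR step, pooling two-locus genotype cells into "high risk" and "low risk" by their case:control ratio, is simple to sketch. The example below simulates a pure XOR-style epistatic effect with no marginal effects; the paper's covariate-adjustment sampling is deliberately omitted, and all simulation parameters are invented for the illustration.

```python
import numpy as np

def mdr_high_risk_cells(g1, g2, case):
    # Label each two-locus genotype cell "high risk" when its
    # case:control ratio exceeds the overall case:control ratio
    # (the dimensionality-reduction step of MDR).
    overall = case.mean() / (1.0 - case.mean())
    high = np.zeros((3, 3), dtype=bool)
    for a in range(3):
        for b in range(3):
            m = (g1 == a) & (g2 == b)
            n_case = int(case[m].sum())
            n_ctrl = int(m.sum()) - n_case
            high[a, b] = n_case > overall * n_ctrl
    return high

rng = np.random.default_rng(0)
n = 2000
g1 = rng.integers(0, 3, n)
g2 = rng.integers(0, 3, n)
# Pure epistasis: disease risk depends only on the parity of g1 + g2,
# so neither locus shows a marginal effect on its own.
p = np.where((g1 + g2) % 2 == 1, 0.8, 0.2)
case = (rng.random(n) < p).astype(float)
high = mdr_high_risk_cells(g1, g2, case)
```

The recovered high-risk cells reproduce the planted parity pattern, which is exactly the interaction signal MDR is designed to expose.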

  11. A boundary element approach to optimization of active noise control sources on three-dimensional structures

    NASA Technical Reports Server (NTRS)

    Cunefare, K. A.; Koopmann, G. H.

    1991-01-01

    This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation, and thus the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
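The optimization at the heart of this approach, choosing complex control-source strengths to minimize a quadratic radiated-power measure, can be sketched as a complex least-squares problem. This is a simplified analogue: the paper's power functional comes from a boundary element formulation, whereas here a random transfer matrix and primary-field vector stand in for those quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Complex pressure at 8 field points due to the primary source, and a
# transfer matrix mapping 2 secondary control-source strengths to those
# points (illustrative random values, not a boundary element solution).
p = rng.standard_normal(8) + 1j * rng.standard_normal(8)
Z = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))

# Optimal magnitudes and phases of the control sources minimize the
# residual field energy |p + Z q|^2, a least-squares problem in q.
q, *_ = np.linalg.lstsq(Z, -p, rcond=None)

before = float(np.vdot(p, p).real)               # "power" with no control
after = float(np.linalg.norm(p + Z @ q) ** 2)    # "power" with optimal control
```

Because q = 0 is always feasible, the optimized residual can never exceed the uncontrolled level; the achievable reduction depends on how well the control sources' fields span the primary field.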

  12. Reduction by symmetries in singular quantum-mechanical problems: General scheme and application to Aharonov-Bohm model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smirnov, A. G., E-mail: smirnov@lpi.ru

    2015-12-15

    We develop a general technique for finding self-adjoint extensions of a symmetric operator that respects a given set of its symmetries. Problems of this type naturally arise when considering two- and three-dimensional Schrödinger operators with singular potentials. The approach is based on constructing a unitary transformation diagonalizing the symmetries and reducing the initial operator to the direct integral of a suitable family of partial operators. We prove that symmetry preserving self-adjoint extensions of the initial operator are in a one-to-one correspondence with measurable families of self-adjoint extensions of partial operators obtained by reduction. The general scheme is applied to the three-dimensional Aharonov-Bohm Hamiltonian describing the electron in the magnetic field of an infinitely thin solenoid. We construct all self-adjoint extensions of this Hamiltonian, invariant under translations along the solenoid and rotations around it, and explicitly find their eigenfunction expansions.

  13. Network community-based model reduction for vortical flows

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko

    2018-06-01

    A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
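The community-detection step that groups vortical elements with similar dynamics can be illustrated with standard network tools. The toy graph below has two densely connected groups joined by one weak link, with edge weights standing in for interaction strengths; modularity-based community detection recovers the two groups. The graph is invented for the example and is unrelated to the paper's flow data.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
G = nx.Graph()
# Two dense "vortical communities" of 8 elements each, with strong
# intra-community interaction weights.
for nodes in (range(0, 8), range(8, 16)):
    for i in nodes:
        for j in nodes:
            if i < j:
                G.add_edge(i, j, weight=1.0 + 0.1 * rng.random())
# One weak inter-community interaction.
G.add_edge(0, 8, weight=0.05)

communities = greedy_modularity_communities(G, weight="weight")
```

Each detected community would then be collapsed to its centroid in the reduced-order model, which is the dimension-reduction step the abstract describes.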

  14. Orbital selective directional conductor in the two-orbital Hubbard model

    DOE PAGES

    Mukherjee, Anamitra; Patel, Niravkumar D.; Moreo, Adriana; ...

    2016-02-29

    Using a recently developed many-body technique that allows for the incorporation of thermal effects, the rich phase diagram of a two-dimensional two-orbital (degenerate d xz and d yz) Hubbard model is presented, varying temperature and the repulsion U. The main result is the finding at intermediate U of an antiferromagnetic orbital-selective state where an effective dimensional reduction renders one direction insulating and the other metallic. Possible realizations of this state are discussed. Additionally, we also study nematicity above the Néel temperature. After a careful finite-size scaling analysis, the nematicity temperature window appears to survive in the bulk limit, although it is very narrow.

  15. Synthesis of three-dimensionally ordered macro-/mesoporous Pt with high electrocatalytic activity by a dual-templating approach

    NASA Astrophysics Data System (ADS)

    Zhang, Chengwei; Yang, Hui; Sun, Tingting; Shan, Nannan; Chen, Jianfeng; Xu, Lianbin; Yan, Yushan

    2014-01-01

    Three dimensionally ordered macro-/mesoporous (3DOM/m) Pt catalysts are fabricated by chemical reduction employing a dual-templating synthesis approach combining both colloidal crystal (opal) templating (hard-templating) and lyotropic liquid crystal templating (soft-templating) techniques. The macropore walls of the prepared 3DOM/m Pt exhibit a uniform mesoporous structure composed of polycrystalline Pt nanoparticles. Both the size of the mesopores and Pt nanocrystallites are in the range of 3-5 nm. The 3DOM/m Pt catalyst shows a larger electrochemically active surface area (ECSA), and higher catalytic activity as well as better poisoning tolerance for methanol oxidation reaction (MOR) than the commercial Pt black catalyst.

  16. Content Abstract Classification Using Naive Bayes

    NASA Astrophysics Data System (ADS)

    Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin

    2018-03-01

    This study aims to classify abstracts based on the most frequently used words in the abstract content of English-language journals. The research uses text mining technology, which extracts text data to find information in a set of documents. 120 abstracts were downloaded from www.computer.org. The data are grouped into three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system uses the naive Bayes algorithm to classify the abstracts, with a feature selection process using term weighting to assign a weight to each word. A dimensionality reduction step removes words that rarely appear in each document, with reduction parameters tested from 10% to 90% of the 5,344 words. The performance of the classification system is evaluated using a confusion matrix on the training and test data. The results show that the best classification was obtained with 75% of the data used for training and 25% for testing. Accuracy rates for the DM, ITS and MM categories were 100%, 100% and 86%, respectively, with a dimension reduction parameter of 30% and a learning rate between 0.1 and 0.5.
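The described pipeline, term counting, pruning of rare words as dimension reduction, and naive Bayes classification, maps directly onto standard library components. The six toy "abstracts" below are invented stand-ins for the study's 120 documents; raising `min_df` would drop rare terms, playing the role of the reduction parameter.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented corpus in the study's three categories.
docs = [
    "mining frequent patterns in large data sets",
    "clustering and classification of data records",
    "traffic flow prediction for intelligent transport",
    "vehicle routing in intelligent transport systems",
    "video and audio streaming multimedia codecs",
    "image compression for multimedia delivery",
]
labels = ["DM", "DM", "ITS", "ITS", "MM", "MM"]

# min_df prunes words appearing in few documents: a crude version of
# the study's dimension-reduction step (set to 1 here to keep all terms).
vec = CountVectorizer(min_df=1)
X = vec.fit_transform(docs)

clf = MultinomialNB().fit(X, labels)
pred = clf.predict(vec.transform(["data mining of patterns"]))
```

On this toy corpus the query shares its vocabulary only with the DM documents, so the classifier assigns it to DM.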

  17. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, therefore preserving the peaks and valleys found in typical spectra. Comparing against the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
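The claim that coarse wavelet approximation coefficients preserve the peaks that distinguish spectral signatures can be checked on synthetic spectra. The sketch below uses plain pairwise averaging (the approximation branch of a Haar decomposition) to reduce 256-band spectra to 16 values; the two Gaussian "absorption peaks" and noise level are invented for the example.

```python
import numpy as np

def haar_reduce(spec, levels=4):
    # Repeated pairwise averaging: the lowpass/approximation branch of
    # a Haar-style wavelet decomposition, reducing 256 bands to 16.
    for _ in range(levels):
        spec = (spec[..., 0::2] + spec[..., 1::2]) / 2.0
    return spec

rng = np.random.default_rng(0)
bands = np.linspace(0.0, 1.0, 256)
sigA = np.exp(-((bands - 0.3) ** 2) / 0.002)   # signature with a peak at 0.3
sigB = np.exp(-((bands - 0.7) ** 2) / 0.002)   # signature with a peak at 0.7
A = sigA + 0.02 * rng.standard_normal((50, 256))
B = sigB + 0.02 * rng.standard_normal((50, 256))

Ar, Br = haar_reduce(A), haar_reduce(B)
separation = float(np.linalg.norm(Ar.mean(axis=0) - Br.mean(axis=0)))
```

After a 16-fold reduction the two classes remain well separated, because each peak survives as a large coefficient in a different coarse bin; this is the peak-preserving behavior the abstract attributes to wavelet reduction.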

  18. Marginal hepatectomy in the rat: from anatomy to surgery.

    PubMed

    Madrahimov, Nodir; Dirsch, Olaf; Broelsch, Christoph; Dahmen, Uta

    2006-07-01

    Based on the 3-dimensional visualization of vascular supply and drainage, a vessel-oriented resection technique was optimized. The new surgical technique was used to determine the maximal reduction in liver mass enabling a 50% 1-week survival rate. Determination of the minimal liver mass is necessary in clinical as well as in experimental liver surgery. In rats, survival seems to depend on the surgical technique applied. Extended hepatectomy with removal of 90% of the liver mass was long regarded as a lethal model. Introduction of a vessel-oriented approach enabled long-term survival in this model. The lobar and vascular anatomy of rat livers was visualized by plastination of the whole organ and by corrosion casts of the portal vein, hepatic artery and liver veins. The three-dimensional models were used to extract the underlying anatomic structure. In 90% partial hepatectomy, the liver parenchyma was clamped close to the base of the respective liver lobes (left lateral, median and right liver lobes). Piercing sutures were placed through the liver parenchyma so that the stem of the portal vein and the accompanying hepatic artery, but also the hepatic vein, were included. A 1-week survival rate of 100% was achieved after 90% hepatectomy. Extending the procedure to 95% resection by additional removal of the upper caudate lobe led to a 1-week survival rate of 66%; 97% partial hepatectomy, accomplished by additional resection of the lower caudate lobe, leaving only the paracaval parts of the liver behind, resulted in 100% lethality within 4 days. Using an anatomically based, vessel-oriented, parenchyma-preserving surgical technique in 95% liver resections led to long-term survival. This represents the maximal reduction of liver mass compatible with survival.

  19. Predict subcellular locations of singleplex and multiplex proteins by semi-supervised learning and dimension-reducing general mode of Chou's PseAAC.

    PubMed

    Pacharawongsakda, Eakasit; Theeramunkong, Thanaruk

    2013-12-01

    Predicting protein subcellular location is one of the major challenges in bioinformatics, since such knowledge helps us understand protein functions and enables us to select targeted proteins during the drug discovery process. While many computational techniques have been proposed to improve predictive performance for protein subcellular location, they have several shortcomings. In this work, we propose a method that addresses three main issues in such techniques: (i) manipulation of multiplex proteins, which may exist in or move between multiple cellular compartments, (ii) handling of high dimensionality in the input and output spaces, and (iii) the requirement of sufficient labeled data for model training. Towards these issues, this work presents a new computational method for predicting proteins which have either single or multiple locations. The proposed technique, named iFLAST-CORE, incorporates dimensionality reduction in the feature and label spaces with the co-training paradigm for semi-supervised multi-label classification. For this purpose, the Singular Value Decomposition (SVD) is applied to transform the high-dimensional feature and label spaces into lower-dimensional spaces. After that, owing to the limited labeled data, co-training regression makes use of unlabeled data by predicting the target values in the lower-dimensional spaces of the unlabeled data. In the last step, the components of the SVD are used to project labels in the lower-dimensional space back to those in the original space, and an adaptive threshold is used to map a numeric value to a binary value for label determination. A set of experiments on viral proteins and gram-negative bacterial proteins shows that our proposed method improves classification performance in terms of various evaluation metrics such as Aiming (or Precision), Coverage (or Recall) and macro F-measure, compared to the traditional method that uses only labeled data.
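
    The SVD step described above can be sketched as follows. This is a generic illustration with synthetic data, not the authors' iFLAST-CORE implementation; in particular, the fixed 0.5 cutoff stands in for their adaptive threshold, and a regressor trained on labeled data would normally produce the low-dimensional targets.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = (rng.random((50, 12)) < 0.2).astype(float)   # 50 proteins x 12 binary locations

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 4                                            # reduced label dimension
Z = Y @ Vt[:k].T                                 # project labels down: 12 -> 4

# A co-trained regressor would predict Z for unlabeled data; here we simply
# project back and threshold to recover binary labels.
Y_back = Z @ Vt[:k]                              # lift to the original label space
Y_hat = (Y_back >= 0.5).astype(float)            # fixed threshold (illustrative)
print(Z.shape)   # (50, 4)
```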

  20. The concept and evolution of involved site radiation therapy for lymphoma.

    PubMed

    Specht, Lena; Yahalom, Joachim

    2015-10-01

    We describe the development of radiation therapy for lymphoma from extended field radiotherapy of the past to modern conformal treatment with involved site radiation therapy based on advanced imaging, three-dimensional treatment planning and advanced treatment delivery techniques. Today, radiation therapy is part of the multimodality treatment of lymphoma, and the irradiated tissue volume is much smaller than before, leading to highly significant reductions in the risks of long-term complications.

  1. Electrochemical reduction of hexahydro-1,3,5-trinitro-1,3,5-triazine in aqueous solutions.

    PubMed

    Bonin, Pascale M L; Bejan, Dorin; Schutt, Leah; Hawari, Jalal; Bunce, Nigel J

    2004-03-01

    Electrochemical reduction of RDX, hexahydro-1,3,5-trinitro-1,3,5-triazine, a commercial and military explosive, was examined as a possible remediation technology for treating RDX-contaminated groundwater. A cascade of divided flow-through cells was used, with reticulated vitreous carbon cathodes and IrO2/Ti dimensionally stable anodes, initially using acetonitrile/water solutions to increase the solubility of RDX. The major degradation pathway involved reduction of RDX to the corresponding mononitroso compound, followed by ring cleavage to yield formaldehyde and methylenedinitramine. The reaction intermediates underwent further reduction and/or hydrolysis, the net result being the complete transformation of RDX to small molecules. The rate of degradation increased with current density, but the current efficiency was highest at low current densities. The technique was extended successfully both to 100% aqueous solutions of RDX and to an undivided electrochemical cell.

  2. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. 
MATLAB code implementing the proposed reduction method is available on GitHub.
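
    A toy version of the reduction step, with random data standing in for a real dirty image and uniform placeholder weights instead of the singular-value weighting (i.e., not the authors' MATLAB code), might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
dirty_image = rng.standard_normal((n, n))   # stand-in for the dirty image

F = np.fft.fft2(dirty_image) / n            # 2-D discrete Fourier transform
mask = rng.random((n, n)) < 0.25            # keep ~25% of the Fourier modes
weights = np.ones(mask.sum())               # placeholder for singular-value weights
reduced = weights * F[mask]                 # reduced data vector, below image size

print(reduced.size < dirty_image.size)      # True
```

    The point of the construction is that the full measurement operator only ever acts through an FFT plus a subsampling mask, which keeps each iteration of a convex reconstruction algorithm cheap.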

  3. Effects of septal pacing on P wave characteristics: the value of three-dimensional echocardiography.

    PubMed

    Szili-Torok, Tamas; Bruining, Nico; Scholten, Marcoen; Kimman, Geert-Jan; Roelandt, Jos; Jordaens, Luc

    2003-01-01

    Interatrial septum (IAS) pacing has been proposed for the prevention of paroxysmal atrial fibrillation. IAS pacing is usually guided by fluoroscopy and P wave analysis. The authors have developed a new approach for IAS pacing using intracardiac echocardiography (ICE), and examined its effects on P wave characteristics. Cross-sectional images are acquired during pullback of the ICE transducer from the superior vena cava into the inferior vena cava by an electrocardiogram- and respiration-gated technique. The right atrium and IAS are then three-dimensionally reconstructed, and the desired pacing site is selected. After lead placement and electrical testing, another three-dimensional reconstruction is performed to verify the final lead position. The study included 14 patients. IAS pacing was achieved at seven suprafossal (SF) and seven infrafossal (IF) lead locations, all confirmed by three-dimensional imaging. IAS pacing resulted in a significant reduction of P wave duration as compared to sinus rhythm (99.7 +/- 18.7 vs 140.4 +/- 8.8 ms; P < 0.01). SF pacing was associated with a greater reduction of P wave duration than IF pacing (56.1 +/- 9.9 vs 30.2 +/- 13.6 ms; P < 0.01). P wave dispersion remained unchanged during septal pacing as compared to sinus rhythm (21.4 +/- 16.1 vs 13.5 +/- 13.9 ms; NS). Three-dimensional intracardiac echocardiography can be used to guide IAS pacing. SF pacing was associated with a greater decrease in P wave duration, suggesting that it is a preferable location to decrease interatrial conduction delay.

  4. Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennubhotla, Chakra; Castro, Jason

    2013-01-01

    In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF) - a dimensionality reduction technique - to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogeneously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
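
    A minimal NMF can be sketched with Lee-Seung multiplicative updates on synthetic data. This illustrates the general technique only, not the study's actual odor-descriptor analysis; the matrix sizes and function are hypothetical.

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Basic multiplicative-update NMF: X ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)   # update "descriptor" factors
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)   # update per-odor coordinates
    return W, H

rng = np.random.default_rng(1)
X = rng.random((30, 20))            # 30 "odors" x 20 nonnegative descriptor ratings
W, H = nmf(X, k=5)
print(W.shape, H.shape)             # (30, 5) (5, 20)
```

    The nonnegativity constraint is what makes the factors parts-based and hence easier to read as perceptual dimensions, in contrast to the signed loadings of PCA.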

  5. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data can be acquired by imaging the same area at multiple wavelengths, and a dataset normally consists of hundreds of band-images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, because of the correspondingly large data volume, hyper-spectral images are very difficult to transmit and store. Dimension reduction techniques are therefore desired to resolve this problem. Because of the high correlation and high redundancy among hyper-spectral bands, dimension reduction is a feasible way to compress the data volume. This paper proposes a novel band selection-based dimension reduction method which can adaptively select the bands that contain more information and detail. The proposed method is based on principal component analysis (PCA): it computes an index for every band, and the indexes obtained are then ranked in order of magnitude from large to small. Based on a threshold, the system can then adaptively and reasonably select bands. The proposed method overcomes the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. The performance of the proposed method has been validated in several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of hyper-spectral images with little information loss by adaptively selecting band images.
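
    The abstract does not define the paper's exact band index, so the following is a hedged sketch of one plausible PCA-guided score on synthetic data: each band is scored by its loading magnitudes on the leading principal components, and the top-scoring bands are kept. All sizes and the scoring rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((100, 50))              # 100 pixels x 50 spectral bands

Xc = cube - cube.mean(axis=0)             # center each band
C = np.cov(Xc, rowvar=False)              # 50 x 50 band covariance matrix
evals, evecs = np.linalg.eigh(C)          # eigenvalues in ascending order
top = evecs[:, -3:]                       # loadings of the 3 leading PCs
score = np.abs(top).sum(axis=1)           # per-band index (illustrative)

keep = np.argsort(score)[::-1][:10]       # keep the 10 highest-scoring bands
print(keep.size)                          # 10
```

    Unlike projecting onto the principal components themselves, selecting whole bands keeps the retained data physically interpretable, which is the advantage the paper claims over transform-based reduction.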

  6. A Corresponding Lie Algebra of a Reductive homogeneous Group and Its Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Feng; Wu, Li-Xin; Rui, Wen-Juan

    2015-05-01

    With the help of a Lie algebra of a reductive homogeneous space G/K, where G is a Lie group and K is the resulting isotropy group, we introduce a Lax pair from which an expanding (2+1)-dimensional integrable hierarchy is obtained by applying the binormial-residue representation (BRR) method. Its Hamiltonian structure is derived from the trace identity for deducing (2+1)-dimensional integrable hierarchies, which was proposed by Tu, et al. We further consider some reductions of the expanding integrable hierarchy obtained in the paper. The first reduction is just the (2+1)-dimensional AKNS hierarchy; the second type of reduction reveals an integrable coupling of the (2+1)-dimensional AKNS equation (also called the Davey-Stewartson hierarchy), a kind of (2+1)-dimensional Schrödinger equation, which was once reobtained by Tu, Feng and Zhang. It is interesting that a new (2+1)-dimensional integrable nonlinear coupled equation is generated from the reduction of part of the (2+1)-dimensional integrable coupling, which is further reduced to the standard (2+1)-dimensional diffusion equation along with a parameter. In addition, the well-known (1+1)-dimensional AKNS hierarchy and the (1+1)-dimensional nonlinear Schrödinger equation are special cases of the (2+1)-dimensional expanding integrable hierarchy. Finally, we discuss a few discrete difference equations of the diffusion equation, whose stabilities are analyzed by making use of the von Neumann condition and the Fourier method. Some numerical solutions of a special stationary initial value problem of the (2+1)-dimensional diffusion equation are obtained, and the resulting convergence and error estimates are investigated. Supported by the Innovation Team of Jiangsu Province hosted by China University of Mining and Technology (2014), the National Natural Science Foundation of China under Grant No. 11371361, the Fundamental Research Funds for the Central Universities (2013XK03), and the Natural Science Foundation of Shandong Province under Grant No. ZR2013AL016

  7. Particle displacement tracking applied to air flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turn-around times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large memory buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640 x 480 pixels) is maintained in the acquired images, providing high resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.

  8. Integrative sparse principal component analysis of gene expression data.

    PubMed

    Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge

    2017-12-01

    In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis, other multi-dataset techniques, and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance. © 2017 WILEY PERIODICALS, INC.
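
    As a generic illustration of what a sparse loading looks like, here is the truncated power method, a common sparse-PCA heuristic, run on synthetic data. This is not the iSPCA estimator (which adds group and contrasted penalties across datasets); the sparsity level s and the data are assumptions.

```python
import numpy as np

def sparse_pc(X, s=5, iters=100):
    """One sparse principal loading via truncated power iterations:
    multiply by the covariance, keep only the s largest entries, renormalize."""
    C = np.cov(X, rowvar=False)
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(iters):
        v = C @ v
        small = np.argsort(np.abs(v))[:-s]     # indices of all but s largest
        v[small] = 0.0
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 15))              # 40 samples x 15 "genes"
v = sparse_pc(X, s=5)
print(int(np.count_nonzero(v)))                # 5
```

    The appeal for gene expression data is interpretability: only a handful of genes receive nonzero loadings, so each component reads as a small gene signature.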

  9. Correlation Between Residual Displacement and Osteonecrosis of the Femoral Head Following Cannulated Screw Fixation of Femoral Neck Fractures.

    PubMed

    Wang, Chen; Xu, Gui-Jun; Han, Zhe; Jiang, Xuan; Zhang, Cheng-Bao; Dong, Qiang; Ma, Jian-Xiong; Ma, Xin-Long

    2015-11-01

    The aim of the study was to introduce a new method for measuring the residual displacement of the femoral head after internal fixation, to explore the relationship between residual displacement and osteonecrosis of the femoral head, and to evaluate the risk factors associated with osteonecrosis of the femoral head in patients with femoral neck fractures treated by closed reduction and percutaneous cannulated screw fixation. One hundred and fifty patients who sustained intracapsular femoral neck fractures between January 2011 and April 2013 were enrolled in the study. All were treated with closed reduction and percutaneous cannulated screw internal fixation. The residual displacement of the femoral head after surgery was measured by 3-dimensional reconstruction, which evaluated the quality of the reduction. Other data that might affect prognosis were also obtained from outpatient follow-up, telephone calls, or case reviews. Multivariate logistic regression analysis was applied to assess the intrinsic relationship between the risk factors and osteonecrosis of the femoral head. Osteonecrosis of the femoral head occurred in 27 patients (18%). Significant differences were observed regarding the residual displacement of the femoral head and the preoperative Garden classification. Moreover, with the new technique we found some residual displacement of the femoral head in all patients judged to have a high-quality reduction based on x-ray. There was a close relationship between residual displacement and ONFH. Evaluating the quality of reduction by x-ray alone is therefore limited. Three-dimensional reconstruction and digital measurement is a new, more accurate method to assess the quality of reduction. Residual displacement of the femoral head and the preoperative Garden classification were risk factors for osteonecrosis of the femoral head. High-quality reduction is necessary to avoid complications.

  10. On the generation of cnoidal waves in ion beam-dusty plasma containing superthermal electrons and ions

    NASA Astrophysics Data System (ADS)

    El-Bedwehy, N. A.

    2016-07-01

    The reductive perturbation technique is used to investigate an ion beam-dusty plasma system consisting of dust grains of two opposite polarities, superthermal electrons and ions, and an ion beam. A two-dimensional Kadomtsev-Petviashvili equation is derived. The solution of this equation, employing Painlevé analysis, leads to cnoidal waves. The dependence of the structural features of these waves on the physical plasma parameters is investigated.

  11. On the generation of cnoidal waves in ion beam-dusty plasma containing superthermal electrons and ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El-Bedwehy, N. A., E-mail: nab-elbedwehy@yahoo.com

    2016-07-15

    The reductive perturbation technique is used to investigate an ion beam-dusty plasma system consisting of dust grains of two opposite polarities, superthermal electrons and ions, and an ion beam. A two-dimensional Kadomtsev–Petviashvili equation is derived. The solution of this equation, employing Painlevé analysis, leads to cnoidal waves. The dependence of the structural features of these waves on the physical plasma parameters is investigated.

  12. A Dimensionality Reduction Technique for Enhancing Information Context.

    DTIC Science & Technology

    1980-06-01

    table, memory requirements for the difference arrays are based on the FORTRAN G programming language as implemented on an IBM 360/67. Single...the greatest amount of insight. All studies were performed on an IBM 360/67. Transformation 53 numerical results were produced as well as two...the origin to (19,19,19,19,19,19,19,19,19,19). Two classes were generated in each case. The samples were synthetically derived using the IBM 360/67 and

  13. Model-based iterative reconstruction in low-dose CT colonography-feasibility study in 65 patients for symptomatic investigation.

    PubMed

    Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl

    2015-05-01

    To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled in the study and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image analyses, and polyp detection were assessed. Objective image noise analysis demonstrated significant noise reduction using the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7), STD (dose-length product, 483.6). LD MBIR CTC objectively shows reduced image noise with the parameters used in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference but, because of small numbers, needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR for CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  14. Numerical modelling of a peripheral arterial stenosis using dimensionally reduced models and kernel methods.

    PubMed

    Köppl, Tobias; Santin, Gabriele; Haasdonk, Bernard; Helmig, Rainer

    2018-05-06

    In this work, we consider two kinds of model reduction techniques to simulate blood flow through the largest systemic arteries, where a stenosis is located in a peripheral artery, i.e., an artery located far away from the heart. For our simulations we place the stenosis in one of the tibial arteries of the right lower leg (the right posterior tibial artery). The model reduction techniques used are, on the one hand, dimensionally reduced models (1-D and 0-D models, the so-called mixed-dimension model) and, on the other hand, surrogate models produced by kernel methods. Both methods are combined in such a way that the mixed-dimension models yield training data for the surrogate model, where the surrogate model is parametrised by the degree of narrowing of the peripheral stenosis. By means of a well-trained surrogate model, we show that simulation data can be reproduced with satisfactory accuracy and that parameter optimisation or state estimation problems can be solved in a very efficient way. Furthermore, it is demonstrated that a surrogate model enables us to present, after a very short simulation time, the impact of a varying degree of stenosis on blood flow, obtaining a speedup of several orders of magnitude over the full model. This article is protected by copyright. All rights reserved.

  15. Analytic integration of real-virtual counterterms in NNLO jet cross sections I

    NASA Astrophysics Data System (ADS)

    Aglietti, Ugo; Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Trócsányi, Zoltán

    2008-09-01

    We present analytic evaluations of some integrals needed to give explicitly the integrated real-virtual counterterms, based on a recently proposed subtraction scheme for next-to-next-to-leading order (NNLO) jet cross sections. After an algebraic reduction of the integrals, integration-by-parts identities are used for the reduction to master integrals and for the computation of the master integrals themselves by means of differential equations. The results are written in terms of one- and two-dimensional harmonic polylogarithms, once an extension of the standard basis is made. We expect that the techniques described here will be useful in computing other integrals emerging in calculations in perturbative quantum field theories.

  16. Method of thermal strain hysteresis reduction in metal matrix composites

    NASA Technical Reports Server (NTRS)

    Dries, Gregory A. (Inventor); Tompkins, Stephen S. (Inventor)

    1987-01-01

    A method is disclosed for treating graphite-reinforced metal matrix composites so as to eliminate thermal strain hysteresis and impart dimensional stability through a large thermal cycle. The method is applied to the composite post-fabrication and is effective on metal matrix materials using graphite fibers manufactured by both the hot roll bonding and diffusion bonding techniques. The method consists of first heat treating the material in a solution anneal oven, followed by a water quench, and then subjecting the material to a cryogenic treatment in a cryogenic oven. This heat treatment and cryogenic stress relief is effective in imparting dimensional stability and reduced thermal strain hysteresis in the material over a -250 °F to +250 °F thermal cycle.

  17. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification was proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks. PMID:24532862

  18. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification was proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks.

  19. Metric dimensional reduction at singularities with implications to Quantum Gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoica, Ovidiu Cristinel, E-mail: holotronix@gmail.com

    2014-08-15

    A series of old and recent theoretical observations suggests that the quantization of gravity would be feasible, and some problems of Quantum Field Theory would go away, if somehow the spacetime underwent a dimensional reduction at high energy scales. But an identification of the deep mechanism causing this dimensional reduction would still be desirable. The main contribution of this article is to show that dimensional reduction effects are due to General Relativity at singularities and do not need to be postulated ad hoc. Recent advances in understanding the geometry of singularities do not require modification of General Relativity, being just non-singular extensions of its mathematics to the limit cases. They turn out to work well for some known types of cosmological singularities (black holes and FLRW Big Bang), allowing a choice of the fundamental geometric invariants and physical quantities which remain regular. The resulting equations are equivalent to the standard ones outside the singularities. One consequence of this mathematical approach to the singularities in General Relativity is a special, (geo)metric type of dimensional reduction: at singularities, the metric tensor becomes degenerate in certain spacetime directions, and some properties of the fields become independent of those directions. Effectively, it is as if one or more dimensions of spacetime vanish at singularities. This suggests that it is worth exploring the possibility that the geometry of singularities leads naturally to the spontaneous dimensional reduction needed by Quantum Gravity. - Highlights: • The singularities we introduce are described by finite geometric/physical objects. • Our singularities are accompanied by dimensional reduction effects. • They affect the metric, the measure, the topology, the gravitational DOF (Weyl = 0). • Effects proposed in other approaches to Quantum Gravity are obtained naturally. • The geometric dimensional reduction obtained opens new ways for Quantum Gravity.

  20. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  1. Bladder radiotherapy treatment: A retrospective comparison of 3-dimensional conformal radiotherapy, intensity-modulated radiation therapy, and volumetric-modulated arc therapy plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasciuti, Katia, E-mail: k.pasciuti@virgilio.it; Kuthpady, Shrinivas; Anderson, Anne

    To examine tumor and organ response when different radiotherapy planning techniques are used. Ten patients with confirmed bladder tumors were first treated using 3-dimensional conformal radiotherapy (3DCRT), and subsequently the original plans were re-optimized using the intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) techniques. Target coverage in terms of conformity and homogeneity index, TCP, and organ dose limits, including integral dose analysis, were evaluated. In addition, MUs and treatment delivery times were compared. Better minimum target coverage (1.3%) was observed in VMAT plans when compared with 3DCRT and IMRT ones, confirmed by statistically significant conformity index (CI) results. Large differences were observed among techniques in the integral dose results of the femoral heads. Although no statistically significant differences were reported in rectum and tissue, a larger energy deposition was observed in 3DCRT plans. In any case, VMAT plans provided better organ and tissue sparing, confirmed also by the normal tissue complication probability (NTCP) analysis, as well as a better tumor control probability (TCP) result. Our analysis showed better overall results for planning using VMAT techniques. Furthermore, the total treatment time reduction observed among techniques, including gantry and collimator rotation, could encourage use of the more recent technique, reducing target movements and patient discomfort.

  2. A detailed view on Model-Based Multifactor Dimensionality Reduction for detecting gene-gene interactions in case-control data in the absence and presence of noise

    PubMed Central

    CATTAERT, TOM; CALLE, M. LUZ; DUDEK, SCOTT M.; MAHACHIE JOHN, JESTINAH M.; VAN LISHOUT, FRANÇOIS; URREA, VICTOR; RITCHIE, MARYLYN D.; VAN STEEN, KRISTEL

    2010-01-01

    Analyzing the combined effects of genes and/or environmental factors on the development of complex diseases is a great challenge from both the statistical and computational perspective, even using a relatively small number of genetic and non-genetic exposures. Several data mining methods have been proposed for interaction analysis, among them the Multifactor Dimensionality Reduction method (MDR), which has proven its utility in a variety of theoretical and practical settings. Model-Based Multifactor Dimensionality Reduction (MB-MDR), a relatively new MDR-based technique that unifies the best of the non-parametric and parametric worlds, was developed to address some of the remaining concerns that accompany an MDR analysis. These include the restriction to univariate, dichotomous traits, the absence of flexible ways to adjust for lower-order effects and important confounders, and the difficulty of highlighting epistasis effects when too many multi-locus genotype cells are pooled into two new genotype groups. Whereas the true value of MB-MDR can only reveal itself through extensive applications of the method in a variety of real-life scenarios, here we investigate the empirical power of MB-MDR to detect gene-gene interactions in the absence of any noise and in the presence of genotyping error, missing data, phenocopy, and genetic heterogeneity. For the considered simulation settings, we show that the power is generally higher for MB-MDR than for MDR, in particular in the presence of genetic heterogeneity, phenocopy, or low minor allele frequencies. PMID:21158747

  3. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents a theoretical computational performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and the validation on synthetic images demonstrate a significant reduction in computational time. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.

  4. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  5. On the dimension of complex responses in nonlinear structural vibrations

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Spottswood, S. M.

    2016-07-01

    The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling, which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm, as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2 to 10 as the system response moved from simple, low-level harmonic motion to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide for developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, all of which are capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.

  6. Extended symmetry analysis of generalized Burgers equations

    NASA Astrophysics Data System (ADS)

    Pocheketa, Oleksandr A.; Popovych, Roman O.

    2017-10-01

    Using enhanced classification techniques, we carry out the extended symmetry analysis of the class of generalized Burgers equations of the form ut + uux + f(t, x)uxx = 0. This enhances all the previous results on symmetries of these equations and includes the description of admissible transformations, Lie symmetries, Lie and nonclassical reductions, hidden symmetries, conservation laws, potential admissible transformations, and potential symmetries. The study is based on the fact that the class is normalized, and its equivalence group is finite-dimensional.

  7. Femtosecond timing measurement and control using ultrafast organic thin films

    NASA Astrophysics Data System (ADS)

    Naruse, Makoto; Mitsu, Hiroyuki; Furuki, Makoto; Iwasa, Izumi; Sato, Yasuhiro; Tatsuura, Satoshi; Tian, Minquan

    2003-12-01

    We show a femtosecond timing measurement and control technique using a squarylium dye J-aggregate film, which is an organic thin film that acts as an ultrafast two-dimensional optical switch. Optical pulse timing is directly mapped to space-domain position on the film, and the large area and ultrafast response offer a femtosecond-resolved, large dynamic range, real-time, multichannel timing measurement capability. A timing fluctuation (jitter, wander, and skew) reduction architecture is presented and experimentally demonstrated.

  8. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse of dimensionality’ via KPCA, identifying a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
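
    The KPCA ingredient of the approach can be illustrated generically. The sketch below (a textbook RBF kernel PCA in numpy, not the authors' implementation; `gamma` and the component count are illustrative parameters) centers the kernel matrix in feature space before eigendecomposition:

```python
import numpy as np

def rbf_kernel_pca(X, gamma, n_components):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; take the largest ones.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # Feature-space projections of the training points.
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

    The nonlinear low-dimensional coordinates returned here play the role of the feature space on which an MCMC sampler such as LMCMC can then operate.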

  9. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projection is a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
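
    The random-projection step can be demonstrated independently of the reinforcement-learning machinery. A minimal numpy sketch (dimensions here are arbitrary, not taken from the paper) projects data through a Gaussian matrix and checks that pairwise distances are approximately preserved, as the Johnson-Lindenstrauss lemma guarantees:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, d = 200, 1000, 50          # samples, ambient dim, target dim

X = rng.normal(size=(n, D))

# Gaussian projection with entry variance 1/d preserves squared distances
# in expectation; note it is data-independent (no training required).
R = rng.normal(scale=1.0 / np.sqrt(d), size=(D, d))
Y = X @ R

# Distortion of pairwise distances for a few sample pairs.
pairs = [(0, 1), (2, 3), (4, 5)]
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in pairs]
```

    With d = 50 the typical relative distortion is on the order of 1/sqrt(2d), i.e. about 10%, which is why such projections are a cheap preprocessing step for high-dimensional feature spaces.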

  10. Non-uniformly weighted sampling for faster localized two-dimensional correlated spectroscopy of the brain in vivo

    NASA Astrophysics Data System (ADS)

    Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish

    2017-04-01

    Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D-MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated 25% shorter acquisition time while maintaining comparable SNR in humans (+0.3%) and phantom studies (+6.0%) compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY, by facilitating a reduction in scan time without affecting signal quality.

  11. High-accuracy user identification using EEG biometrics.

    PubMed

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
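
    The reduction-then-classify pipeline being compared can be sketched generically. The code below is not the authors' pipeline; PCA and a nearest-centroid rule stand in as placeholders for whichever reducer/classifier combination is under test, and all function names are illustrative:

```python
import numpy as np

def pca_fit(X, k):
    # Center the data and keep the top-k right singular vectors as components.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, comps):
    # Project onto the k principal directions.
    return (X - mu) @ comps.T

def nearest_centroid_fit(Z, y):
    # Per-class mean in the reduced space.
    return {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(Z, centroids):
    labels = list(centroids)
    C = np.stack([centroids[c] for c in labels])
    return np.array([labels[np.argmin(np.linalg.norm(C - z, axis=1))]
                     for z in Z])
```

    Swapping the reducer (e.g. a manifold learner) or the classifier (e.g. an SVM) changes only the two fit/transform stages, which is what makes such combinations easy to benchmark against each other.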

  12. Some comments on particle image displacement velocimetry

    NASA Technical Reports Server (NTRS)

    Lourenco, L. M.

    1988-01-01

    Laser speckle velocimetry (LSV), or particle image displacement velocimetry, is introduced. This technique provides the simultaneous visualization of the two-dimensional streamline pattern in unsteady flows as well as the quantification of the velocity field over an entire plane. The advantage of this technique is that the velocity field can be measured over an entire plane of the flow field simultaneously, with accuracy and spatial resolution. From this, the instantaneous vorticity field can be easily obtained. This constitutes a great asset for the study of a variety of flows that evolve stochastically in both space and time. The basic concept of LSV, methods of data acquisition and reduction, examples of its use, and parameters that affect its utilization are described.

  13. Efficient numerical simulation of an electrothermal de-icer pad

    NASA Technical Reports Server (NTRS)

    Roelke, R. J.; Keith, T. G., Jr.; De Witt, K. J.; Wright, W. B.

    1987-01-01

    In this paper, a new approach to calculate the transient thermal behavior of an iced electrothermal de-icer pad was developed. The method of splines was used to obtain the temperature distribution within the layered pad. Splines were used in order to create a tridiagonal system of equations that could be directly solved by Gauss elimination. The Stefan problem was solved using the enthalpy method along with a recent implicit technique. Only one to three iterations were needed to locate the melt front during any time step. Computational times were shown to be greatly reduced over those of an existing one-dimensional procedure without any reduction in accuracy; the current technique was more than 10 times faster.

  14. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models

    PubMed Central

    Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam

    2016-01-01

    Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936

  15. Application of optical correlation techniques to particle imaging velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1988-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of velocity vectors across an extended 2-dimensional region of the flow field. The application of optical correlation techniques to the analysis of multiple exposure laser light sheet photographs can reduce and/or simplify the data reduction time and hardware. Here, Matched Spatial Filters (MSF) are used in a pattern recognition system. Usually, MSFs are used to identify assembly-line parts. In this application, the MSFs are used to identify the iso-velocity vector contours in the flow. The patterns to be recognized are the recorded particle images in a pulsed laser light sheet photograph. Measurement of the direction of the particle image displacements between exposures yields the velocity vector. The particle image exposure sequence is designed such that the velocity vector direction is determined unambiguously. A global analysis technique is used in comparison to the more common particle tracking algorithms and Young's fringe analysis technique.

  16. Markerless gating for lung cancer radiotherapy based on machine learning techniques

    NASA Astrophysics Data System (ADS)

    Lin, Tong; Li, Ruijiang; Tang, Xiaoli; Dy, Jennifer G.; Jiang, Steve B.

    2009-03-01

    In lung cancer radiotherapy, radiation to a mobile target can be delivered by respiratory gating, for which we need to know whether the target is inside or outside a predefined gating window at any time point during the treatment. This can be achieved by tracking one or more fiducial markers implanted inside or near the target, either fluoroscopically or electromagnetically. However, the clinical implementation of marker tracking is limited for lung cancer radiotherapy mainly due to the risk of pneumothorax. Therefore, gating without implanted fiducial markers is a promising clinical direction. We have developed several template-matching methods for fluoroscopic markerless gating. Recently, we have modeled the gating problem as a binary pattern classification problem, in which principal component analysis (PCA) and support vector machine (SVM) are combined to perform the classification task. Following the same framework, we investigated different combinations of dimensionality reduction techniques (PCA and four nonlinear manifold learning methods) and two machine learning classification methods, artificial neural networks (ANN) and SVM. Performance was evaluated on ten fluoroscopic image sequences of nine lung cancer patients. We found that among all combinations of dimensionality reduction techniques and classification methods, PCA combined with either ANN or SVM achieved a better performance than the other nonlinear manifold learning methods. ANN when combined with PCA achieves a better performance than SVM in terms of classification accuracy and recall rate, although the target coverage is similar for the two classification methods. Furthermore, the running time for both ANN and SVM with PCA is within tolerance for real-time applications. Overall, ANN combined with PCA is a better candidate than the other combinations we investigated in this work for real-time gated radiotherapy.

  17. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy.

  18. Lie symmetry analysis and reduction for exact solution of (2+1)-dimensional Bogoyavlensky-Konopelchenko equation by geometric approach

    NASA Astrophysics Data System (ADS)

    Ray, S. Saha

    2018-04-01

    In this paper, the symmetry analysis and similarity reduction of the (2+1)-dimensional Bogoyavlensky-Konopelchenko (B-K) equation are investigated by means of the geometric approach of an invariance group, which is equivalent to the classical Lie symmetry method. Using the extended Harrison and Estabrook’s differential forms approach, the infinitesimal generators for the (2+1)-dimensional B-K equation are obtained. Firstly, the vector field associated with the Lie group of transformation is derived. Then the symmetry reduction and the corresponding explicit exact solution of the (2+1)-dimensional B-K equation are obtained.

  19. Model reduction of the numerical analysis of Low Impact Developments techniques

    NASA Astrophysics Data System (ADS)

    Brunetti, Giuseppe; Šimůnek, Jirka; Wöhling, Thomas; Piro, Patrizia

    2017-04-01

    Mechanistic models have proven to be accurate and reliable tools for the numerical analysis of the hydrological behavior of Low Impact Development (LID) techniques. However, their widespread adoption is limited by their complexity and computational cost. Recent studies have tried to address this issue by investigating the application of new techniques, such as surrogate-based modeling. However, current results are still limited and fragmented. One such approach, the Model Order Reduction (MOR) technique, can represent a valuable tool for reducing the computational complexity of a numerical problem by computing an approximation of the original model. While this technique has been extensively used in water-related problems, no studies have evaluated its use in LID modeling. Thus, the main aim of this study is to apply the MOR technique to the development of a reduced order model (ROM) for the numerical analysis of the hydrologic behavior of LIDs, in particular green roofs. The model should be able to correctly reproduce all the hydrological processes of a green roof while reducing the computational cost. The proposed model decouples the subsurface water dynamics of a green roof into a) one-dimensional (1D) vertical flow through the green roof itself and b) one-dimensional saturated lateral flow along the impervious rooftop. The green roof is horizontally discretized in N elements. Each element represents a vertical domain, which can have different properties or boundary conditions. The 1D Richards equation is used to simulate flow in the substrate and drainage layers. Simulated outflow from the vertical domain is used as a recharge term for saturated lateral flow, which is described using the kinematic wave approximation of the Boussinesq equation. The proposed model has been compared with the mechanistic model HYDRUS-2D, which numerically solves the Richards equation for the whole domain. The HYDRUS-1D code has been used for the description of vertical flow, while a Finite Volume Scheme has been adopted for lateral flow. Two scenarios involving flat and steep green roofs were analyzed. Results confirmed the accuracy of the reduced order model, which was able to reproduce both subsurface outflow and the moisture distribution in the green roof, significantly reducing the computational cost.

  20. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods containing 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
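
    A univariate version of the underlying spectral representation (a minimal sketch only; the paper's dimension-reduction constraint on the random variables is not reproduced here) simulates a stationary process from a target one-sided spectrum, with the cosine series evaluated via the FFT:

```python
import numpy as np

def simulate_stationary(S, dw, n_samples, rng):
    # Spectral representation: X(t_m) = sum_k sqrt(2 S_k dw) cos(w_k t_m + phi_k)
    # with i.i.d. uniform random phases phi_k, so that Var[X] = sum_k S_k dw.
    N = len(S)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, N))
    A = np.sqrt(2.0 * np.asarray(S) * dw) * np.exp(1j * phi)
    # The cosine series is the real part of an inverse DFT; numpy's ifft
    # includes a 1/(2N) factor, so scale back by 2N.
    return np.real(np.fft.ifft(A, n=2 * N, axis=1)) * (2 * N)
```

    The FFT evaluation is what makes large ensembles cheap; the dimension-reduction schemes in the paper additionally tie the phases to a few elementary random variables instead of drawing N of them independently.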

  1. The treatment of an osteochondral shearing fracture-dislocation of the head of the proximal phalanx: a case report.

    PubMed

    Harness, Neil; Jupiter, Jesse B

    2004-09-01

    We report the morphology and treatment of a proximal interphalangeal joint dislocation resulting in an injury to the articular surface of the proximal phalanx and avulsion of the radial collateral ligament from its proximal origin. A large osteochondral fragment was sheared from the radial articular surface of the proximal phalanx and remained displaced volarly after reduction of the joint. Plain radiographs and 2- and 3-dimensional computed tomography images were used to evaluate this unusual injury before surgery. Open reduction and internal fixation using a small K-wire and figure-of-eight wire technique restored the articular surface of the head of the proximal phalanx and gave a satisfactory functional result.

  2. Locally linear embedding: dimension reduction of massive protostellar spectra

    NASA Astrophysics Data System (ADS)

    Ward, J. L.; Lumsden, S. L.

    2016-09-01

    We present the results of the application of locally linear embedding (LLE) to reduce the dimensionality of dereddened and continuum-subtracted near-infrared spectra, using a combination of models and real spectra of massive protostars selected from the Red MSX Source survey database. A brief comparison is also made with two other dimension reduction techniques, principal component analysis (PCA) and Isomap, using the same set of spectra, as well as with a more advanced form of LLE, Hessian locally linear embedding. We find that whilst LLE certainly has its limitations, it significantly outperforms both PCA and Isomap in classification of spectra based on the presence/absence of emission lines and provides a valuable tool for classification and analysis of large spectral data sets.
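
    For readers unfamiliar with the algorithm, the three steps of standard LLE (nearest neighbours, locally linear reconstruction weights, bottom eigenvectors) fit in a short numpy sketch. This is the textbook method, not the Hessian-LLE variant compared in the paper, and the regularization constant is an illustrative choice:

```python
import numpy as np

def lle(X, k, d, reg=1e-3):
    # 1) k nearest neighbours of each point (brute-force distances).
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]
    # 2) Reconstruction weights: solve the regularised local Gram system,
    #    then normalise so the weights sum to one.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)   # regularise (G is rank-deficient if k > dim)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()
    # 3) Embedding: bottom eigenvectors of M = (I - W)^T (I - W),
    #    discarding the constant eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]
```

    Because the weights are invariant to rotation, scaling, and translation of each neighbourhood, the embedding preserves local geometry, which is why LLE can separate spectra that PCA's single global linear projection mixes together.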

  3. Data on Support Vector Machines (SVM) model to forecast photovoltaic power.

    PubMed

    Malvoni, M; De Giorgi, M G; Congedo, P M

    2016-12-01

    The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled "Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data" (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.

  4. Fabrication of cross-shaped Cu-nanowire resistive memory devices using a rapid, scalable, and designable inorganic-nanowire-digital-alignment technique (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Wentao; Lee, Yeongjun; Min, Sung-Yong; Park, Cheolmin; Lee, Tae-Woo

    2016-09-01

Resistive random-access memory (RRAM) is a candidate next-generation nonvolatile memory due to its high access speed, high density, and ease of fabrication. In particular, cross-point access allows cross-bar arrays that lead to high-density cells in a two-dimensional planar structure. Use of such designs could be compatible with the aggressive scaling down of memory devices, but existing methods such as optical or e-beam lithographic approaches are too complicated. One-dimensional inorganic nanowires (i-NWs) are regarded as ideal components of nanoelectronics to circumvent the limitations of conventional lithographic approaches. However, post-growth alignment of these i-NWs precisely on a large area with individual control is still a difficult challenge. Here, we report a simple, inexpensive, and rapid method to fabricate two-dimensional arrays of perpendicularly aligned, individually conductive Cu-NWs with a nanometer-scale CuxO layer sandwiched at each cross point, by using an inorganic-nanowire-digital-alignment technique (INDAT) and a one-step reduction process. In this approach, the oxide layer is self-formed and patterned, so conventional deposition and lithography are not necessary. INDAT eliminates the difficulties of alignment and scalable fabrication that are encountered when using currently available techniques that use inorganic nanowires. This simple process facilitates fabrication of cross-point nonvolatile memristor arrays. Fabricated arrays had reproducible resistive switching behavior, a high on/off current ratio (Ion/Ioff ≈ 10^6), and extensive cycling endurance. This is the first report of memristors with the resistive switching oxide layer self-formed, self-patterned, and self-positioned; we envision that the new features of the technique will provide great opportunities for future nano-electronic circuits.

  5. Four-dimensional \\mathcal{N} = 2 supersymmetric theory with boundary as a two-dimensional complex Toda theory

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Tan, Meng-Chwan; Vasko, Petr; Zhao, Qin

    2017-05-01

We perform a series of dimensional reductions of the 6d, \\mathcal{N} = (2, 0) SCFT on S^2 × Σ × I × S^1 down to 2d on Σ. The reductions are performed in three steps: (i) a reduction on S^1 (accompanied by a topological twist along Σ) leading to a supersymmetric Yang-Mills theory on S^2 × Σ × I, (ii) a further reduction on S^2 resulting in a complex Chern-Simons theory defined on Σ × I, with the real part of the complex Chern-Simons level being zero, and the imaginary part being proportional to the ratio of the radii of S^2 and S^1, and (iii) a final reduction to the boundary modes of complex Chern-Simons theory with the Nahm pole boundary condition at both ends of the interval I, which gives rise to a complex Toda CFT on the Riemann surface Σ. As the reduction of the 6d theory on Σ would give rise to an \\mathcal{N} = 2 supersymmetric theory on S^2 × I × S^1, our results imply a 4d-2d duality between four-dimensional \\mathcal{N} = 2 supersymmetric theory with boundary and two-dimensional complex Toda theory.

  6. Meta-modelling, visualization and emulation of multi-dimensional data for virtual production intelligence

    NASA Astrophysics Data System (ADS)

    Schulz, Wolfgang; Hermanns, Torsten; Al Khawli, Toufik

    2017-07-01

Decision making for competitive production in high-wage countries is a daily challenge in which both rational and irrational methods are used. The design of decision-making processes is an intriguing, discipline-spanning science. However, there are gaps in understanding the impact of the known mathematical and procedural methods on the use of rational choice theory. Following Benjamin Franklin's rule for decision making, formulated in London in 1772 and called "Prudential Algebra" in the sense of prudential reasons, one of the major ingredients of Meta-Modelling can be identified: a single algebraic value labelling the results (criteria settings) of alternative decisions (parameter settings). This work describes advances in Meta-Modelling techniques applied to multi-dimensional and multi-criterial optimization by identifying the persistence level of the corresponding Morse-Smale complex. Implementations for laser cutting and laser drilling are presented, including the generation of fast and frugal Meta-Models with controlled error based on mathematical model reduction. Reduced models are derived to avoid any unnecessary complexity. Both model reduction and analysis of the multi-dimensional parameter space are used to enable interactive communication between discovery finders and invention makers. Emulators and visualizations of a metamodel are introduced as components of Virtual Production Intelligence, making the methods of Scientific Design Thinking applicable and improving the skills of both developers and operators.

  7. Visual analysis of mass cytometry data by hierarchical stochastic neighbour embedding reveals rare cell types.

    PubMed

    van Unen, Vincent; Höllt, Thomas; Pezzotti, Nicola; Li, Na; Reinders, Marcel J T; Eisemann, Elmar; Koning, Frits; Vilanova, Anna; Lelieveldt, Boudewijn P F

    2017-11-23

Mass cytometry allows high-resolution dissection of the cellular composition of the immune system. However, the high dimensionality, large size, and non-linear structure of the data pose considerable challenges for data analysis. In particular, dimensionality reduction-based techniques like t-SNE offer single-cell resolution but are limited in the number of cells that can be analyzed. Here we introduce Hierarchical Stochastic Neighbor Embedding (HSNE) for the analysis of mass cytometry data sets. HSNE constructs a hierarchy of non-linear similarities that can be interactively explored with a stepwise increase in detail up to the single-cell level. We apply HSNE to a study on gastrointestinal disorders and three other available mass cytometry data sets. We find that HSNE efficiently replicates previous observations and identifies rare cell populations that were previously missed due to downsampling. Thus, HSNE removes the scalability limit of conventional t-SNE analysis, a feature that makes it highly suitable for the analysis of massive high-dimensional data sets.
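HSNE itself ships with the authors' tooling rather than a general-purpose library, but the conventional t-SNE baseline whose scalability limit it removes can be sketched with scikit-learn; the synthetic "marker intensity" matrix and all parameters below are assumptions for illustration:

```python
# Hedged sketch: conventional t-SNE on synthetic per-cell marker data with
# two populations; HSNE addresses this approach's cell-count scalability limit.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 20)),   # population A (100 cells)
               rng.normal(4, 1, (100, 20))])  # population B (100 cells)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)  # one 2-D point per cell
```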

  8. The use of kernel local Fisher discriminant analysis for the channelization of the Hotelling model observer

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.

    2015-03-01

It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, finding an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality can be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.

  9. A strategy for analysis of (molecular) equilibrium simulations: Configuration space density estimation, clustering, and visualization

    NASA Astrophysics Data System (ADS)

    Hamprecht, Fred A.; Peter, Christine; Daura, Xavier; Thiel, Walter; van Gunsteren, Wilfred F.

    2001-02-01

We propose an approach for summarizing the output of long simulations of complex systems, affording a rapid overview and interpretation. First, multidimensional scaling techniques are used in conjunction with dimension reduction methods to obtain a low-dimensional representation of the configuration space explored by the system. A nonparametric estimate of the density of states in this subspace is then obtained using kernel methods. The free energy surface is calculated from that density, and the configurations produced in the simulation are then clustered according to the topography of that surface, such that all configurations belonging to one local free energy minimum form one class. This topographical cluster analysis is performed using basin spanning trees, which we introduce as subgraphs of Delaunay triangulations. Free energy surfaces obtained in dimensions lower than four can be visualized directly using iso-contours and iso-surfaces. Basin spanning trees also afford a glimpse of higher-dimensional topographies. The procedure is illustrated using molecular dynamics simulations on the reversible folding of peptide analogs. Finally, we emphasize the intimate relation of density estimation techniques to modern enhanced sampling algorithms.
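The first two steps of this strategy (low-dimensional projection, then kernel density estimation) can be sketched as below; PCA stands in for the scaling/reduction step and the trajectory is random placeholder data, both assumptions for illustration:

```python
# Hedged sketch: project configurations to a low-dimensional subspace, then
# estimate the density of states there with a Gaussian kernel.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
traj = rng.normal(size=(1000, 30))   # placeholder: 1000 configurations, 30 coords

Z = PCA(n_components=2).fit_transform(traj)           # low-dimensional subspace
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(Z)
log_density = kde.score_samples(Z)                    # log density of states
free_energy = -log_density                            # free energy, up to kT and a constant
print(free_energy.shape)
```

Clustering configurations by the basins of this surface would be the paper's remaining step.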

  10. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    NASA Astrophysics Data System (ADS)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation, and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.

  11. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    PubMed

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

Visualizing data by dimensionality reduction is an important strategy in bioinformatics that can help discover hidden data properties and detect data quality issues, e.g. data noise and inappropriately labeled data. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated in discriminating biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which can help assess the quality of crowdsourcing-based synthetic biology databases and aid biobrick selection.
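The pipeline this record describes (normalized edit distance, manifold embedding, K-means) can be sketched as below; the toy sequences and the exp(-d) affinity conversion are illustrative assumptions, and scikit-learn's SpectralEmbedding plays the role of Laplacian Eigenmaps:

```python
# Hedged sketch: normalized edit distances between toy sequences, embedded via
# Laplacian Eigenmaps (SpectralEmbedding) and clustered with K-means.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    d = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(b) + 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a[i - 1] != b[j - 1]))
    return d[-1]

seqs = ["ATGCATGC", "ATGCATGG", "ATGCTTGC",   # toy "biobricks", two clear groups
        "TTTTAAAA", "TTTTAAAT", "TTTAAAAA"]
D = np.array([[edit_distance(a, b) / max(len(a), len(b)) for b in seqs]
              for a in seqs])                 # normalized edit distances

A = np.exp(-D)                                # distances -> affinities (assumed kernel)
emb = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(A)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(labels)  # the two sequence groups fall into two clusters
```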

  12. On the reduction of 4d $$ \\mathcal{N}=1 $$ theories on $$ {\\mathbb{S}}^2 $$

    DOE PAGES

    Gadde, Abhijit; Razamat, Shlomo S.; Willett, Brian

    2015-11-24

Here, we discuss reductions of general $$ \\mathcal{N}=1 $$ four-dimensional gauge theories on $$ {\\mathbb{S}}^2 $$. The effective two-dimensional theory one obtains depends on the details of the coupling of the theory to background fields, which can be translated to a choice of R-symmetry. We argue that, for special choices of R-symmetry, the resulting two-dimensional theory has a natural interpretation as an $$ \\mathcal{N}=(0,2) $$ gauge theory. As an application of our general observations, we discuss reductions of $$ \\mathcal{N}=1 $$ and $$ \\mathcal{N}=2 $$ dualities and argue that they imply certain two-dimensional dualities.

  13. The use of 3D-printed titanium mesh tray in treating complex comminuted mandibular fractures

    PubMed Central

    Ma, Junli; Ma, Limin; Wang, Zhifa; Zhu, Xiongjie; Wang, Weijian

    2017-01-01

Abstract Rationale: Precise bony reduction and reconstruction of optimal contour in treating comminuted mandibular fractures is very difficult using traditional techniques and devices. The aim of this report is to introduce our experience in using virtual surgery and a three-dimensional (3D) printing technique in treating this clinical challenge. Patient concerns: A 26-year-old man presented with severe trauma in the maxillofacial area due to a fall from height. Diagnosis: Computed tomography images revealed midface fractures and comminuted mandibular fracture including bilateral condyles. Interventions and outcomes: The computed tomography data were used to construct 3D cranio-maxillofacial models; then the displaced bone fragments were virtually reduced. On the basis of the finalized model, a customized titanium mesh tray was designed and fabricated using selective laser melting technology. During the surgery, a submandibular approach was adopted to repair the mandibular fracture. The reduction and fixation were performed according to the preoperative plan, and the bone defects in the mental area were reconstructed with an iliac bone graft. The 3D-printed mesh tray served as an intraoperative template and carrier of the bone graft. The healing process was uneventful, and the patient was satisfied with the mandible contour. Lessons: Virtual surgical planning combined with 3D printing technology enables the surgeon to visualize the reduction process preoperatively and guide intraoperative reduction, making the reduction less time-consuming and more precise. A 3D-printed titanium mesh tray can provide more satisfactory esthetic outcomes in treating complex comminuted mandibular fractures. PMID:28682875

  14. Continuous statistical modelling for rapid detection of adulteration of extra virgin olive oil using mid infrared and Raman spectroscopic data.

    PubMed

    Georgouli, Konstantia; Martinez Del Rincon, Jesus; Koidis, Anastasios

    2017-02-15

The main objective of this work was to develop a novel dimensionality reduction technique as part of an integrated pattern recognition solution capable of identifying adulterants such as hazelnut oil in extra virgin olive oil at low percentages based on spectroscopic chemical fingerprints. A novel Continuous Locality Preserving Projections (CLPP) technique is proposed, which allows the in-house-produced admixtures to be modelled according to their continuous nature as data series instead of discrete points. Maintaining the continuous structure of the data manifold enables better visualisation of the examined classification problem and facilitates more accurate use of the manifold for detecting the adulterants. The performance of the proposed technique is validated with two different spectroscopic techniques (Raman and Fourier transform infrared, FT-IR). In all cases studied, CLPP accompanied by the k-Nearest Neighbors (kNN) algorithm was found to outperform the other state-of-the-art pattern recognition techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Use of machine-learning classifiers to predict requests for preoperative acute pain service consultation.

    PubMed

    Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra

    2012-10-01

Objective: The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation. Design: Retrospective cohort. Setting: University teaching hospital. Patients: The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed. Outcome Measures: Request for APS consultation. Methods: A cohort of machine-learning classifiers was compared according to its ability or inability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit processing times required for model training. Classifiers were tested using the full feature set, as well as the reduced feature set that was optimized using a merit-based dimensional reduction strategy. Results: Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence intervals [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the receiver operating curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance. Conclusions: Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance. Wiley Periodicals, Inc.
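The study used WEKA classifiers (e.g. NaiveBayesUpdateable) and a merit-based feature reduction; a rough scikit-learn analogue, with a mutual-information filter standing in for the merit-based strategy and synthetic data in place of the surgical-case features, might look like:

```python
# Hedged sketch: filter-style feature reduction followed by a naive Bayes
# classifier, as a stand-in for the paper's merit-based reduction + WEKA models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=40, n_informative=5,
                           random_state=0)   # placeholder surgical-case data

X_red = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)  # 40 -> 10 features
auc = cross_val_score(GaussianNB(), X_red, y, cv=5, scoring="roc_auc")
print(round(auc.mean(), 3))
```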

  16. A process for quantifying aesthetic and functional breast surgery: I. Quantifying optimal nipple position and vertical and horizontal skin excess for mastopexy and breast reduction.

    PubMed

    Tebbetts, John B

    2013-07-01

    This article defines a comprehensive process using quantified parameters for objective decision making, operative planning, technique selection, and outcomes analysis in mastopexy and breast reduction, and defines quantified parameters for nipple position and vertical and horizontal skin excess. Future submissions will detail application of the processes for skin envelope design and address composite, three-dimensional parenchyma modification options. Breast base width was used to define a proportional, desired nipple-to-inframammary fold distance for optimal aesthetics. Vertical and horizontal skin excess were measured, documented, and used for technique selection and skin envelope design in mastopexy and breast reduction. This method was applied in 124 consecutive mastopexy and 122 consecutive breast reduction cases. Average follow-up was 4.6 years (range, 6 to 14 years). No changes were made to the basic algorithm of the defined process during the study period. No patient required nipple repositioning. Complications included excessive lower pole restretch (4 percent), periareolar scar hypertrophy (0.8 percent), hematoma (1.2 percent), and areola shape irregularities (1.6 percent). Delayed healing at the junction of vertical and horizontal scars occurred in two of 124 reduction patients (1.6 percent), neither of whom required revision. The overall reoperation rate was 6.5 percent (16 of 246). This study defines the first steps of a comprehensive process for using objectively defined parameters that surgeons can apply to skin envelope design for mastopexy and breast reduction. The method can be used in conjunction with, or in lieu of, other described methods to determine nipple position.

  17. A review on the multivariate statistical methods for dimensional reduction studies

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei

    2017-05-01

In this study we review multivariate statistical methods for dimensionality reduction that have been developed by various researchers. Dimensionality reduction is valuable for accelerating algorithm training and may also improve the final classification/clustering accuracy. Noisy or even erroneous input data often lead to poor algorithm performance. Removing uninformative or misleading data components can help an algorithm find more general cluster regions and rules, and thereby achieve better overall performance on new data sets.

  18. Tensor Train Neighborhood Preserving Embedding

    NASA Astrophysics Data System (ADS)

    Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin

    2018-05-01

In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate the novel trade-off among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off in classification, computation, and dimensionality reduction on the MNIST handwritten digit and Weizmann face datasets.

  19. Kaluza-Klein cosmology from five-dimensional Lovelock-Cartan theory

    NASA Astrophysics Data System (ADS)

    Castillo-Felisola, Oscar; Corral, Cristóbal; del Pino, Simón; Ramírez, Francisca

    2016-12-01

    We study the Kaluza-Klein dimensional reduction of the Lovelock-Cartan theory in five-dimensional spacetime, with a compact dimension of S1 topology. We find cosmological solutions of the Friedmann-Robertson-Walker class in the reduced spacetime. The torsion and the fields arising from the dimensional reduction induce a nonvanishing energy-momentum tensor in four dimensions. We find solutions describing expanding, contracting, and bouncing universes. The model shows a dynamical compactification of the extra dimension in some regions of the parameter space.

  20. Green Synthesis of Three-Dimensional Hybrid N-Doped ORR Electro-Catalysts Derived from Apricot Sap

    PubMed Central

    Karunagaran, Ramesh; Coghlan, Campbell; Gulati, Karan; Tung, Tran Thanh; Doonan, Christian

    2018-01-01

    Rapid depletion of fossil fuel and increased energy demand has initiated a need for an alternative energy source to cater for the growing energy demand. Fuel cells are an enabling technology for the conversion of sustainable energy carriers (e.g., renewable hydrogen or bio-gas) into electrical power and heat. However, the hazardous raw materials and complicated experimental procedures used to produce electro-catalysts for the oxygen reduction reaction (ORR) in fuel cells has been a concern for the effective implementation of these catalysts. Therefore, environmentally friendly and low-cost oxygen reduction electro-catalysts synthesised from natural products are considered as an attractive alternative to currently used synthetic materials involving hazardous chemicals and waste. Herein, we describe a unique integrated oxygen reduction three-dimensional composite catalyst containing both nitrogen-doped carbon fibers (N-CF) and carbon microspheres (N-CMS) synthesised from apricot sap from an apricot tree. The synthesis was carried out via three-step process, including apricot sap resin preparation, hydrothermal treatment, and pyrolysis with a nitrogen precursor. The nitrogen-doped electro-catalysts synthesised were characterised by SEM, TEM, XRD, Raman, and BET techniques followed by electro-chemical testing for ORR catalysis activity. The obtained catalyst material shows high catalytic activity for ORR in the basic medium by facilitating the reaction via a four-electron transfer mechanism. PMID:29382103

  1. Green Synthesis of Three-Dimensional Hybrid N-Doped ORR Electro-Catalysts Derived from Apricot Sap.

    PubMed

    Karunagaran, Ramesh; Coghlan, Campbell; Shearer, Cameron; Tran, Diana; Gulati, Karan; Tung, Tran Thanh; Doonan, Christian; Losic, Dusan

    2018-01-28

    Rapid depletion of fossil fuel and increased energy demand has initiated a need for an alternative energy source to cater for the growing energy demand. Fuel cells are an enabling technology for the conversion of sustainable energy carriers (e.g., renewable hydrogen or bio-gas) into electrical power and heat. However, the hazardous raw materials and complicated experimental procedures used to produce electro-catalysts for the oxygen reduction reaction (ORR) in fuel cells has been a concern for the effective implementation of these catalysts. Therefore, environmentally friendly and low-cost oxygen reduction electro-catalysts synthesised from natural products are considered as an attractive alternative to currently used synthetic materials involving hazardous chemicals and waste. Herein, we describe a unique integrated oxygen reduction three-dimensional composite catalyst containing both nitrogen-doped carbon fibers (N-CF) and carbon microspheres (N-CMS) synthesised from apricot sap from an apricot tree. The synthesis was carried out via three-step process, including apricot sap resin preparation, hydrothermal treatment, and pyrolysis with a nitrogen precursor. The nitrogen-doped electro-catalysts synthesised were characterised by SEM, TEM, XRD, Raman, and BET techniques followed by electro-chemical testing for ORR catalysis activity. The obtained catalyst material shows high catalytic activity for ORR in the basic medium by facilitating the reaction via a four-electron transfer mechanism.

  2. An Ultrasonographic Periodontal Probe

    NASA Astrophysics Data System (ADS)

    Bertoncini, C. A.; Hinders, M. K.

    2010-02-01

    Periodontal disease, commonly known as gum disease, affects millions of people. The current method of detecting periodontal pocket depth is painful, invasive, and inaccurate. As an alternative to manual probing, an ultrasonographic periodontal probe is being developed to use ultrasound echo waveforms to measure periodontal pocket depth, which is the main measure of periodontal disease. Wavelet transforms and pattern classification techniques are implemented in artificial intelligence routines that can automatically detect pocket depth. The main pattern classification technique used here, called a binary classification algorithm, compares test objects with only two possible pocket depth measurements at a time and relies on dimensionality reduction for the final determination. This method correctly identifies up to 90% of the ultrasonographic probe measurements within the manual probe's tolerance.

  3. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to classify between trained and sedentary rabbit hearts. The use of those classifiers in combination with a wrapper feature selection algorithm allows knowledge to be extracted about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate, and activation complexity. © 2013 Published by Elsevier Ltd.
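The wrapper selection plus linear-versus-neural comparison described above can be sketched as below; the synthetic features are a placeholder for the electrophysiological measurements, and all model settings are illustrative assumptions:

```python
# Hedged sketch: wrapper-style feature selection, then a linear vs a neural
# classifier, mirroring the comparison in the record; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=15, n_informative=4,
                           random_state=0)   # placeholder heart features

# wrapper selection: greedily keep the 4 features that most help the model
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=4, cv=3)
X_sel = selector.fit_transform(X, y)

for name, clf in [("linear", LogisticRegression(max_iter=1000)),
                  ("neural", MLPClassifier(hidden_layer_sizes=(10,),
                                           max_iter=2000, random_state=0))]:
    print(name, round(cross_val_score(clf, X_sel, y, cv=3).mean(), 3))
```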

  4. Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches

    NASA Astrophysics Data System (ADS)

    H, Vathsala; Koolagudi, Shashidhar G.

    2017-10-01

This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed itemset generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. An application to classifying rainfall into flood, excess, normal, deficit, and drought categories based on 36 predictors consisting of land and ocean variables is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).
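Steps two and three of the algorithm (cluster-membership dimensionality reduction, then a simple logistic model) can be sketched as follows, with synthetic predictors in place of the 36 land/ocean variables:

```python
# Hedged sketch: reduce 36 predictors to distances-to-cluster-centres, then
# fit a simple logistic model; data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=36, n_informative=6,
                           random_state=0)

# cluster-membership reduction: 36 raw features -> 5 cluster distances
X_red = KMeans(n_clusters=5, n_init=10, random_state=0).fit_transform(X)
acc = cross_val_score(LogisticRegression(max_iter=1000), X_red, y, cv=5).mean()
print(round(acc, 3))
```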

  5. The 1974 NASA-ASEE summer faculty fellowship aeronautics and space research program

    NASA Technical Reports Server (NTRS)

    Obrien, J. F., Jr.; Jones, C. O.; Barfield, B. F.

    1974-01-01

Research activities by participants in the fellowship program are documented, and include such topics as: (1) multispectral imagery for detecting southern pine beetle infestations; (2) trajectory optimization techniques for low thrust vehicles; (3) concentration characteristics of a Fresnel solar strip reflection concentrator; (4) calibration and reduction of video camera data; (5) fracture mechanics of Cer-Vit glass-ceramic; (6) space shuttle external propellant tank prelaunch heat transfer; (7) holographic interferometric fringes; and (8) atmospheric wind and stress profiles in a two-dimensional internal boundary layer.

  6. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of the proposed algorithm.

  7. DIS off glueballs from string theory: the role of the chiral anomaly and the Chern-Simons term

    NASA Astrophysics Data System (ADS)

    Kovensky, Nicolas; Michalski, Gustavo; Schvellinger, Martin

    2018-04-01

    We calculate the structure function F₃(x, q²) of the hadronic tensor of deep inelastic scattering (DIS) of charged leptons from glueballs of N = 4 SYM theory at strong coupling and at small values of the Bjorken parameter in the gauge/string duality framework. This is done in terms of type IIB superstring theory scattering amplitudes. From the AdS₅ perspective, the relevant part of the scattering amplitude comes from the five-dimensional non-Abelian Chern-Simons terms in the SU(4) gauged supergravity obtained from dimensional reduction on S⁵. From type IIB superstring theory we derive an effective Lagrangian describing the four-point interaction in the local approximation. The exponentially small regime of the Bjorken parameter is investigated using Pomeron techniques.

  8. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    NASA Astrophysics Data System (ADS)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domains. Since the distribution is high-dimensional, a supervised dimensionality reduction technique, bilateral 2D linear discriminant analysis, is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameters of the VVRKFA classifier. The experimental results for analog circuit fault classification demonstrate that the proposed diagnosis scheme has an advantage over other approaches.

  9. User's manual for three-dimensional analysis of propeller flow fields

    NASA Technical Reports Server (NTRS)

    Chaussee, D. S.; Kutler, P.

    1983-01-01

    A detailed operating manual is presented for the prop-fan computer code (in addition to supporting programs) recently developed by Kutler, Chaussee, Sorenson, and Pulliam while at NASA's Ames Research Center. This code solves the inviscid Euler equations using an implicit numerical procedure developed by Beam and Warming of Ames. A description of the underlying theory, numerical techniques, and boundary conditions, with equations, formulas, and methods for the mesh generation program (MGP), three-dimensional prop-fan flow field program (3DPFP), and data reduction program (DRP), is provided, together with complete operating instructions. In addition, a programmer's manual is also provided to assist the user interested in modifying the codes. Included in the programmer's manual for each program is a description of the input and output variables, flow charts, program listings, sample input and output data, and operating hints.

  10. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data

    PubMed Central

    2012-01-01

    Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However, several popular DR approaches suffer from sensitivity to the choice of parameters and/or the presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered.
Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
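The core consensus idea, generating several weak embeddings and fusing their pairwise-distance information into one stable embedding, can be sketched in a simplified form. Here the base embeddings are PCA on random feature subsets and the fusion step is classical MDS on the averaged distance matrix; the paper's actual scheme additionally uses mean-shift subsampling and supports non-linear DR methods, so treat this as an assumption-laden illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_embed(X, n_dims):
    """Project centred data onto its top principal components."""
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_dims].T

def classical_mds(D, n_dims):
    """Recover coordinates whose Euclidean distances approximate D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_dims]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

def consensus_embedding(X, n_dims=2, n_base=20, frac=0.5):
    """Average the pairwise-distance matrices of many weak embeddings
    (PCA on random feature subsets), then embed the consensus distances."""
    n, p = X.shape
    D = np.zeros((n, n))
    for _ in range(n_base):
        cols = rng.choice(p, max(2, int(frac * p)), replace=False)
        E = pca_embed(X[:, cols], n_dims)
        diff = E[:, None, :] - E[None, :, :]
        D += np.sqrt((diff ** 2).sum(-1))
    return classical_mds(D / n_base, n_dims)

# Two noisy high-dimensional classes; the consensus embedding keeps them apart.
X = np.vstack([rng.normal(0, 1, (60, 50)), rng.normal(1.5, 1, (60, 50))])
Y = consensus_embedding(X)
```

Averaging distance matrices rather than coordinates sidesteps the fact that individual embeddings are only defined up to rotation and sign.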

  11. Marginal Fisher analysis and its variants for human gait recognition and content-based image retrieval.

    PubMed

    Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang

    2007-11-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
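At its core, MFA builds a within-class nearest-neighbour (intrinsic) graph and a between-class marginal (penalty) graph, then solves a generalized eigenproblem between the two graph Laplacians. A minimal numpy sketch on synthetic data, covering only the basic linear MFA (not the matrix-based or marginal biased variants above, and with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_graph(X, y, k, same_class):
    """Adjacency matrix connecting each sample to its k nearest
    neighbours from the same class (intrinsic graph) or from other
    classes (penalty graph)."""
    n = len(X)
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        mask = (y == y[i]) if same_class else (y != y[i])
        cand = np.where(mask)[0]
        near = cand[np.argsort(d[i, cand])[:k]]
        W[i, near] = W[near, i] = 1.0
    return W

def mfa(X, y, n_dims=2, k1=5, k2=10):
    """Marginal Fisher Analysis as a generalized eigenproblem:
    minimise the within-class neighbourhood scatter against the
    between-class marginal scatter."""
    Ww = knn_graph(X, y, k1, same_class=True)
    Wb = knn_graph(X, y, k2, same_class=False)
    Lw = np.diag(Ww.sum(1)) - Ww          # intrinsic Laplacian
    Lb = np.diag(Wb.sum(1)) - Wb          # penalty Laplacian
    Sw = X.T @ Lw @ X
    Sb = X.T @ Lb @ X
    # Regularise Sb slightly so the solve is well posed, then keep the
    # eigenvectors with the smallest ratio of Sw to Sb.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sb + 1e-6 * np.eye(len(Sb)), Sw))
    order = np.argsort(vals.real)
    return np.real(vecs[:, order[:n_dims]])

X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(2, 1, (40, 10))])
y = np.r_[np.zeros(40), np.ones(40)]
P = mfa(X, y)        # 10 -> 2 projection matrix
Z = X @ P            # embedded samples
```

Unlike classical LDA, the graphs impose no Gaussian assumption on each class: only local neighbourhood compactness and marginal separability are encoded.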

  12. High-speed three-dimensional measurements with a fringe projection-based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther

    2014-11-01

    An optical three-dimensional (3-D) sensor based on a fringe projection technique was developed for highly resolved, ultrafast acquisition of the surface geometry of small objects. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved by systematic fringe code reduction and parallel data processing. The length of the fringe image sequence was reduced by omitting the Gray code sequence, exploiting the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm×20 mm and 40 mm×40 mm, with a spatial resolution between 10 and 20 μm, respectively. In order to obtain a robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little effort and time. The sensor may be used, e.g., for real-time industrial quality inspection of conductor boards or plugs.

  13. Machine learning techniques applied to the determination of road suitability for the transportation of dangerous substances.

    PubMed

    Matías, J M; Taboada, J; Ordóñez, C; Nieto, P G

    2007-08-17

    This article describes a methodology to model the degree of remedial action required to make short stretches of a roadway suitable for dangerous goods transport (DGT), particularly of pollutant substances, using different variables associated with the characteristics of each segment. Thirty-one factors determining the impact of an accident on a particular stretch of road were identified and subdivided into two major groups: accident probability factors and accident severity factors. Given the number of factors determining the state of a particular road segment, the only viable statistical methods for implementing the model were machine learning techniques, such as multilayer perceptron networks (MLPs), classification trees (CARTs) and support vector machines (SVMs). The results produced by these techniques on a test sample were more favourable than those produced by traditional discriminant analysis, irrespective of whether dimensionality reduction techniques were applied. The best results were obtained using SVMs specifically adapted to ordinal data. This technique takes advantage of the ordinal information contained in the data without incurring a computational penalty. Furthermore, the technique permits the estimation of the utility function that is latent in expert knowledge.

  14. A tomographic technique for aerodynamics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Lee, G.

    1985-01-01

    Computer-aided tomography (CAT) provides a means of noninvasively measuring the air density distribution around an aerodynamic model. This technique is global in that a large portion of the flow field can be measured. The applicability of CAT at transonic speeds was studied. A hemispherical-nose cylinder afterbody model was tested at a Mach number of 0.8 with a new laser holographic interferometer at the 2- by 2-Foot Transonic Wind Tunnel. Holograms of the flow field were taken and reconstructed into interferograms. The fringe distribution (a measure of the local densities) was digitized for subsequent data reduction. A computer program based on the Fourier-transform technique was developed to convert the fringe distribution into three-dimensional densities around the model. Theoretical aerodynamic densities were calculated for evaluating and assessing the accuracy of the data obtained from the tomographic method.

  15. Chaotic oscillator containing memcapacitor and meminductor and its dimensionality reduction analysis.

    PubMed

    Yuan, Fang; Wang, Guangyi; Wang, Xiaowei

    2017-03-01

    In this paper, smooth curve models of a meminductor and a memcapacitor are designed, generalized from a memristor. Based on these models, a new five-dimensional chaotic oscillator that contains a meminductor and memcapacitor is proposed. Through dimensionality reduction, this five-dimensional system can be transformed into a three-dimensional system. The main work of this paper is to compare the five-dimensional system with its dimensionality-reduced model. To investigate the dynamic behaviors of the two systems, equilibrium points and stabilities are analyzed, and bifurcation diagrams and Lyapunov exponent spectra are used to explore their properties. In addition, digital signal processing technologies are used to realize this chaotic oscillator, and the chaotic sequences generated by the experimental device can be used in encryption applications.

  16. GIXSGUI: a MATLAB toolbox for grazing-incidence X-ray scattering data visualization and reduction, and indexing of buried three-dimensional periodic nanostructured films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang

    GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script-based access to visualize and process grazing-incidence X-ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one-dimensional intensity linecuts, two-dimensional intensity reshaping, etc. Three-dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.

  17. Inspiratory and expiratory computed tomographic volumetry for lung volume reduction surgery.

    PubMed

    Morimura, Yuki; Chen, Fengshi; Sonobe, Makoto; Date, Hiroshi

    2013-06-01

    Three-dimensional (3D) computed tomographic (CT) volumetry has been introduced into the field of thoracic surgery, and a combination of inspiratory and expiratory 3D-CT volumetry provides useful data on regional pulmonary function as well as the volume of individual lung lobes. We report herein a case of a 62-year-old man with severe emphysema who had undergone lung volume reduction surgery (LVRS) to assess this technique as a tool for the evaluation of regional lung function and volume before and after LVRS. His postoperative pulmonary function was maintained in good condition despite a gradual slight decrease 2 years after LVRS. This trend was also confirmed by a combination of inspiratory and expiratory 3D-CT volumetry. We confirm that a combination of inspiratory and expiratory 3D-CT volumetry might be effective for the preoperative assessment of LVRS in order to determine the amount of lung tissue to be resected as well as for postoperative evaluation. This novel technique could, therefore, be used more widely to assess local lung function.

  18. Inspiratory and expiratory computed tomographic volumetry for lung volume reduction surgery

    PubMed Central

    Morimura, Yuki; Chen, Fengshi; Sonobe, Makoto; Date, Hiroshi

    2013-01-01

    Three-dimensional (3D) computed tomographic (CT) volumetry has been introduced into the field of thoracic surgery, and a combination of inspiratory and expiratory 3D-CT volumetry provides useful data on regional pulmonary function as well as the volume of individual lung lobes. We report herein a case of a 62-year-old man with severe emphysema who had undergone lung volume reduction surgery (LVRS) to assess this technique as a tool for the evaluation of regional lung function and volume before and after LVRS. His postoperative pulmonary function was maintained in good condition despite a gradual slight decrease 2 years after LVRS. This trend was also confirmed by a combination of inspiratory and expiratory 3D-CT volumetry. We confirm that a combination of inspiratory and expiratory 3D-CT volumetry might be effective for the preoperative assessment of LVRS in order to determine the amount of lung tissue to be resected as well as for postoperative evaluation. This novel technique could, therefore, be used more widely to assess local lung function. PMID:23460599

  19. Direct writing of bio-functional coatings for cardiovascular applications.

    PubMed

    Perkins, Jessica; Hong, Yi; Ye, Sang-Ho; Wagner, William R; Desai, Salil

    2014-12-01

    The surface modification of metallic biomaterials is of critical importance to enhance the biocompatibility of surgical implant materials and devices. This article investigates the use of a direct-write inkjet technique for multilayer coatings of a biodegradable polymer (polyester urethane urea (PEUU)) embedded with an anti-proliferation drug paclitaxel (Taxol). The direct-write inkjet technique provides selective patterning capability for depositing multimaterial coatings on three-dimensional implant devices such as pins, screws, and stents for orthopedic and vascular applications. Drug release profiles were studied to observe the influence of drug loading and coating thickness for obtaining tunable release kinetics. Platelet deposition studies were conducted following ovine blood contact and significant reduction in platelet deposition was observed on the Taxol loaded PEUU substrate compared with the unloaded control. Rat smooth muscle cells were used for cell proliferation studies. Significant reduction in cell growth was observed following the release of anti-proliferative drug from the biopolymer thin film. This research provides a basis for developing anti-proliferative biocompatible coatings for different biomedical device applications. © 2014 Wiley Periodicals, Inc.

  20. Interatrial septum pacing guided by three-dimensional intracardiac echocardiography.

    PubMed

    Szili-Torok, Tamas; Kimman, Geert Jan P; Scholten, Marcoen F; Ligthart, Jurgen; Bruining, Nico; Theuns, Dominic A M J; Klootwijk, Peter J; Roelandt, Jos R T C; Jordaens, Luc J

    2002-12-18

    Currently, the interatrial septum (IAS) pacing site is indirectly selected by fluoroscopy and P-wave analysis. The aim of the present study was to develop a novel approach for IAS pacing using intracardiac echocardiography (ICE). Interatrial septum pacing may be beneficial for the prevention of paroxysmal atrial fibrillation. Cross-sectional images are acquired during a pull-back of the ICE transducer from the superior vena cava into the inferior vena cava by an electrocardiogram- and respiration-gated technique. Both atria are then reconstructed using three-dimensional (3D) imaging. Using an "en face" view of the IAS, the desired pacing site is selected. Following lead placement and electrical testing, another 3D reconstruction is performed to verify the final lead position. Twelve patients were included in this study. The IAS pacing was achieved in all patients including six suprafossal (SF) and six infrafossal (IF) lead locations all confirmed by 3D imaging. The mean duration times of atrial lead implantation and fluoroscopy were 70 +/- 48.9 min and 23.7 +/- 20.6 min, respectively. The IAS pacing resulted in a significant reduction of the P-wave duration as compared to sinus rhythm (98.9 +/- 19.3 ms vs. 141.3 +/- 8.6 ms; p < 0.002). The SF pacing showed a greater reduction of the P-wave duration than IF pacing (59.4 +/- 6.6 ms vs. 30.2 +/- 13.6 ms; p < 0.004). Three-dimensional ICE is a feasible tool for guiding IAS pacing.

  1. Statistical Exploration of Electronic Structure of Molecules from Quantum Monte-Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, Mr; Zubarev, Dmitry; Lester, Jr., William A.

    In this report, we present results from analysis of Quantum Monte Carlo (QMC) simulation data with the goal of determining the internal structure of the 3N-dimensional phase space of an N-electron molecule. We are interested in mining the simulation data for patterns that might be indicative of bond rearrangement as molecules change electronic states. We examined simulation output that tracks the positions of two coupled electrons in the singlet and triplet states of an H2 molecule. The electrons trace out a trajectory, which was analyzed with a number of statistical techniques. This project was intended to address the following scientific questions: (1) Do high-dimensional phase spaces characterizing the electronic structure of molecules tend to cluster in any natural way? Do we see a change in clustering patterns as we explore different electronic states of the same molecule? (2) Since it is hard to understand the high-dimensional space of trajectories, can we project these trajectories to a lower dimensional subspace to gain a better understanding of patterns? (3) Do trajectories inherently lie in a lower-dimensional manifold? Can we recover that manifold? After extensive statistical analysis, we are now in a better position to respond to these questions. (1) We definitely see clustering patterns, and differences between the H2 and H2tri datasets. These are revealed by the pamk method in a fairly reliable manner and can potentially be used to distinguish bonded and non-bonded systems and gain insight into the nature of bonding. (2) Projecting to a lower dimensional subspace (≈4-5 dimensions) using PCA or Kernel PCA reveals interesting patterns in the distribution of scalar values, which can be related to the existing descriptors of electronic structure of molecules.
Also, these results can be immediately used to develop robust tools for the analysis of noisy data obtained during QMC simulations. (3) All dimensionality reduction and estimation techniques that we tried seem to indicate that one needs 4 or 5 components to account for most of the variance in the data, hence this 5D dataset does not necessarily lie on a well-defined, low-dimensional manifold. In terms of specific clustering techniques, K-means was generally useful in exploring the dataset. The partition around medoids (pam) technique produced the most definitive results for our data, showing distinctive patterns for both a sample of the complete data and the time series. The gap statistic with the Tibshirani criterion did not provide any distinction across the two datasets. The gap statistic with the DandF criterion, model-based clustering and hierarchical modeling simply failed to run on our datasets. Thankfully, the vanilla PCA technique was successful in handling our entire dataset. PCA revealed some interesting patterns in the scalar value distribution. Kernel PCA techniques (vanilladot, RBF, polynomial) and MDS failed to run on the entire dataset, or even a significant fraction of the dataset, and we resorted to creating an explicit feature map followed by conventional PCA. Clustering using K-means and PAM in the new basis set seems to produce promising results. Understanding the new basis set in the scientific context of the problem is challenging, and we are currently working to further examine and interpret the results.
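The variance-accounting argument above, counting how many principal components are needed before a dataset stops looking low-dimensional, can be sketched as follows. The data here are synthetic (a few strong latent directions plus noise, standing in for the electron-position trajectories), so the exact component count is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def n_components_for(X, frac=0.9):
    """Number of principal components needed to account for a given
    fraction of the total variance, via the singular value spectrum."""
    Xc = X - X.mean(0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2 / (s ** 2).sum()
    return int(np.searchsorted(np.cumsum(var), frac) + 1)

# Toy trajectory data: 5 strong latent directions mixed into 6 observed
# coordinates, plus a little noise.
latent = rng.normal(0, 1, (1000, 5)) * np.array([5, 4, 3, 2, 1.5])
mix = rng.normal(0, 1, (5, 6))
X = latent @ mix + rng.normal(0, 0.05, (1000, 6))
k = n_components_for(X, 0.9)
```

If `k` comes out close to the ambient dimension, the variance is spread across all directions and no low-dimensional manifold is evident, which mirrors the report's conclusion for its 5D dataset.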

  2. Lunar Occultations, Setting the Stage for VLTI: The Case Study of CW-Leo (aka IRC+10216)

    NASA Astrophysics Data System (ADS)

    Käufl, Hans Ulrich; Stecklum, Bringfried; Richter, Steffen; Richichi, Andrea

    Lunar occultation allows for a sneak preview of what the VLTI will observe, with both comparable angular resolution and sensitivity. In the thermal infrared (λ ≈ 10 μm, angular resolution ≤ 0.03″) the technique has been pioneered with TIMMI on La Silla. Using this technique, several dust shells around Asymptotic Giant Branch stars have been resolved. For the carbon star CW-Leo (IRC+10216), high S/N scans will allow for `1½-dimensional' imaging of the source. At the present state of data reduction, the light curves already provide a very convincing proof of theories on the milli-arcsec scale. In combination with VLTI, the technique allows for checks of the visibility calibration and related issues. Moreover, in the (u,v)-plane the two techniques are extremely complementary, so that a merging of the data sets appears highly desirable. At La Silla and Paranal, ESO has under construction a suite of instruments which can be (ab)used for this project.

  3. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and in the substantial dimensionality reduction of highly multivariate data. This is vital for interpreting physicochemical data, given the rapid development of advanced analytical techniques that deliver much information in a single measurement run. This especially concerns spectra, which are frequently the subject of comparative analysis in, e.g., the forensic sciences. In the present study, microtraces collected from the scenes of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the interpretation schemes acknowledged in the forensic sciences, using the likelihood ratio (LR) approach. However, for the proper construction of LR models for highly multivariate data such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed to reveal hidden data peculiarities, and linear discriminant analysis was further applied to minimise the within-sample variability while maximising the between-sample variability. Both techniques enabled a substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem for highly multivariate and correlated data after proper extraction of the most relevant features and the variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
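The two compression steps named above, converting spectra to a distance representation and then applying a discriminant projection, can be sketched in a simplified two-class form. The "spectra" here are synthetic, the reference set is chosen arbitrarily, and the final likelihood ratio modelling of the study is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

def distance_representation(X, refs):
    """Replace each high-dimensional spectrum by its Euclidean
    distances to a small set of reference spectra."""
    return np.sqrt(((X[:, None, :] - refs[None, :, :]) ** 2).sum(-1))

def fisher_lda_direction(Z, y):
    """Two-class Fisher discriminant: maximise between-class
    variability relative to within-class variability."""
    m0, m1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
    w = np.linalg.solve(Sw + 1e-8 * np.eye(len(Sw)), m1 - m0)
    return w / np.linalg.norm(w)

# Toy "spectra": 200 channels, two sources differing in one band.
A = rng.normal(0, 1, (50, 200)); A[:, 40:60] += 2.0
B = rng.normal(0, 1, (50, 200))
X = np.vstack([A, B]); y = np.r_[np.zeros(50), np.ones(50)]
refs = X[rng.choice(len(X), 8, replace=False)]   # reference spectra
Z = distance_representation(X, refs)             # 200 -> 8 dimensions
w = fisher_lda_direction(Z, y)
scores = Z @ w                                   # 1D discriminant scores
```

The distance representation does the heavy compression (200 channels down to 8 features), after which a univariate or low-dimensional LR model becomes tractable.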

  4. Dimensional changes of acrylic resin denture bases: conventional versus injection-molding technique.

    PubMed

    Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam

    2014-07-01

    Acrylic resin denture bases undergo dimensional changes during polymerization. Injection molding techniques are reported to reduce these changes and thereby improve physical properties of denture bases. The aim of this study was to compare dimensional changes of specimens processed by conventional and injection-molding techniques. SR-Ivocap Triplex Hot resin was used for conventional pressure-packed and SR-Ivocap High Impact was used for injection-molding techniques. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. After each water storage period, the acrylic specimens produced by injection exhibited less dimensional changes compared to those produced by the conventional technique. Curing shrinkage was compensated by water sorption with an increase in water storage time decreasing dimensional changes. Within the limitations of this study, dimensional changes of acrylic resin specimens were influenced by the molding technique used and SR-Ivocap injection procedure exhibited higher dimensional accuracy compared to conventional molding.

  5. Dimensional Changes of Acrylic Resin Denture Bases: Conventional Versus Injection-Molding Technique

    PubMed Central

    Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam

    2014-01-01

    Objective: Acrylic resin denture bases undergo dimensional changes during polymerization. Injection molding techniques are reported to reduce these changes and thereby improve physical properties of denture bases. The aim of this study was to compare dimensional changes of specimens processed by conventional and injection-molding techniques. Materials and Methods: SR-Ivocap Triplex Hot resin was used for conventional pressure-packed and SR-Ivocap High Impact was used for injection-molding techniques. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. Results: After each water storage period, the acrylic specimens produced by injection exhibited less dimensional changes compared to those produced by the conventional technique. Curing shrinkage was compensated by water sorption with an increase in water storage time decreasing dimensional changes. Conclusion: Within the limitations of this study, dimensional changes of acrylic resin specimens were influenced by the molding technique used and SR-Ivocap injection procedure exhibited higher dimensional accuracy compared to conventional molding. PMID:25584050

  6. Multivariate Strategies in Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hansen, Lars Kai

    2007-01-01

    We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.

  7. Two component-three dimensional catalysis

    DOEpatents

    Schwartz, Michael; White, James H.; Sammells, Anthony F.

    2002-01-01

    This invention relates to catalytic reactor membranes having a gas-impermeable membrane for transport of oxygen anions. The membrane has an oxidation surface and a reduction surface. The membrane is coated on its oxidation surface with an adherent catalyst layer and is optionally coated on its reduction surface with a catalyst that promotes reduction of an oxygen-containing species (e.g., O₂, NO₂, SO₂, etc.) to generate oxygen anions on the membrane. The reactor has an oxidation zone and a reduction zone separated by the membrane. A component of an oxygen-containing gas in the reduction zone is reduced at the membrane and a reduced species in a reactant gas in the oxidation zone of the reactor is oxidized. The reactor optionally contains a three-dimensional catalyst in the oxidation zone. The adherent catalyst layer and the three-dimensional catalyst are selected to promote a desired oxidation reaction, particularly a partial oxidation of a hydrocarbon.

  8. Computer-assisted techniques to evaluate fringe patterns

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1992-01-01

    Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.

  9. Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian; Haller, George

    2018-06-01

    We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction in the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.

  10. Indirect three-dimensional printing of synthetic polymer scaffold based on thermal molding process.

    PubMed

    Park, Jeong Hun; Jung, Jin Woo; Kang, Hyun-Wook; Cho, Dong-Woo

    2014-06-01

    One of the major issues in tissue engineering has been the development of three-dimensional (3D) scaffolds, which serve as a structural template for cell growth and extracellular matrix formation. In scaffold-based tissue engineering, 3D printing (3DP) technology has been successfully applied for the fabrication of complex 3D scaffolds by using both direct and indirect techniques. In principle, direct 3DP techniques rely on the straightforward utilization of the final scaffold materials during the actual scaffold fabrication process. In contrast, indirect 3DP techniques use a negative mold based on a scaffold design, to which the desired biomaterial is cast and then sacrificed to obtain the final scaffold. Such indirect 3DP techniques generally impose a solvent-based process for scaffold fabrication, resulting in a considerable increase in the fabrication time and poor mechanical properties. In addition, the internal architecture of the resulting scaffold is affected by the properties of the biomaterial solution. In this study, we propose an advanced indirect 3DP technique using projection-based micro-stereolithography and an injection molding system (IMS) in order to address these challenges. The scaffold was fabricated by a thermal molding process using IMS to overcome the limitation of the solvent-based molding process in indirect 3DP techniques. The results indicate that the thermal molding process using an IMS has achieved a substantial reduction in scaffold fabrication time and has also provided the scaffold with higher mechanical modulus and strength. In addition, cell adhesion and proliferation studies have indicated no significant difference in cell activity between the scaffolds prepared by solvent-based and thermal molding processes.

  11. Surface Heave Behaviour of Coir Geotextile Reinforced Sand Beds

    NASA Astrophysics Data System (ADS)

    Lal, Dharmesh; Sankar, N.; Chandrakaran, S.

    2017-06-01

    Soil reinforcement with natural fibers is one of the cheapest and most attractive ground improvement techniques. Coir is the most abundant natural fiber available in India and, due to its high lignin content, has a longer life span than other natural fibers. It is widely used in India for erosion control, but its use as a reinforcement material is rather limited. This study focuses on the use of coir geotextile as a reinforcement material to reduce the surface heave that occurs around shallow foundations. This paper presents the results of laboratory model tests carried out on square footings supported on coir geotextile reinforced sand beds. The influence of various parameters such as depth of reinforcement, length, and number of layers of reinforcement was studied. It was observed that surface heave is considerably reduced with the provision of geotextile. Heave reduction of up to 98.7% can be obtained by the proposed method. Heave reduction is quantified by a non-dimensional parameter called the heave reduction factor.

  12. Optimizing Performance Parameters of Chemically-Derived Graphene/p-Si Heterojunction Solar Cell.

    PubMed

    Batra, Kamal; Nayak, Sasmita; Behura, Sanjay K; Jani, Omkar

    2015-07-01

    Chemically-derived graphene has been synthesized by the modified Hummers method and reduced using sodium borohydride. To explore its potential for photovoltaic applications, graphene/p-silicon (Si) heterojunction devices were fabricated using a simple and cost-effective technique called spin coating. SEM analysis shows the formation of graphene oxide (GO) flakes which become smooth after reduction. The absence of oxygen-containing functional groups, as observed in FT-IR spectra, reveals the reduction of GO, i.e., reduced graphene oxide (rGO). This was further confirmed by Raman analysis, which shows a slight reduction in G-band intensity with respect to the D-band. Hall effect measurement confirmed the n-type nature of rGO. Therefore, an effort has been made to simulate the rGO/p-Si heterojunction device using one-dimensional solar cell capacitance software, considering the experimentally derived parameters. A detailed analysis of the effects of Si thickness, graphene thickness and temperature on the performance of the device is presented.

  13. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030

    2011-08-20

    Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.

  14. High-resolution non-destructive three-dimensional imaging of integrated circuits

    NASA Astrophysics Data System (ADS)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-01

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  15. A technique for measurement of instantaneous heat transfer in steady-flow ambient-temperature facilities

    NASA Technical Reports Server (NTRS)

    O'Brien, James E.

    1990-01-01

    An experimental technique is described for obtaining time-resolved heat flux measurements with high-frequency response (up to 100 kHz) in a steady-flow ambient-temperature facility. The heat transfer test object is preheated and suddenly injected into an established steady flow. Thin-film gages deposited on the test surface detect the unsteady substrate surface temperature. Analog circuitry designed for use in short-duration facilities and based on one-dimensional semi-infinite heat conduction is used to perform the temperature-to-heat-flux transformation. A detailed description of substrate properties, instrumentation, experimental procedure, and data reduction is given, along with representative results obtained in the stagnation region of a circular cylinder subjected to a wake-dominated unsteady flow. An in-depth discussion of related work is also provided.
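
    The temperature-to-heat-flux transformation for one-dimensional semi-infinite conduction, performed in this work by analog circuitry, is commonly done numerically with the Cook-Felderman discretization, which assumes a piecewise-linear surface-temperature history. A minimal sketch (not the instrument's actual circuit; function and parameter names are illustrative):

```python
import math

def heat_flux_from_temperature(times, temps, rho_c_k):
    """Reconstruct surface heat flux from a surface-temperature history on a
    one-dimensional semi-infinite solid (Cook-Felderman discretization).
    `rho_c_k` is the product rho*c*k of the substrate properties."""
    coeff = 2.0 * math.sqrt(rho_c_k / math.pi)
    q = [0.0]  # no flux can be inferred at the first sample
    for n in range(1, len(times)):
        total = 0.0
        for i in range(1, n + 1):
            # Exact contribution of a linear temperature segment to the
            # convolution integral q(t) ~ int dT/dtau / sqrt(t - tau) dtau.
            denom = math.sqrt(times[n] - times[i]) + math.sqrt(times[n] - times[i - 1])
            total += (temps[i] - temps[i - 1]) / denom
        q.append(coeff * total)
    return q
```

    As a sanity check, feeding in the exact surface-temperature response to a constant unit heat flux (T = 2*sqrt(t/pi) for rho*c*k = 1) recovers a flux close to one.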

  16. A new approach to importance sampling for the simulation of false alarms. [in radar systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1987-01-01

    In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false-alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum-variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell-averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false-alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
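
    The core idea, drawing samples from a proposal distribution concentrated on the rare event and reweighting by the likelihood ratio, can be sketched as follows. This is an illustrative estimator for a Gaussian false-alarm tail, not the paper's cell-averaging scheme; the function name and the mean-shifted proposal are assumptions:

```python
import math
import random

def tail_prob_is(threshold, n_samples, seed=0):
    """Estimate the false-alarm probability P(X > threshold) for X ~ N(0,1)
    by importance sampling: draw from N(threshold, 1), which concentrates
    samples in the rare region, and reweight by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(threshold, 1.0)  # proposal centred on the threshold
        if x > threshold:
            # f(x)/g(x) = exp(-threshold*x + threshold**2 / 2)
            total += math.exp(-threshold * x + threshold * threshold / 2.0)
    return total / n_samples
```

    With a few hundred thousand samples this resolves probabilities around 1E-5 that naive Monte Carlo would need hundreds of millions of runs to estimate reliably.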

  17. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
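
    The variance-maximizing component extraction at the heart of FPCA can be illustrated, in the plain multivariate case, by power iteration on the sample covariance matrix. A minimal sketch only (the basis-function approximation, regularization, and multiple-set coupling of the actual proposal are omitted; function names are illustrative):

```python
import math
import random

def first_principal_component(data, iters=200, seed=0):
    """Leading principal component of `data` (rows are observations),
    found by power iteration on the sample covariance matrix."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    centred = [[row[j] - means[j] for j in range(p)] for row in data]
    # Sample covariance matrix C = X^T X / (n - 1)
    cov = [[sum(centred[i][a] * centred[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(p)]
    for _ in range(iters):
        # Multiply by C and renormalize; converges to the top eigenvector.
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```
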

  18. An Interview with Matthew P. Greving, PhD. Interview by Vicki Glaser.

    PubMed

    Greving, Matthew P

    2011-10-01

    Matthew P. Greving is Chief Scientific Officer at Nextval Inc., a company founded in early 2010 that has developed a discovery platform called MassInsight™. He received his PhD in Biochemistry from Arizona State University, and prior to that he spent nearly 7 years working as a software engineer. This experience in solving complex computational problems fueled his interest in developing technologies and algorithms related to the acquisition and analysis of high-dimensional biochemical data. To address the existing problems associated with label-based microarray readouts, he began work on a technique for label-free mass spectrometry (MS) microarray readout compatible with both matrix-assisted laser desorption/ionization (MALDI) and matrix-free nanostructure initiator mass spectrometry (NIMS). This is the core of Nextval's MassInsight technology, which utilizes picoliter noncontact deposition of high-density arrays on mass-readout substrates along with computational algorithms for high-dimensional data processing and reduction.

  19. Two-dimensional cylindrical ion-acoustic solitary and rogue waves in ultrarelativistic plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ata-ur-Rahman; National Centre for Physics at QAU Campus, Shahdrah Valley Road, Islamabad 44000; Ali, S.

    2013-07-15

    The propagation of ion-acoustic (IA) solitary and rogue waves is investigated in a two-dimensional ultrarelativistic degenerate warm dense plasma. By using the reductive perturbation technique, the cylindrical Kadomtsev–Petviashvili (KP) equation is derived, which can be further transformed into a Korteweg–de Vries (KdV) equation. The latter admits a solitary wave solution. However, when the frequency of the carrier wave is much smaller than the ion plasma frequency, the KdV equation can be transformed into a nonlinear Schrödinger equation to study the nonlinear evolution of modulationally unstable modified IA wavepackets. The propagation characteristics of the IA solitary and rogue waves are strongly influenced by the variation of different plasma parameters in an ultrarelativistic degenerate dense plasma. The present results might be helpful for understanding the nonlinear electrostatic excitations in astrophysical degenerate dense plasmas.

  20. Application of a Laser Interferometer Skin-Friction Meter in Complex Flows

    NASA Technical Reports Server (NTRS)

    Monson, D. J.; Driver, D. M.; Szodruch, J.

    1981-01-01

    A nonintrusive skin-friction meter has been found useful for a variety of complex wind-tunnel flows. This meter measures skin friction with a remotely located laser interferometer that monitors the thickness change of a thin oil film. Its accuracy has been proven in a low-speed flat-plate flow. The wind-tunnel flows described here include subsonic separated and reattached flow over a rearward-facing step, supersonic flow over a flat plate at high Reynolds numbers, and supersonic three-dimensional vortical flow over the lee of a delta wing at angle of attack. The data-reduction analysis was extended to apply to three-dimensional flows with unknown flow direction, large pressure and shear gradients, and large oil viscosity changes with time. The skin-friction measurements were verified, where possible, against results from more conventional techniques and from theoretical computations.

  1. Low-rank factorization of electron integral tensors and its application in electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    In this letter, we introduce the reverse Cuthill-McKee (RCM) algorithm, which is often used for the bandwidth reduction of sparse tensors, to transform the two-electron integral tensors to their block diagonal forms. By further applying the pivoted Cholesky decomposition (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral tensors in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates the low-rank factorization of the high-dimensional tensor contractions that are usually encountered in post-Hartree-Fock calculations. In this letter, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as two simple examples to demonstrate the efficiency of the RCM-CD technique in representing two-electron integrals in a compact form.
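
    The bandwidth-reduction step can be illustrated with a small, self-contained sketch of reverse Cuthill-McKee on a graph adjacency structure (the pivoted Cholesky stage and the tensor-specific block handling are beyond this sketch; function names are illustrative):

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as an
    adjacency map {node: set(neighbours)}. Returns a node permutation
    intended to reduce the bandwidth of the corresponding matrix."""
    visited, order = set(), []
    # Start breadth-first searches from low-degree nodes.
    for start in sorted(adj, key=lambda n: len(adj[n])):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            # Visit neighbours in order of increasing degree.
            for v in sorted(adj[u], key=lambda n: len(adj[n])):
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
    return order[::-1]  # reversing the Cuthill-McKee order

def bandwidth(adj, order):
    """Matrix bandwidth implied by a node ordering."""
    pos = {n: i for i, n in enumerate(order)}
    return max((abs(pos[u] - pos[v]) for u in adj for v in adj[u]), default=0)
```

    On a path graph whose labels are scrambled, the RCM ordering recovers a bandwidth of 1 where the original labelling has a much larger one.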

  2. Update on mandibular condylar fracture management.

    PubMed

    Weiss, Joshua P; Sawhney, Raja

    2016-08-01

    Fractures of the mandibular condyle have provided a lasting source of controversy in the field of facial trauma. Concerns regarding facial nerve injury as well as reasonable functional outcomes with closed management led to a reluctance to treat with an open operative intervention. This article reviews how incorporating new technologies and surgical methods have changed the treatment paradigm. Multiple large studies and meta-analyses continue to demonstrate superior outcomes for condylar fractures when managed surgically. Innovations, including endoscopic techniques, three-dimensional miniplates, and angled drills provide increased options in the treatment of condylar fractures. The literature on pediatric condylar fractures is limited and continues to favor a more conservative approach. There continues to be mounting evidence in radiographic, quality of life, and functional outcome studies to support open reduction with internal fixation for the treatment of condylar fractures in patients with malocclusion, significant displacement, or dislocation of the temporomandibular joint. The utilization of three-dimensional trapezoidal miniplates has shown improved outcomes and theoretically enhanced biomechanical properties when compared with traditional fixation with single or double miniplates. Endoscopic-assisted techniques can decrease surgical morbidity, but are technically challenging, require skilled assistants, and utilize specialized equipment.

  3. Three-Dimensional Constraint Effects on the Estimated ΔCTOD during the Numerical Simulation of Different Fatigue Threshold Testing Techniques

    NASA Technical Reports Server (NTRS)

    Seshadri, Banavara R.; Smith, Stephen W.

    2007-01-01

    Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) method. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study their effect on the estimated ΔCTOD. During each simulation, ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant-R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant-Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated that ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures, and its implication for the estimation of fatigue crack growth threshold values, is discussed.

  4. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  6. A fast multi-resolution approach to tomographic PIV

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional anemometric non-intrusive measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed into the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handle the problem of limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimation of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in the memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line of sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performances in terms of accuracy.
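
    The MART update at the core of the algebraic reconstruction is a multiplicative row-action correction of the form x_j <- x_j * (b_i / (A x)_i)^(mu * a_ij). A minimal dense-matrix sketch (real Tomo-PIV uses sparse camera-projection weight matrices and voxel grids; names and the relaxation choice are assumptions):

```python
def mart(A, b, n_iters=500, relax=1.0):
    """Multiplicative Algebraic Reconstruction Technique (MART) for A x = b
    with non-negative A and positive b, starting from a positive guess.
    A is a dense list of rows; each row update multiplies the unknowns by
    (b_i / (A x)_i) ** (relax * a_ij), preserving positivity."""
    n_rows, n_cols = len(A), len(A[0])
    x = [1.0] * n_cols  # positive initial guess
    for _ in range(n_iters):
        for i in range(n_rows):
            proj = sum(A[i][j] * x[j] for j in range(n_cols))
            if proj <= 0.0 or b[i] <= 0.0:
                continue
            ratio = b[i] / proj
            for j in range(n_cols):
                if A[i][j] > 0.0:
                    x[j] *= ratio ** (relax * A[i][j])
    return x
```

    On a small consistent system the iteration drives each row residual toward zero.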

  7. Methods for the preparation of an autologous serum-free cultured epidermis and for autografting applications.

    PubMed

    Wille, John J; Burdge, Jeremy J; Park, Jong Y

    2014-01-01

    Cell culture techniques for producing a three-dimensional autologous epidermal autograft (cultured epidermal autograft) suitable for tissue grafting and wound healing procedures are described. This chapter commences with surgical biopsy of the patient's skin tissue, reduction of the skin tissue to keratinocyte cells by enzymatic treatment, and recovery of viable adult keratinocytes in a new balanced buffered salt medium supportive of the growth of clonally enriched isolated basal keratinocytes. Culture techniques required for the formation of a hole-free monolayer of undifferentiated basal keratinocytes without the use of an organotypic matrix substrate are accomplished with a specially designed, chemically defined nutrient basal medium (HECK 109), followed by culture in this serum-free medium supplemented with hormones and two human recombinant protein growth factors (EGF and IGF-1). Further culture techniques and media manipulations, including brief exposure to TGF-β to induce reversible G1-phase growth arrest, are followed by para-synchronous induction of multilayered stratification and keratinizing epidermal differentiation, yielding a living three-dimensional epidermis formed entirely in cell culture. Protocols are listed for its enzymatic removal, floatation, and transfer for shipment to the clinic ready for surgical grafting to the self-same patient's debrided chronic leg ulcers. Recent clinical trial results have demonstrated the utility and efficacy of these grafts in forming durably healed chronic wounds.

  8. Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.

    PubMed

    Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson

    2018-02-01

    Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled as two-dimensional Gaussian distributions with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging from 8 to 20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using SNR and spot efficiency as metrics. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked best for images with SNRs ranging from 10 to 20 dB, whereas WTTV performed better at high noise levels. Wavelet also presented the best performance in terms of SNR with any level of Gaussian noise and with low levels (20-14 dB) of Rayleigh and exponential noise. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image.
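
    As a toy illustration of the wavelet-thresholding family of filters compared in this review, here is a one-level Haar decomposition with soft thresholding of the detail coefficients. This is a 1-D sketch for brevity; actual gel-image denoising is 2-D and multi-level, and the threshold value is an assumption to be tuned to the noise level:

```python
import math
import random

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising with soft thresholding.
    The length of `signal` must be even."""
    half = len(signal) // 2
    # Forward transform: pairwise averages (approximation) and differences (detail).
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2) for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2) for i in range(half)]
    # Soft-threshold the detail coefficients, where broadband noise concentrates.
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    # Inverse transform.
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out
```

    On a smooth signal corrupted by additive Gaussian noise, the thresholded reconstruction lands measurably closer to the clean signal than the noisy input.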

  9. Signal separation by nonlinear projections: The fetal electrocardiogram

    NASA Astrophysics Data System (ADS)

    Schreiber, Thomas; Kaplan, Daniel T.

    1996-05-01

    We apply a locally linear projection technique, developed for noise reduction in deterministically chaotic signals, to extract the fetal component from scalar maternal electrocardiographic (ECG) recordings. Although we do not expect the maternal ECG to be deterministically chaotic, typical signals are effectively confined to a lower-dimensional manifold when embedded in delay space. The method is capable of extracting the fetal heart rate even when the fetal component and the noise are of comparable amplitude. If the noise is small, more details of the fetal ECG, such as P and T waves, can be recovered.
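
    The delay-space embedding on which such projection techniques operate can be sketched in a few lines; the embedding dimension and lag below are placeholders to be tuned (e.g., via autocorrelation or false-nearest-neighbour criteria), and the locally linear projection itself is not shown:

```python
def delay_embed(series, dim, lag):
    """Time-delay embedding of a scalar series: each point t is mapped to the
    vector (x_t, x_{t-lag}, ..., x_{t-(dim-1)*lag})."""
    start = (dim - 1) * lag
    return [[series[t - k * lag] for k in range(dim)]
            for t in range(start, len(series))]
```
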

  10. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with more than five dimensions. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
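One standard analytical map for uniformly-distributed simplex sampling (a common construction, not necessarily the authors' map) sends sorted uniform variates on the hypercube to barycentric coordinates via successive differences:

```python
import numpy as np

def uniform_simplex(n_pts, n_dim, rng):
    # sorted-uniforms trick: differences of sorted U(0,1) samples (padded with
    # 0 and 1) are uniformly distributed over the n_dim-simplex
    u = np.sort(rng.random((n_pts, n_dim)), axis=1)
    padded = np.concatenate([np.zeros((n_pts, 1)), u, np.ones((n_pts, 1))], axis=1)
    return np.diff(padded, axis=1)

rng = np.random.default_rng(0)
bary = uniform_simplex(1000, 3, rng)   # barycentric weights over a 4-vertex simplex
```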

  11. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative approach to wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensory descriptors, and Artificial Neural Networks (ANNs) for classification purposes. A total of 26 wines from different regions, varieties and elaboration processes have been analyzed with an e-nose and tasted by a sensory panel. Successful results have been obtained in most cases for prediction and classification. PMID:22969387

  12. Development of technique for three-dimensional visualization of grain boundaries by white X-ray microbeam

    NASA Astrophysics Data System (ADS)

    Kajiwara, K.; Shobu, T.; Toyokawa, H.; Sato, M.

    2014-04-01

    A technique for three-dimensional visualization of grain boundaries was developed at beamline BL28B2 at SPring-8. The technique uses white X-ray microbeam diffraction and a rotating slit. For demonstration purposes, three-dimensional images of small silicon single crystals packed into a plastic tube were successfully obtained using this technique. The images were consistent with those obtained by X-ray computed tomography.

  13. Stable orthogonal local discriminant embedding for linear dimensionality reduction.

    PubMed

    Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin

    2013-07-01

    Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the resulting algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as the diversity of patterns, and then incorporate this diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce an orthogonality constraint on the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
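As a rough analogue of such discriminant embeddings, the sketch below solves a Fisher-style scatter-ratio eigenproblem and then orthonormalizes the projection basis; this is not the authors' exact objective, which additionally models intraclass diversity through an adjacency graph:

```python
import numpy as np

rng = np.random.default_rng(0)
# two synthetic classes in 6 dimensions (invented data)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(2, 1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)

mean = X.mean(axis=0)
# within-class and between-class scatter matrices
Sw = sum((X[y == c] - X[y == c].mean(0)).T @ (X[y == c] - X[y == c].mean(0))
         for c in np.unique(y))
Sb = sum(len(X[y == c]) * np.outer(X[y == c].mean(0) - mean,
                                   X[y == c].mean(0) - mean)
         for c in np.unique(y))

# leading eigenvector of Sw^{-1} Sb maximizes the discriminant ratio
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = evecs[:, np.argsort(evals.real)[::-1][:1]].real
Q, _ = np.linalg.qr(W)     # orthonormal basis, as in the orthogonal variant
Z = X @ Q                  # 1-D discriminant embedding
```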

  14. Early improvement in left atrial remodeling and function after mitral valve repair or replacement in organic symptomatic mitral regurgitation assessed by three-dimensional echocardiography.

    PubMed

    Le Bihan, David C S; Della Togna, Dorival Julio; Barretto, Rodrigo B M; Assef, Jorge Eduardo; Machado, Lúcia Romero; Ramos, Auristela Isabel de Oliveira; Abdulmassih Neto, Camilo; Moisés, Valdir Ambrosio; Sousa, Amanda G M R; Campos, Orlando

    2015-07-01

    Left atrial (LA) dilation is associated with worse prognosis in various clinical situations, including chronic mitral regurgitation (MR). Real-time three-dimensional echocardiography (3DE) has allowed a better assessment of LA volumes and function. Little is known about LA size and function in the early postoperative period in symptomatic patients with chronic organic MR. We aimed to investigate these aspects. By means of 3DE, 43 patients with symptomatic chronic organic MR were prospectively studied before and 30 days after surgery (repair or bioprosthetic valve replacement). Twenty subjects were studied as controls. Maximum (Vol-max), minimum, and preatrial contraction LA volumes were measured, and total, passive, and active LA emptying fractions were calculated. Before surgery, patients had higher LA volumes (P < 0.001) but smaller LA emptying fractions than controls (P < 0.01). After surgery, there was a reduction in all 3 LA volumes and an increase in active atrial emptying fraction (AAEF). Multivariate analysis showed that independent predictors of early postoperative Vol-max reduction were preoperative diastolic blood pressure (coefficient = -0.004; P = 0.02), lateral mitral annular early diastolic velocity (e') (coefficient = 0.023; P = 0.008), and the mean transmitral diastolic gradient increment (coefficient = -0.035; P < 0.001). Furthermore, e' was also independently associated with AAEF increase (odds ratio = 1.66, P = 0.027). Early LA reverse remodeling and functional improvement occur after successful surgery for symptomatic organic MR regardless of surgical technique. Diastolic blood pressure and transmitral mean gradient augmentation are negatively related to Vol-max reduction, whereas e' is positively correlated with both Vol-max reduction and AAEF increase. © 2014, Wiley Periodicals, Inc.
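The three emptying fractions can be computed directly from the three measured LA volumes; the formulas below are the usual volumetric definitions (the abstract does not spell them out), and the volumes in mL are invented for the example:

```python
def la_emptying_fractions(vol_max, vol_pre_a, vol_min):
    # standard volumetric definitions (assumed, not stated in the abstract)
    total   = (vol_max - vol_min) / vol_max      # total emptying fraction
    passive = (vol_max - vol_pre_a) / vol_max    # passive (conduit) phase
    active  = (vol_pre_a - vol_min) / vol_pre_a  # active (booster) phase, AAEF
    return total, passive, active

# invented volumes: Vol-max = 80 mL, pre-atrial-contraction = 60 mL, Vol-min = 40 mL
total, passive, active = la_emptying_fractions(80.0, 60.0, 40.0)
```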

  15. 3D surface pressure measurement with single light-field camera and pressure-sensitive paint

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth

    2018-05-01

    A novel technique that simultaneously measures three-dimensional model geometry as well as surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to that of the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with the Schlieren tests, analytical calculations, and numerical simulations.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel

    Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion-based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state of the art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios demonstrating how SeekAView can be applied in real-world data analysis scenarios.

  17. An adaptive front tracking technique for three-dimensional transient flows

    NASA Astrophysics Data System (ADS)

    Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.

    2000-01-01

    An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although roughly the same structures are found with both techniques, the resolution of the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation of the advected blob. Adaptive front tracking is suitable for simulating the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm benefits significantly from parallelization of the code.
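A minimal version of stretching-based front refinement inserts points wherever a segment has been stretched beyond a threshold; the sketch below handles only the stretching criterion on a 2-D polyline, not the curvature analysis or the 3-D surface case:

```python
import numpy as np

def refine_front(points, max_seg):
    # insert a midpoint wherever stretching has made a segment too long
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        if np.linalg.norm(b - a) > max_seg:
            out.append((a + b) / 2.0)   # curvature-based splitting would go here
        out.append(b)
    return np.array(out)

# a coarse circle (8 segments); every chord exceeds the threshold and is split
circle = np.array([[np.cos(t), np.sin(t)] for t in np.linspace(0, 2 * np.pi, 9)])
refined = refine_front(circle, max_seg=0.5)
```

In a real tracker this step runs after every advection step, so the point density follows the exponential growth of the interface.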

  18. Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.

    2003-01-01

    The use of multi-dimensional finite volume numerical techniques with finite-thickness models for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-oriented heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients due to the striation patterns, two-dimensional heat transfer techniques were necessary to obtain accurate heating rates. The one-dimensional technique produced differences of 20% in the calculated heating rates because it did not account for lateral heat conduction in the model.
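For the one-dimensional semi-infinite technique mentioned above, a common data-reduction scheme is the Cook-Felderman discretization of the semi-infinite conduction solution; the sketch below checks it against the known constant-flux temperature response, with unit material properties assumed purely for the example:

```python
import numpy as np

def cook_felderman(t, T, rho, c, k):
    # surface heat flux from a sampled surface-temperature history using the
    # classical 1-D semi-infinite solution (exact for piecewise-linear T)
    coef = 2.0 * np.sqrt(rho * c * k / np.pi)
    n = len(t) - 1
    q = 0.0
    for i in range(1, n + 1):
        q += (T[i] - T[i - 1]) / (np.sqrt(t[n] - t[i]) + np.sqrt(t[n] - t[i - 1]))
    return coef * q

# exact response of a semi-infinite solid to constant flux q0 = 1 (rho = c = k = 1):
# T(t) = 2 * q0 * sqrt(t / (pi * rho * c * k))
t = np.linspace(0.0, 1.0, 1001)
T = 2.0 * np.sqrt(t / np.pi)
q_recovered = cook_felderman(t, T, rho=1.0, c=1.0, k=1.0)
```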

  19. [Application of rational ant colony optimization to improve the reproducibility degree of laser three-dimensional copy].

    PubMed

    Cui, Xiao-Yan; Huo, Zhong-Gang; Xin, Zhong-Hua; Tian, Xiao; Zhang, Xiao-Dong

    2013-07-01

    Three-dimensional (3D) copying of artificial ears and the printing of pistols are pushing the laser three-dimensional copying technique to new prominence. Laser three-dimensional scanning is a young field in laser applications and plays an irreplaceable part in three-dimensional copying; its accuracy is the highest among all present copying techniques. The reproducibility degree marks the geometric agreement of the copied object with the original object, and is the most important index of quality in the laser three-dimensional copying technique. In the present paper, the error of laser three-dimensional copying was analyzed. The conclusion is that the processing of the laser-scanned point cloud is the key technique for reducing the error and increasing the reproducibility degree. The main innovation of this paper is as follows: on the basis of traditional ant colony optimization, the rational ant colony optimization algorithm proposed by the authors was applied to laser three-dimensional copying as a new algorithm and put into practice. Compared with the customary algorithm, the rational ant colony optimization algorithm shows distinct advantages in the data processing of laser three-dimensional copying, reducing the error and increasing the reproducibility degree of the copy.

  20. Risk Assessment Using the Three Dimensions of Probability (Likelihood), Severity, and Level of Control

    NASA Technical Reports Server (NTRS)

    Watson, Clifford C.

    2011-01-01

    Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by the relative likelihood and severity of the residual risk. These matrices present a quick look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes while adding a new, third dimension, shown as the Z-axis and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training). 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting, gives a visual confirmation of the relationship between causes and their controls.
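The abstract defines the three axes but no scoring formula; as an illustration only, a hypothetical product-form score makes a poorly controlled hazard stand out as a "tall pole" even against a more likely, more severe hazard that has been designed out:

```python
# hypothetical numeric encoding of the three axes (the scoring formula below is
# an assumption for illustration, not the report's method)
def risk_score(likelihood, severity, level_of_control):
    # likelihood, severity: 1 (low) .. 5 (high)
    # level_of_control: 1 (eliminated by design) .. 6 (PPE only, least effective)
    return likelihood * severity * level_of_control

poorly_controlled = risk_score(3, 4, 6)   # moderate hazard, PPE only
designed_out = risk_score(5, 5, 1)        # severe hazard, eliminated by design
```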

  1. Risk Presentation Using the Three Dimensions of Likelihood, Severity, and Level of Control

    NASA Technical Reports Server (NTRS)

    Watson, Clifford

    2010-01-01

    Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by the relative likelihood and severity of the residual risk. These matrices present a quick look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes while adding a new, third dimension, shown as the Z-axis and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training). 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting, gives a visual confirmation of the relationship between causes and their controls.

  2. Nonlinear dimensionality reduction of CT histogram based feature space for predicting recurrence-free survival in non-small-cell lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2015-03-01

    Advances in CT scanners with high resolution have allowed the improved detection of lung cancers. The recent release of positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study shows the efficacy of CT based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and for preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance in predicting the likelihood of recurrence-free survival (RFS) for patients with NSCLC.
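A manifold-learning step of the kind described can be sketched with Isomap; the "CT histogram features" below are an invented curve-shaped stand-in, not clinical data:

```python
import numpy as np
from sklearn.manifold import Isomap

# invented stand-in for CT histogram features: 120 cases x 64 bins lying near a
# one-dimensional manifold (a smooth curve) in feature space
t = np.linspace(0.0, 1.0, 120)
H = np.column_stack([np.sin((j + 1) * t) for j in range(64)])

# nonlinear dimensionality reduction to a 2-D embedding
embedding = Isomap(n_neighbors=8, n_components=2).fit_transform(H)
```

In the study's setting, the low-dimensional coordinates would then feed a survival model for RFS prediction.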

  3. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    PubMed

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One class of DR algorithms is based on latent variable models (LVMs). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. In addition, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction than PCA, GPLVM, ISOMAP, etc.

  4. The dimensionality reduction at surfaces as a playground for many-body and correlation effects

    NASA Astrophysics Data System (ADS)

    Tejeda, A.; Michel, E. G.; Mascaraque, A.

    2013-03-01

    Low-dimensional systems have always deserved attention due to the peculiarity of their physics, which is different from or even at odds with three-dimensional expectations. This is precisely the case for many-body effects, as electron-electron correlation or electron-phonon coupling are behind many intriguing problems in condensed matter physics. These interesting phenomena at low dimensions can be studied in one of the paradigms of two dimensionality—the surface of crystals. The maturity of today's surface science techniques allows us to perform thorough experimental studies that can be complemented by the current strength of state-of-the-art calculations. Surfaces are thus a natural two-dimensional playground for studying correlation and many-body effects, which is precisely the object of this special section. This special section presents a collection of eight invited articles, giving an overview of the current status of selected systems, promising techniques and theoretical approaches for studying many-body effects at surfaces and low-dimensional systems. The first article by Hofmann investigates electron-phonon coupling in quasi-free-standing graphene by decoupling graphene from two different substrates with different intercalating materials. The following article by Kirschner deals with the study of NiO films by electron pair emission, a technique particularly well-adapted for studying high electron correlation. Bovensiepen investigates electron-phonon coupling via the femtosecond time- and angle-resolved photoemission spectroscopy technique. The next article by Malterre analyses the phase diagram of alkalis on Si(111):B and studies the role of many-body physics. Biermann proposes an extended Hubbard model for the series of C, Si, Sn and Pb adatoms on Si(111) and obtains the inter-electronic interaction parameters by first principles. 
Continuing with the theoretical studies, Bechstedt analyses the influence of on-site electron correlation in insulating antiferromagnetic surfaces. Ortega reports on the gap of molecular layers on metal systems, where the metal-organic interaction affects the organic gap through correlation effects. Finally, Cazalilla presents a study of the phase diagram of one-dimensional atoms or molecules displaying a Kondo-exchange interaction with the substrate. Acknowledgments. The editors are grateful to all the invited contributors to this special section of Journal of Physics: Condensed Matter. We also thank the IOP Publishing staff for handling the administrative matters and the refereeing process.
Correlation and many-body effects at surfaces: contents
The dimensionality reduction at surfaces as a playground for many-body and correlation effects (A Tejeda, E G Michel and A Mascaraque)
Electron-phonon coupling in quasi-free-standing graphene (Jens Christian Johannsen, Søren Ulstrup, Marco Bianchi, Richard Hatch, Dandan Guan, Federico Mazzola, Liv Hornekær, Felix Fromm, Christian Raidel, Thomas Seyller and Philip Hofmann)
Exploring highly correlated materials via electron pair emission: the case of NiO/Ag(100) (F O Schumann, L Behnke, C H Li and J Kirschner)
Coherent excitations and electron-phonon coupling in Ba/EuFe2As2 compounds investigated by femtosecond time- and angle-resolved photoemission spectroscopy (I Avigo, R Cortés, L Rettig, S Thirupathaiah, H S Jeevan, P Gegenwart, T Wolf, M Ligges, M Wolf, J Fink and U Bovensiepen)
Understanding the insulating nature of alkali-metal/Si(111):B interfaces (Y Fagot-Revurat, C Tournier-Colletta, L Chaput, A Tejeda, L Cardenas, B Kierren, D Malterre, P Le Fèvre, F Bertran and A Taleb-Ibrahimi)
What about U on surfaces? Extended Hubbard models for adatom systems from first principles (Philipp Hansmann, Loïg Vaugier, Hong Jiang and Silke Biermann)
Influence of on-site Coulomb interaction U on properties of MnO(001)2 × 1 and NiO(001)2 × 1 surfaces (A Schrön, M Granovskij and F Bechstedt)
On the organic energy gap problem (F Flores, E Abad, J I Martínez, B Pieczyrak and J Ortega)
Easy-axis ferromagnetic chain on a metallic surface (Miguel A Cazalilla)

  5. Dimensionality reduction based on distance preservation to local mean for symmetric positive definite matrices and its application in brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh

    2017-06-01

    Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III, namely datasets IIIa and IVa. The results show that our approach, used as a dimensionality reduction technique, leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. The statistical analysis also shows that our dimensionality reduction method performs significantly better than its competitors.
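A core ingredient for any such method is a distance that respects the SPD geometry; the sketch below implements the affine-invariant Riemannian distance, one common choice for SPD covariance matrices in BCI work, though not necessarily the exact metric used by the authors:

```python
import numpy as np

def spd_distance(A, B):
    # affine-invariant Riemannian distance between SPD matrices:
    # d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, via symmetric eigendecompositions
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return float(np.sqrt(np.sum(np.log(np.linalg.eigvalsh(M)) ** 2)))

# distance between two sample covariance matrices (invented data)
rng = np.random.default_rng(0)
C1 = np.cov(rng.normal(size=(200, 4)).T)
C2 = np.cov(rng.normal(size=(200, 4)).T)
d = spd_distance(C1, C2)
```

Distances of this kind to local class means are exactly what a DPLM-style embedding would try to preserve in the low-dimensional space.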

  6. On the precision of quasi steady state assumptions in stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Agarwal, Animesh; Adams, Rhys; Castellani, Gastone C.; Shouval, Harel Z.

    2012-07-01

    Many biochemical networks have complex multidimensional dynamics, and there is a long history of methods that have been used for dimensionality reduction of such reaction networks. Usually a deterministic mass action approach is used; however, in small volumes there are significant fluctuations from the mean which the mass action approach cannot capture. In such cases stochastic simulation methods should be used. In this paper, we evaluate the applicability of one such dimensionality reduction method, the quasi-steady state approximation (QSSA) [L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z. 49, 333-369 (1913)], for dimensionality reduction in the case of stochastic dynamics. First, the applicability of the QSSA approach is evaluated for a canonical system of enzyme reactions. Application of the QSSA to such a reaction system in a deterministic setting leads to the reduced Michaelis-Menten kinetics, which can be used to derive the equilibrium concentrations of the reaction species. In the case of stochastic simulations, however, the steady state is characterized by fluctuations around the mean equilibrium concentration. Our analysis shows that a QSSA-based approach to dimensionality reduction captures well the mean of the distribution as obtained from a full-dimensional simulation but fails to accurately capture the distribution around that mean. Moreover, the QSSA approximation is not unique. We then extended the analysis to a simple bistable biochemical network model proposed to account for the stability of synaptic efficacies, the substrate of learning and memory [J. E. Lisman, "A mechanism of memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Natl. Acad. Sci. U.S.A. 82, 3055-3057 (1985), 10.1073/pnas.82.9.3055]. Our analysis shows that a QSSA-based dimensionality reduction method results in errors as big as two orders of magnitude in predicting the residence times in the two stable states.
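The deterministic side of this comparison is easy to reproduce: integrating the full enzyme mechanism and its QSSA (Michaelis-Menten) reduction side by side shows the means agree when enzyme is scarce. The rate constants below are invented for the sketch:

```python
# full mechanism: E + S <-> ES -> E + P   (invented rate constants)
k1, km1, k2 = 1.0, 1.0, 0.1
E0, S0 = 1.0, 10.0                 # QSSA is reasonable since E0 << S0 + Km
Km = (km1 + k2) / k1               # Michaelis constant

def full_model(T, dt=1e-3):
    # forward-Euler integration of the full three-species mass-action system
    S, ES, P = S0, 0.0, 0.0
    for _ in range(int(T / dt)):
        E = E0 - ES                              # enzyme conservation
        dS = km1 * ES - k1 * E * S
        dES = k1 * E * S - (km1 + k2) * ES
        S, ES, P = S + dt * dS, ES + dt * dES, P + dt * k2 * ES
    return P

def qssa_model(T, dt=1e-3):
    # QSSA collapses ES, leaving the one-dimensional Michaelis-Menten rate law
    S, P = S0, 0.0
    for _ in range(int(T / dt)):
        v = k2 * E0 * S / (Km + S)
        S, P = S - dt * v, P + dt * v
    return P
```

The stochastic comparison in the paper replaces these ODE integrations with Gillespie-type simulations, where the reduced model matches the mean but not the fluctuations.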

  7. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.

    PubMed

    Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi

    2017-09-22

    DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The main current prediction methods are based on machine learning, and their accuracy mainly depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely, K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from a combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.
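A K-Skip-N-Grams feature extractor (here with N = 2) simply counts residue pairs separated by up to k skipped positions; a minimal sketch, with any normalization the authors may apply left out:

```python
from collections import Counter

def k_skip_2_grams(seq, k):
    # count all residue pairs with at most k residues skipped between them
    # (gap = 1 is an ordinary bigram; gap = k + 1 skips k residues)
    grams = Counter()
    for i in range(len(seq)):
        for gap in range(1, k + 2):
            if i + gap < len(seq):
                grams[seq[i] + seq[i + gap]] += 1
    return grams

feats = k_skip_2_grams("MKVLA", k=1)   # toy 5-residue sequence
```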

  8. Periodic orbit analysis of a system with continuous symmetry--A tutorial.

    PubMed

    Budanur, Nazmi Burak; Borrero-Echeverry, Daniel; Cvitanović, Predrag

    2015-07-01

    Dynamical systems with translational or rotational symmetry arise frequently in studies of spatially extended physical systems, such as Navier-Stokes flows on periodic domains. In these cases, it is natural to express the state of the fluid in terms of a Fourier series truncated to a finite number of modes. Here, we study a 4-dimensional model with chaotic dynamics and SO(2) symmetry similar to those that appear in fluid dynamics problems. A crucial step in the analysis of such a system is symmetry reduction. We use the model to illustrate different symmetry-reduction techniques. The system's relative equilibria are conveniently determined by rewriting the dynamics in terms of a symmetry-invariant polynomial basis. However, for the analysis of its chaotic dynamics, the "method of slices," which is applicable to very high-dimensional problems, is preferable. We show that a Poincaré section taken on the "slice" can be used to further reduce this flow to what is for all practical purposes a unimodal map. This enables us to systematically determine all relative periodic orbits and their symbolic dynamics up to any desired period. We then present cycle averaging formulas adequate for systems with continuous symmetry and use them to compute dynamical averages using relative periodic orbits. The convergence of such computations is discussed.
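The invariant-polynomial rewriting can be illustrated for two complex Fourier modes; the SO(2) action assumed below (both modes rotated by the same phase) is the simplest choice and may differ from the action in the paper's model:

```python
import numpy as np

def invariants(z1, z2):
    # |z1|^2, |z2|^2 and Re/Im of z1 * conj(z2) are unchanged when both modes
    # are multiplied by the same phase factor exp(i*theta)
    cross = z1 * np.conj(z2)
    return np.abs(z1) ** 2, np.abs(z2) ** 2, cross.real, cross.imag

# an invented trajectory of two complex modes
rng = np.random.default_rng(0)
z1 = rng.normal(size=100) + 1j * rng.normal(size=100)
z2 = rng.normal(size=100) + 1j * rng.normal(size=100)

theta = 0.7   # an arbitrary group rotation
original = invariants(z1, z2)
rotated = invariants(np.exp(1j * theta) * z1, np.exp(1j * theta) * z2)
```

Writing the dynamics in such invariants removes the group direction, which is the step that makes the relative equilibria easy to locate; the "method of slices" achieves the same reduction without enumerating polynomials.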

  9. Three-Dimensional Printing of pH-Responsive and Functional Polymers on an Affordable Desktop Printer.

    PubMed

    Nadgorny, Milena; Xiao, Zeyun; Chen, Chao; Connal, Luke A

    2016-10-26

    In this work we describe the synthesis, thermal and rheological characterization, hot-melt extrusion, and three-dimensional printing (3DP) of poly(2-vinylpyridine) (P2VP). We investigate the effect of thermal processing conditions on physical properties of produced filaments in order to achieve high quality, 3D-printable filaments for material extrusion 3DP (ME3DP). Mechanical properties and processing performances of P2VP were enhanced by addition of 12 wt % acrylonitrile-butadiene-styrene (ABS), which reinforced P2VP fibers. We 3D-print P2VP filaments using an affordable 3D printer. The pyridine moieties are cross-linked and quaternized postprinting to form 3D-printed pH-responsive hydrogels. The printed objects exhibited dynamic and reversible pH-dependent swelling. These hydrogels act as flow-regulating valves, controlling the flow rate with pH. Additionally, a macroporous P2VP membrane was 3D-printed and the coordinating ability of the pyridyl groups was employed to immobilize silver precursors on its surface. After the reduction of silver ions, the structure was used to catalyze the reduction of 4-nitrophenol to 4-aminophenol with a high efficiency. This is a facile technique to print recyclable catalytic objects.

  10. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new and novel approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and the information-theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
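The information-criterion selection of the number of PPCs can be sketched with scikit-learn, whose `PCA.score` returns the average log-likelihood under the PPCA model; the parameter count and the synthetic data below are assumptions for the illustration, not the authors' hybridized estimators or ICOMP machinery:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# rank-3 signal embedded in 20 observed dimensions, n = 200 samples (invented)
Z = rng.normal(size=(200, 3))
X = Z @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(200, 20))

def ppca_aic(X, q):
    # AIC = -2 log L + 2 p; the PPCA parameter count below (mean + loading
    # matrix up to rotation + noise variance) follows the usual convention
    n, d = X.shape
    log_lik = PCA(n_components=q).fit(X).score(X) * n
    n_params = d + d * q - q * (q - 1) / 2 + 1
    return -2.0 * log_lik + 2.0 * n_params

aic = {q: ppca_aic(X, q) for q in range(1, 8)}
best_q = min(aic, key=aic.get)   # number of PPCs retained
```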

  11. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    PubMed Central

    Deng, Zhongliang; Zhu, Di

    2018-01-01

    The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input-multiple-output (MIMO) communication systems, and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm will converge within finitely many steps and always produce better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL in reducing the condition number of an ill-conditioned input matrix, with a 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection significantly reduces the number of swaps in the algorithm, by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
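
    For reference, the two-vector Lovász test that n-LLL generalizes appears in the textbook LLL algorithm, sketched below in a minimal float-arithmetic version with δ = 0.75 that recomputes Gram-Schmidt after each update for clarity rather than speed (the input basis is an illustrative example, not from the paper):

```python
import numpy as np

def lll(B, delta=0.75):
    """Textbook LLL reduction of the row basis B (floats; clarity over speed)."""
    B = np.array(B, dtype=float)
    n = B.shape[0]

    def gram_schmidt(B):
        Bs, mu = B.copy(), np.eye(n)
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
            q = round(mu[k, j])
            if q != 0:
                B[k] -= q * B[j]
                Bs, mu = gram_schmidt(B)
        # Two-vector Lovász condition (the test that n-LLL extends to n vectors).
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]    # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

reduced = lll([[1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], [3.0, 5.0, 6.0]])
```

    The reduced basis spans the same lattice (the determinant is preserved up to sign) while its vectors are markedly shorter and closer to orthogonal, which is what shrinks the ILS search space.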

  12. PCA based clustering for brain tumor segmentation of T1w MRI images.

    PubMed

    Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay

    2017-03-01

    Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two cluster methods. The five most common PCA algorithms, namely conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all methods. Furthermore, the EM-PCA and the PPCA helped the K-Means algorithm accomplish the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
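
    A minimal sketch of this kind of pipeline, here with conventional PCA via SVD followed by a bare-bones K-Means on synthetic two-population "image" rows (the study itself compares five PCA variants and two clustering methods on real T1w MRIs):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for a T1w slice: 64 rows drawn from two intensity populations.
image = np.vstack([rng.normal(0.2, 0.05, (32, 64)),    # "background" rows
                   rng.normal(0.8, 0.05, (32, 64))])   # "tumour-like" rows

# Conventional PCA via SVD of the centred data; keep 4 components.
mean = image.mean(axis=0)
U, S, Vt = np.linalg.svd(image - mean, full_matrices=False)
scores = U[:, :4] * S[:4]                  # each row reduced to 4 features
recon = scores @ Vt[:4] + mean             # reconstruction from 4 components
recon_err = np.linalg.norm(image - recon) / np.linalg.norm(image)

# Bare-bones Lloyd's K-Means (2 clusters) on the reduced scores.
centers = scores[[0, -1]].copy()           # init one centre in each population
for _ in range(20):
    labels = np.argmin(((scores[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == c].mean(axis=0) for c in range(2)])
```

    The reconstruction error and the cluster assignments in the reduced space are exactly the two quantities the paper compares across PCA variants and image sizes.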

  13. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the complex uncertainty and multiple physical scales present in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity properties and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
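
    A first-order, single-element least square HDMR fit can be sketched as below: the surrogate f0 + Σi fi(xi) is expanded in Legendre polynomials and the coefficients are found by least squares. This is an illustrative sketch on a synthetic additive model; the paper's method additionally decomposes the random domain adaptively and builds one such local fit per subdomain:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def design(X, deg):
    """First-order HDMR basis: a constant column plus Legendre polynomials
    of each input variable separately (f0 + sum_i f_i(x_i))."""
    cols = [np.ones(len(X))]
    for i in range(X.shape[1]):
        for d in range(1, deg + 1):
            cols.append(Legendre.basis(d)(X[:, i]))
    return np.column_stack(cols)

rng = np.random.default_rng(5)
f = lambda x: np.sin(np.pi * x[:, 0]) + x[:, 1] ** 2    # additive test model
X = rng.uniform(-1, 1, size=(400, 2))
coef, *_ = np.linalg.lstsq(design(X, 6), f(X), rcond=None)   # least square fit

# Evaluate the surrogate on fresh samples.
X_new = rng.uniform(-1, 1, size=(200, 2))
pred = design(X_new, 6) @ coef
rel_err = np.linalg.norm(pred - f(X_new)) / np.linalg.norm(f(X_new))
```

    For a genuinely additive model the first-order expansion captures almost all the variance; interaction terms would require second-order component functions f_ij.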

  14. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  15. Interpretation of FTIR spectra of polymers and Raman spectra of car paints by means of likelihood ratio approach supported by wavelet transform for reducing data dimensionality.

    PubMed

    Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz

    2015-05-01

    The problem of interpreting the common provenance of samples within an infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints, was under investigation. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can easily be proposed for databases described by a few variables, the research focused on the problem of reducing the dimensionality of spectra characterised by more than a thousand variables. The objective of the studies was to combine chemometric tools that deal easily with multidimensionality with the LR approach. The final variables used for constructing the LR models were derived from the discrete wavelet transform (DWT), used as a data dimensionality reduction technique supported by methods of variance analysis, and corresponded with chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from the DWT preserve the signal characteristics, providing a sparse representation of the original signal that keeps its shape and relevant chemical information.
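
    The DWT-based variable reduction can be sketched with a plain multi-level Haar transform that keeps only the coarse approximation coefficients (the paper additionally selects coefficients by variance analysis; the synthetic two-band "spectrum" below is illustrative):

```python
import numpy as np

def haar_approx(signal, levels=3):
    """Multi-level Haar DWT, keeping only the approximation coefficients."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                        # pad to even length if needed
            a = np.append(a, a[-1])
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # orthonormal Haar low-pass
    return a

# Synthetic "spectrum": two Gaussian bands sampled on 1024 points.
x = np.linspace(0, 1, 1024)
spectrum = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.03) ** 2)
coarse = haar_approx(spectrum, levels=3)      # 1024 variables -> 128
```

    Because the bands are smooth relative to the sampling, the coarse coefficients retain both the peak positions and most of the signal energy, which is the sparse, shape-preserving behaviour the paper relies on.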

  16. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  17. A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony.

    PubMed

    Zhang, J W; Rangan, A V

    2015-04-01

    In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary differential equations. The class of integrate-and-fire networks we focus on are homogeneously structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of 'multiple-firing events' involving the semi-synchronous firing of many neurons. These multiple-firing events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the 4-dimensional augmented ODE system.

  18. Resolution enhancement in integral microscopy by physical interpolation.

    PubMed

    Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel

    2015-08-01

    Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of spatial resolution. In this contribution we report a technique that increases the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate that shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens.

  19. Resolution enhancement in integral microscopy by physical interpolation

    PubMed Central

    Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel

    2015-01-01

    Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of spatial resolution. In this contribution we report a technique that increases the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate that shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens. PMID:26309749

  20. Utilization of volume correlation filters for underwater mine identification in LIDAR imagery

    NASA Astrophysics Data System (ADS)

    Walls, Bradley

    2008-04-01

    Underwater mine identification remains a critical technology, pursued aggressively by the Navy for fleet protection. As such, new and improved techniques must continue to be developed in order to provide measurable increases in mine identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in the Volume Correlation Filter (VCF) developed for ground-based LIDAR systems can be adapted to identify targets in underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification employ spatial-domain three-dimensional (3D) shape fitting of models to LIDAR data to identify common mine shapes consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative to these spatial techniques by correlating 3D models against the 3D-rendered LIDAR data.

  1. Position specific interaction dependent scoring technique for virtual screening based on weighted protein-ligand interaction fingerprint profiles.

    PubMed

    Nandigam, Ravi K; Kim, Sangtae; Singh, Juswinder; Chuaqui, Claudio

    2009-05-01

    The desire to exploit structural information to aid structure based design and virtual screening led to the development of the interaction fingerprint for analyzing, mining, and filtering the binding patterns underlying the complex 3D data. In this paper we introduce a new approach, weighted SIFt (or w-SIFt), extending the concept of SIFt to capture the relative importance of different binding interactions. The methodology presented here for determining the weights in w-SIFt involves utilizing a dimensionality reduction technique for eliminating linear redundancies in the data followed by a stochastic optimization. We find that the relative weights of the fingerprint bits provide insight into what interactions are critical in determining inhibitor potency. Moreover, the weighted interaction fingerprint can serve as an interpretable position dependent scoring function for ligand protein interactions.

  2. Understanding decay resistance, dimensional stability and strength changes in heat treated and acetylated wood

    Treesearch

    Roger M. Rowell; Rebecca E. Ibach; James McSweeny; Thomas Nilsson

    2009-01-01

    Reductions in hygroscopicity, increased dimensional stability and decay resistance of heat-treated wood depend on decomposition of a large portion of the hemicelluloses in the wood cell wall. In theory, these hemicelluloses are converted to small organic molecules, water and volatile furan-type intermediates that can polymerize in the cell wall. Reductions in...

  3. One-dimensional manganese-cobalt oxide nanofibres as bi-functional cathode catalysts for rechargeable metal-air batteries

    PubMed Central

    Jung, Kyu-Nam; Hwang, Soo Min; Park, Min-Sik; Kim, Ki Jae; Kim, Jae-Geun; Dou, Shi Xue; Kim, Jung Ho; Lee, Jong-Won

    2015-01-01

    Rechargeable metal-air batteries are considered a promising energy storage solution owing to their high theoretical energy density. The major obstacles to realising this technology include the slow kinetics of oxygen reduction and evolution on the cathode (air electrode) upon battery discharging and charging, respectively. Here, we report non-precious metal oxide catalysts based on spinel-type manganese-cobalt oxide nanofibres fabricated by an electrospinning technique. The spinel oxide nanofibres exhibit high catalytic activity towards both oxygen reduction and evolution in an alkaline electrolyte. When incorporated as cathode catalysts in Zn-air batteries, the fibrous spinel oxides considerably reduce the discharge-charge voltage gaps (improve the round-trip efficiency) in comparison to the catalyst-free cathode. Moreover, the nanofibre catalysts remain stable over the course of repeated discharge-charge cycling; however, carbon corrosion in the catalyst/carbon composite cathode degrades the cycling performance of the batteries. PMID:25563733

  4. Analysis of dual-frequency MEMS antenna using H-MRTD method

    NASA Astrophysics Data System (ADS)

    Yu, Wenge; Zhong, Xianxin; Chen, Yu; Wu, Zhengzhong

    2004-10-01

    To apply micro/nano and Micro-Electro-Mechanical System (MEMS) technologies in the radio frequency (RF) field to the manufacture of miniature microstrip antennas, a novel MEMS dual-band patch antenna designed using slot-loaded and short-circuited size-reduction techniques is presented in this paper. By controlling the short-plane width, the two resonant frequencies, f10 and f30, can be significantly reduced, and the frequency ratio (f30/f10) is tunable in the range 1.7~2.3. The Haar-wavelet-based multiresolution time domain (H-MRTD) method, with compactly supported scaling functions applied to a full three-dimensional (3-D) wave on Yee's staggered cell, is used for modeling and analyzing the antenna for the first time. For the practical model, uniaxial perfectly matched layer (UPML) absorbing boundary conditions were developed, and the mathematical formulae were extended to an inhomogeneous medium. Numerical simulation results are compared with those of the conventional 3-D finite-difference time-domain (FDTD) method and with measurements. It has been demonstrated that, with this technique, space discretization with only a few cells per wavelength gives accurate results, leading to a reduction of both memory requirement and computation time.

  5. Advantage of four-electrode over two-electrode defibrillators

    NASA Astrophysics Data System (ADS)

    Bragard, J.; Šimić, A.; Laroze, D.; Elorza, J.

    2015-12-01

    Defibrillation is the standard clinical treatment used to stop ventricular fibrillation. An electrical device delivers a controlled amount of electrical energy via a pair of electrodes in order to reestablish a normal heart rate. We propose a technique that is a combination of biphasic shocks applied with a four-electrode system rather than the standard two-electrode system. We use a numerical model of a one-dimensional ring of cardiac tissue in order to test and evaluate the benefit of this technique. We compare three different shock protocols, namely a monophasic and two types of biphasic shocks. The results obtained by using a four-electrode system are compared quantitatively with those obtained with the standard two-electrode system. We find that a huge reduction in defibrillation threshold is achieved with the four-electrode system. For the most efficient protocol (asymmetric biphasic), we obtain a reduction in excess of 80% in the energy required for a defibrillation success rate of 90%. The mechanisms of successful defibrillation are also analyzed. This reveals that the advantage of asymmetric biphasic shocks with four electrodes lies in the duration of the cathodal and anodal phase of the shock.

  6. Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.

    2006-01-01

    The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by the striation patterns, multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to the two-dimensional analysis, because it did not account for lateral heat conduction in the model.

  7. Identifying Nanoscale Structure-Function Relationships Using Multimodal Atomic Force Microscopy, Dimensionality Reduction, and Regression Techniques.

    PubMed

    Kong, Jessica; Giridharagopal, Rajiv; Harrison, Jeffrey S; Ginger, David S

    2018-05-31

    Correlating nanoscale chemical specificity with operational physics is a long-standing goal of functional scanning probe microscopy (SPM). We employ a data analytic approach combining multiple microscopy modes, using compositional information in infrared vibrational excitation maps acquired via photoinduced force microscopy (PiFM) together with electrical information from conductive atomic force microscopy. We study a model polymer blend comprising insulating poly(methyl methacrylate) (PMMA) and semiconducting poly(3-hexylthiophene) (P3HT). We show that PiFM spectra are different from FTIR spectra but can still be used to identify local composition. We use principal component analysis to extract statistically significant principal components and principal component regression to predict local current and identify local polymer composition. In doing so, we observe evidence of semiconducting P3HT within PMMA aggregates. These methods are generalizable to correlated SPM data and provide a meaningful technique for extracting complex compositional information that is impossible to measure with any one technique.
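
    The principal component regression step can be sketched as follows on synthetic data: project the centred feature matrix onto its leading principal components, then regress the response on the scores. This is a generic PCR sketch, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 50
latent = rng.normal(size=(n, 3))                       # 3 hidden factors
X = latent @ rng.normal(size=(3, d)) + 0.05 * rng.normal(size=(n, d))
y = latent @ np.array([1.5, -2.0, 0.5]) + 0.05 * rng.normal(size=n)

# Scores on the leading 3 principal components of the centred features.
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
T = U[:, :3] * S[:3]

# Ordinary least squares in the reduced space (principal component regression).
beta, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = T @ beta + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Dropping the trailing principal components removes the linear redundancies in the features before fitting, which is what makes the regression coefficients interpretable in the reduced space.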

  8. Fractional exclusion and braid statistics in one dimension: a study via dimensional reduction of Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Ye, Fei; Marchetti, P. A.; Su, Z. B.; Yu, L.

    2017-09-01

    The relation between braid and exclusion statistics is examined in one-dimensional systems, within the framework of Chern-Simons statistical transmutation in gauge invariant form with an appropriate dimensional reduction. If the matter action is anomalous, as for chiral fermions, a relation between braid and exclusion statistics can be established explicitly for both mutual and nonmutual cases. However, if it is not anomalous, the exclusion statistics of emergent low energy excitations is not necessarily connected to the braid statistics of the physical charged fields of the system. Finally, we also discuss the bosonization of one-dimensional anyonic systems through T-duality. Dedicated to the memory of Mario Tonin.

  9. Reconstruction of dynamic image series from undersampled MRI data using data-driven model consistency condition (MOCCO).

    PubMed

    Velikina, Julia V; Samsonov, Alexey A

    2015-11-01

    To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
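
    The model-based dimensionality reduction that MOCCO is compared against can be sketched as a temporal-subspace projection of a Casorati (space × time) matrix. The sketch below is purely illustrative with synthetic data; MOCCO itself regularizes towards the pre-estimated model rather than hard-truncating the rank:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 60)
# A rank-3 temporal model: constant, periodic, and linear-drift time courses.
basis = np.stack([np.ones_like(t), np.sin(2 * np.pi * t), t])
series = rng.normal(size=(500, 3)) @ basis             # 500 voxels x 60 frames
noisy = series + 0.1 * rng.normal(size=series.shape)

# Temporal subspace estimated from the right singular vectors of the data.
_, _, Vt = np.linalg.svd(noisy, full_matrices=False)
V_r = Vt[:3]                                           # rank-3 temporal subspace
denoised = (noisy @ V_r.T) @ V_r                       # project each voxel's time course
err = np.linalg.norm(denoised - series) / np.linalg.norm(series)
```

    When the true dynamics lie outside the estimated subspace, this hard projection discards signal; that sensitivity to modeling errors is exactly the deficiency MOCCO's soft, full-rank formulation addresses.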

  10. RECONSTRUCTION OF DYNAMIC IMAGE SERIES FROM UNDERSAMPLED MRI DATA USING DATA-DRIVEN MODEL CONSISTENCY CONDITION (MOCCO)

    PubMed Central

    Velikina, Julia V.; Samsonov, Alexey A.

    2014-01-01

    Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to the standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724

  11. Classification of molecular structure images by using ANN, RF, LBP, HOG, and size reduction methods for early stomach cancer detection

    NASA Astrophysics Data System (ADS)

    Aytaç Korkmaz, Sevcan; Binol, Hamidullah

    2018-03-01

    Deaths from stomach cancer still occur, and early diagnosis is crucial in reducing the mortality rate of cancer patients. Therefore, computer-aided methods for early detection are developed in this article. Stomach cancer images were obtained from the Fırat University Medical Faculty Pathology Department. The Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) features of these images are calculated. Sammon mapping, Stochastic Neighbor Embedding (SNE), Isomap, classical multidimensional scaling (MDS), Locally Linear Embedding (LLE), Linear Discriminant Analysis (LDA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Laplacian Eigenmaps are then used for dimensionality reduction of the features: their high dimension is reduced to lower dimensions by these methods. Artificial neural network (ANN) and Random Forest (RF) classifiers were used to classify the stomach cancer images with these new, lower-dimensional feature sets. New medical systems were developed to measure the effect of feature dimensionality by obtaining features of different dimensions with the dimensionality reduction methods. When all the developed methods are compared, the best accuracy results are obtained with the LBP_MDS_ANN and LBP_LLE_ANN methods.
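
    A minimal version of the LBP texture feature used above can be sketched as follows (plain 8-neighbour LBP without the rotation-invariant or uniform-pattern refinements; the two textures are synthetic):

```python
import numpy as np

def lbp_histogram(img):
    """Normalised 256-bin histogram of plain 8-neighbour LBP codes."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(4)
smooth = rng.normal(size=(32, 32)).cumsum(0).cumsum(1)   # smooth texture
rough = rng.normal(size=(32, 32))                        # rough texture
f1, f2 = lbp_histogram(smooth), lbp_histogram(rough)
```

    Each image becomes a 256-dimensional histogram; it is these high-dimensional feature vectors that the paper then compresses with MDS, LLE, and the other dimensionality reduction methods before classification.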

  12. Digital dissection and three-dimensional interactive models of limb musculature in the Australian estuarine crocodile (Crocodylus porosus)

    PubMed Central

    Wilhite, D. Ray; White, Matt A.; Wroe, Stephen

    2017-01-01

    Digital dissection is a relatively new technique that has enabled scientists to gain a better understanding of vertebrate anatomy. It can be used to rapidly disseminate detailed, three-dimensional information in an easily accessible manner that reduces the need for destructive, traditional dissections. Here we present the results of a digital dissection on the appendicular musculature of the Australian estuarine crocodile (Crocodylus porosus). A better understanding of this until now poorly known system in C. porosus is important, not only because it will expand research into crocodilian locomotion, but because of its potential to inform muscle reconstructions in dinosaur taxa. Muscles of the forelimb and hindlimb are described and three-dimensional interactive models are included based on CT and MRI scans as well as fresh-tissue dissections. Differences in the arrangement of musculature between C. porosus and other groups within the Crocodylia were found. In the forelimb, differences are restricted to a single tendon of origin for triceps longus medialis. For the hindlimb, a reduction in the number of heads of ambiens was noted as well as changes to the location of origin and insertion for iliofibularis and gastrocnemius externus. PMID:28384201

  13. Incremental value of live/real time three-dimensional transesophageal echocardiography over the two-dimensional technique in the assessment of primary cardiac malignant fibrous histiocytoma.

    PubMed

    Gok, Gulay; Elsayed, Mahmoud; Thind, Munveer; Uygur, Begum; Abtahi, Firoozeh; Chahwala, Jugal R; Yıldırımtürk, Özlem; Kayacıoğlu, İlyas; Pehlivanoğlu, Seçkin; Nanda, Navin C

    2015-07-01

    We describe a case of primary cardiac malignant fibrous histiocytoma where live/real time three-dimensional transesophageal echocardiography added incremental value to the two-dimensional modalities. Specifically, the three-dimensional technique allowed us to delineate the true extent and infiltration of the tumor, to identify characteristics of the tumor mass suggestive of its malignant nature, and to quantitatively assess the total tumor burden. © 2015, Wiley Periodicals, Inc.

  14. Nonplanar KdV and KP equations for quantum electron-positron-ion plasma

    NASA Astrophysics Data System (ADS)

    Dutta, Debjit

    2015-12-01

    Nonlinear quantum ion-acoustic waves are studied with the effects of nonplanar cylindrical geometry, quantum corrections, and transverse perturbations. Using the standard reductive perturbation technique, a cylindrical Kadomtsev-Petviashvili equation for ion-acoustic waves is derived that incorporates quantum-mechanical effects. The quantum-mechanical effects, entering via quantum diffraction and quantum statistics, and the role of transverse perturbations in cylindrical geometry in the dynamics of this wave are studied analytically. It is found that the dynamics of ion-acoustic solitary waves (IASWs) is governed by a three-dimensional cylindrical Kadomtsev-Petviashvili equation (CKPE). The results could help in the theoretical analysis of astrophysical and laser-produced plasmas.
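
    For reference, a cylindrical Kadomtsev-Petviashvili equation of the kind obtained from reductive perturbation generically takes the form below; the coefficients A, B, and C, fixed by the plasma parameters (here including the quantum corrections), are placeholders, not the paper's actual expressions:

```latex
\frac{\partial}{\partial \xi}\!\left[
  \frac{\partial \Phi}{\partial \tau}
  + A\,\Phi\,\frac{\partial \Phi}{\partial \xi}
  + B\,\frac{\partial^{3} \Phi}{\partial \xi^{3}}
  + \frac{\Phi}{2\tau}
\right]
+ \frac{C}{\tau^{2}}\,\frac{\partial^{2} \Phi}{\partial \theta^{2}} = 0
```

    Here Φ is the first-order perturbed quantity and (ξ, θ, τ) are the stretched cylindrical coordinates; the Φ/2τ term encodes the nonplanar geometry, and the θ-derivative term carries the transverse perturbation.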

  15. Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.

    PubMed

    Okoshi, T; Oshima, K

    1976-04-01

    In ordinary holography reconstructing a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. The optimum design and information reduction techniques are also discussed.

  16. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
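
    The link between LSQR and Tikhonov minimization can be sketched as follows: SciPy's `lsqr` with a `damp` parameter solves the damped least-squares problem, which coincides with the Tikhonov-regularized solution. The small dense system here is a hypothetical stand-in for the photoacoustic forward model, not the paper's setup:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 60))            # stand-in for the system matrix
x_true = rng.normal(size=60)
b = A @ x_true + 0.01 * rng.normal(size=100)

lam = 0.5  # Tikhonov regularization parameter
# LSQR with damp=lam solves  min ||A x - b||^2 + lam^2 ||x||^2
x_lsqr = lsqr(A, b, damp=lam)[0]

# Direct Tikhonov solution for comparison: (A^T A + lam^2 I) x = A^T b
x_tikh = np.linalg.solve(A.T @ A + lam**2 * np.eye(60), A.T @ b)
```

    In practice one sweeps `lam` (or exploits the LSQR bidiagonalization, as the paper does) to locate the optimal regularization parameter.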

  17. Higher-dimensional Bianchi type-VIh cosmologies

    NASA Astrophysics Data System (ADS)

    Lorenz-Petzold, D.

    1985-09-01

    The higher-dimensional perfect fluid equations of a generalization of the (1 + 3)-dimensional Bianchi type-VIh space-time are discussed. Bianchi type-V and Bianchi type-III space-times are also included as special cases. It is shown that the Chodos-Detweiler (1980) mechanism of cosmological dimensional-reduction is possible in these cases.

  18. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  19. Three-dimensional imaging and remote sensing imaging; Proceedings of the Meeting, Los Angeles, CA, Jan. 14, 15, 1988

    NASA Astrophysics Data System (ADS)

    Robbins, Woodrow E.

    1988-01-01

    The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.

  20. One-dimensional, two-dimensional, and three-dimensional photonic crystals fabricated with interferometric techniques on ultrafine-grain silver halide emulsions

    NASA Astrophysics Data System (ADS)

    Ulibarrena, Manuel; Carretero, Luis; Acebal, Pablo; Madrigal, Roque; Blaya, Salvador; Fimia, Antonio

    2004-09-01

    Holographic techniques have been used to manufacture multiple-band one-dimensional, two-dimensional, and three-dimensional photonic crystals with different configurations by multiplexing reflection and transmission setups on a single layer of holographic material. The recording material used for storage is an ultrafine-grain silver halide emulsion with an average grain size of around 20 nm. The result is a set of photonic crystals whose one-dimensional, two-dimensional, and three-dimensional index modulation structures consist of silver halide particles embedded in the gelatin layer of the emulsion. The fabricated photonic crystals have been characterised by measuring their transmission band structures, and the measurements compared with theoretical calculations.

  1. In-vivo third-harmonic generation microscopy at 1550 nm: three-dimensional long-term time-lapse studies in living C. elegans embryos

    NASA Astrophysics Data System (ADS)

    Aviles-Espinosa, Rodrigo; Santos, Susana I. C. O.; Brodschelm, Andreas; Kaenders, Wilhelm G.; Alonso-Ortega, Cesar; Artigas, David; Loza-Alvarez, Pablo

    2011-03-01

    In-vivo microscopic long-term time-lapse studies require controlled imaging conditions to preserve sample viability; it is therefore crucial to meet specific exposure conditions, as these may limit the applicability of established techniques. In this work we demonstrate the use of third harmonic generation (THG) microscopy for long-term three-dimensional time-lapse (4D) studies in living Caenorhabditis elegans embryos, employing a 1550 nm femtosecond fiber laser. We take advantage of the fact that THG requires only the existence of an interface, i.e., a change in the refractive index or in the χ(3) nonlinear coefficient, to generate signal; therefore no markers are required. In addition, at this wavelength the emitted THG signal falls in the visible range (516 nm), enabling the use of standard collection optics and detectors operating near their maximum efficiency. This allows the incident light intensity at the sample plane to be reduced, so that the sample can be imaged for several hours. THG signal is obtained through all embryo development stages, providing different tissue/structure information. By means of control samples, we demonstrate that the expected water absorption at this wavelength does not severely compromise sample viability. This technique thus reduces the complexity of sample preparation (i.e., genetic modification) required by established linear and nonlinear fluorescence-based techniques. We demonstrate the non-invasiveness, reduced specimen interference, and strong potential of this particular wavelength for long-term 4D recordings.

  2. Distal radius osteotomy with volar locking plates based on computer simulation.

    PubMed

    Miyake, Junichi; Murase, Tsuyoshi; Moritomo, Hisao; Sugamoto, Kazuomi; Yoshikawa, Hideki

    2011-06-01

    Corrective osteotomy using dorsal plates and structural bone graft usually has been used for treating symptomatic distal radius malunions. However, the procedure is technically demanding and requires an extensive dorsal approach. Residual deformity is a relatively frequent complication of this technique. We evaluated the clinical applicability of a three-dimensional osteotomy using computer-aided design and manufacturing techniques with volar locking plates for distal radius malunions. Ten patients with metaphyseal radius malunions were treated. Corrective osteotomy was simulated with the help of three-dimensional bone surface models created using CT data. We simulated the most appropriate screw holes in the deformed radius using computer-aided design data of a locking plate. During surgery, using a custom-made surgical template, we predrilled the screw holes as simulated. After osteotomy, plate fixation using predrilled screw holes enabled automatic reduction of the distal radial fragment. Autogenous iliac cancellous bone was grafted after plate fixation. The median volar tilt, radial inclination, and ulnar variance improved from -20°, 13°, and 6 mm, respectively, before surgery to 12°, 24°, and 1 mm, respectively, after surgery. The median wrist flexion improved from 33° before surgery to 60° after surgery. The median wrist extension was 70° before surgery and 65° after surgery. All patients experienced wrist pain before surgery, which disappeared or decreased after surgery. Surgeons can operate precisely and easily using this advanced technique. It is a new treatment option for malunion of distal radius fractures.

  3. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of the inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of the model inputs. The proposed concepts efficiently guide the analyst in achieving a targeted reduction of the model output mean and variance by operating on the variances of the model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with the derived estimators, thus the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
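
    The idea of a variance ratio function estimated from a single sample set can be sketched for a hypothetical two-input model (a naive sample-reuse illustration, not the paper's unbiased estimator): shrinking the variance of one input and re-evaluating the model on rescaled copies of the same base samples shows how much the output variance drops.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)

def model(x1, x2):
    return x1**2 + x2  # hypothetical nonlinear model

def variance_ratio(c):
    """Ratio Var(Y | Var(X1) scaled by c) / Var(Y), reusing one sample set,
    so the cost does not grow with the input dimensionality."""
    y = model(np.sqrt(c) * x1, x2)
    return y.var() / model(x1, x2).var()

# Shrinking Var(X1) to 25% of its nominal value reduces Var(Y):
r = variance_ratio(0.25)
```

    For this model Var(Y) = 2c² + 1 when Var(X1) = c, so the exact ratio at c = 0.25 is 1.125/3 = 0.375; the Monte Carlo estimate lands close to that value.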

  4. Fringe pattern demodulation with a two-dimensional digital phase-locked loop algorithm.

    PubMed

    Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-09-10

    A novel technique called a two-dimensional digital phase-locked loop (DPLL) for fringe pattern demodulation is presented. This algorithm is more suitable for demodulation of fringe patterns with varying phase in two directions than the existing DPLL techniques that assume that the phase of the fringe patterns varies only in one direction. The two-dimensional DPLL technique assumes that the phase of a fringe pattern is continuous in both directions and takes advantage of the phase continuity; consequently, the algorithm has better noise performance than the existing DPLL schemes. The two-dimensional DPLL algorithm is also suitable for demodulation of fringe patterns with low sampling rates, and it outperforms the Fourier fringe analysis technique in this aspect.

  5. Econo-ESA in semantic text similarity.

    PubMed

    Rahutomo, Faisal; Aritsugi, Masayoshi

    2014-01-01

    Explicit semantic analysis (ESA) utilizes an immense Wikipedia index matrix in its interpreter part. This part of the analysis multiplies a large matrix by a term vector to produce a high-dimensional concept vector. A similarity measurement between two texts is performed between two concept vectors with numerous dimensions. The cost is expensive in both interpretation and similarity measurement steps. This paper proposes an economic scheme of ESA, named econo-ESA. We investigate two aspects of this proposal: dimensional reduction and experiments with various data. We use eight recycling test collections in semantic text similarity. The experimental results show that both the dimensional reduction and test collection characteristics can influence the results. They also show that an appropriate concept reduction of econo-ESA can decrease the cost with minor differences in the results from the original ESA.
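
    The interpreter step and the econo-ESA concept reduction can be sketched as follows (a hypothetical illustration: the random matrix stands in for the Wikipedia index matrix, and `k` is the number of retained concepts):

```python
import numpy as np

rng = np.random.default_rng(3)
n_concepts, n_terms = 1000, 50
M = rng.random((n_concepts, n_terms))  # stand-in for the Wikipedia index matrix

def concept_vector(term_vec, k=None):
    """ESA interpreter: map a term vector to a concept vector.
    If k is given (econo-ESA), keep only the k strongest concepts."""
    v = M @ term_vec
    if k is not None:
        pruned = np.zeros_like(v)
        top = np.argsort(v)[-k:]  # indices of the k largest entries
        pruned[top] = v[top]
        v = pruned
    return v

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

t1, t2 = rng.random(n_terms), rng.random(n_terms)
sim_full = cosine(concept_vector(t1), concept_vector(t2))
sim_econo = cosine(concept_vector(t1, k=100), concept_vector(t2, k=100))
```

    Both interpretation and similarity measurement then operate on vectors with only `k` nonzero entries instead of the full concept dimensionality, which is where the cost saving comes from.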

  6. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.

  7. Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping

    PubMed Central

    Yarnykh, Vasily L.

    2015-01-01

    Purpose Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of a relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing a minimal possible number of source images for scan time reduction. Methods The described technique is based on replacement of an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with isotropic 1.25×1.25×1.25 mm3 voxel size and scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results Mean MPF values in segmented white and gray matter appeared in close agreement with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details including gray matter structures with high iron content. Conclusions Synthetic reference method improves resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097

  8. Three-dimensional nanostructure determination from a large diffraction data set recorded using scanning electron nanodiffraction.

    PubMed

    Meng, Yifei; Zuo, Jian-Min

    2016-09-01

    A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Thus, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.

  9. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
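
    Component-wise (stage-wise) boosting of the kind contrasted with stepwise regression above can be sketched in a simple least-squares setting (the paper uses a likelihood-based variant for time-to-event endpoints; this is a hypothetical Gaussian-response illustration): at each step, only the single coefficient that best reduces the residual is updated, by a shrunken amount.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]          # sparse truth: 3 relevant covariates
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta = np.zeros(p)
nu = 0.1                                   # shrinkage (learning rate)
for _ in range(500):
    r = y - X @ beta
    scores = X.T @ r                       # residual correlation per covariate
    j = np.argmax(np.abs(scores))          # best single component this step
    beta[j] += nu * scores[j] / (X[:, j] @ X[:, j])

selected = np.flatnonzero(np.abs(beta) > 0.1)
```

    Because each step touches one covariate with a shrunken update, the procedure performs implicit variable selection and remains feasible when p far exceeds n.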

  10. Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?

    NASA Astrophysics Data System (ADS)

    Troisi, Antonio

    2017-03-01

    Determining the cosmological field equations is still very much debated and led to a wide discussion around different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein theory like higher-order gravity theories and higher-dimensional ones. Both of these two different approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, the possibility is discussed to develop a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of cosmological four-dimensional matter-energy components. We describe the basic concepts of the model, the complete field equations formalism and the 5-D to 4-D reduction procedure. Five-dimensional f( R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related with higher derivative and higher-dimensional counter-terms. By considering the gravity model with f(R)=f_0R^n the possibility is investigated to obtain five-dimensional power law solutions. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined in correspondence to simple cases of such higher-dimensional solutions.

  11. Two- and three-dimensional accuracy of dental impression materials: effects of storage time and moisture contamination.

    PubMed

    Chandran, Deepa T; Jagger, Daryll C; Jagger, Robert G; Barbour, Michele E

    2010-01-01

    Dental impression materials are used to create an inverse replica of the dental hard and soft tissues, and are used in processes such as the fabrication of crowns and bridges. The accuracy and dimensional stability of impression materials are of paramount importance to the accuracy of fit of the resultant prosthesis. Conventional methods for assessing the dimensional stability of impression materials are two-dimensional (2D), and assess shrinkage or expansion between selected fixed points on the impression. In this study, dimensional changes in four impression materials were assessed using an established 2D and an experimental three-dimensional (3D) technique. The former involved measurement of the distance between reference points on the impression; the latter a contact scanning method for producing a computer map of the impression surface showing localised expansion, contraction and warpage. Dimensional changes were assessed as a function of storage times and moisture contamination comparable to that found in clinical situations. It was evident that dimensional changes observed using the 3D technique were not always apparent using the 2D technique, and that the former offers certain advantages in terms of assessing dimensional accuracy and predictability of impression methods. There are, however, drawbacks associated with 3D techniques such as the more time-consuming nature of the data acquisition and difficulty in statistically analysing the data.

  12. Dimensional Accuracy of Hydrophilic and Hydrophobic VPS Impression Materials Using Different Impression Techniques - An Invitro Study

    PubMed Central

    Pilla, Ajai; Pathipaka, Suman

    2016-01-01

    Introduction The dimensional stability of the impression material can influence the accuracy of the final restoration. Vinyl Polysiloxane (VPS) impression materials are the most frequently used impression materials in fixed prosthodontics. As VPS is hydrophobic when poured with gypsum products, manufacturers added intrinsic surfactants and marketed the materials as hydrophilic VPS. These hydrophilic VPS have shown increased wettability with gypsum slurries. VPS are available in different viscosities, ranging from very low to very high, for use with different impression techniques. Aim To compare the dimensional accuracy of hydrophilic VPS and hydrophobic VPS using monophase, one-step and two-step putty wash impression techniques. Materials and Methods To test the dimensional accuracy of the impression materials, a stainless steel die was fabricated as prescribed by ADA specification no. 19 for elastomeric impression materials. A total of 60 impressions were made. The materials were divided into two groups, Group 1 hydrophilic VPS (Aquasil) and Group 2 hydrophobic VPS (Variotime). These were further divided into three subgroups A, B, C for the monophase, one-step and two-step putty wash techniques, with 10 samples in each subgroup. The dimensional accuracy of the impressions was evaluated after 24 hours using a vertical profile projector with lens magnification range of 20X-125X illumination. The study was analysed through one-way ANOVA, post-hoc Tukey HSD test and unpaired t-test for mean comparison between groups. Results Results showed that the three different impression techniques (monophase, 1-step, 2-step putty wash) did cause a significant difference in dimensional accuracy between the hydrophilic VPS and hydrophobic VPS impression materials. One-way ANOVA showed that the mean dimensional change and SD for hydrophilic VPS varied between 0.56% and 0.16%, which were low, suggesting that hydrophilic VPS was satisfactory with all three impression techniques. However, the mean dimensional change and SD for hydrophobic VPS relative to the standard steel die were much higher with the monophase technique, with smaller increases for the 1-step and 2-step techniques (p<0.05). The unpaired t-test showed that hydrophilic VPS was satisfactory compared to hydrophobic VPS for the 1-step and 2-step impression techniques. Conclusion Within the limitations of this study, it can be concluded that hydrophilic vinyl polysiloxane was more dimensionally accurate than hydrophobic vinyl polysiloxane using the monophase, one-step and two-step putty wash impression techniques under moist conditions. PMID:27042587

  13. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. Measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm which extends LLE by exploiting fault class label information. The approach first extracts intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space via the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, in the reduced feature space, pattern classification and fault diagnosis are carried out easily and rapidly by a classifier. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms traditional approaches. PMID:26153771
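
    A minimal unsupervised LLE sketch using scikit-learn is shown below (the swiss roll stands in for the high-dimensional vibration-feature vectors; the paper's S-LLE additionally exploits fault class labels, which plain LLE does not):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Data that looks high-dimensional but lies on a 2-D manifold (the swiss
# roll stands in for EMD/time/frequency-domain feature vectors).
X, t = make_swiss_roll(n_samples=500, random_state=0)

# Plain (unsupervised) LLE: reconstruct each point from its neighbors,
# then find low-dimensional coordinates preserving those weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Z = lle.fit_transform(X)
```

    A downstream classifier is then trained on the columns of `Z` rather than on the raw high-dimensional features.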

  14. Post-extraction mesio-distal gap reduction assessment by confocal laser scanning microscopy - a clinical 3-month follow-up study.

    PubMed

    García-Herraiz, Ariadna; Silvestre, Francisco Javier; Leiva-García, Rafael; Crespo-Abril, Fortunato; García-Antón, José

    2017-05-01

    The aim of this 3-month follow-up study was to quantify the reduction in the mesio-distal gap dimension (MDGD) that occurs after tooth extraction, through image analysis of three-dimensional images obtained with the confocal laser scanning microscopy (CLSM) technique. Following tooth extraction, impressions were obtained from 79 patients at 1 month and 72 patients at 3 months after extraction. Cast models were processed by CLSM, and MDGD changes between time points were measured. The mean mesio-distal gap reduction was 343.4 μm 1 month after tooth extraction and 672.3 μm 3 months after tooth extraction. The daily mean gap reduction rate during the first term (between baseline and the 1-month post-extraction measurement) was 10.3 μm/day, and during the second term (between 1 and 3 months) it was 5.4 μm/day. The mesio-distal gap reduction is greatest during the first month following the extraction and continues over time, but to a lesser extent. When inter-dental contacts were absent, the mesio-distal gap reduction was lower. When a molar tooth was extracted, or the tooth distal to the edentulous space did not occlude with an antagonist, the mesio-distal gap reduction was larger. Consideration of mesio-distal gap dimension changes can help improve dental treatment planning. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Wake Management Strategies for Reduction of Turbomachinery Fan Noise

    NASA Technical Reports Server (NTRS)

    Waitz, Ian A.

    1998-01-01

    The primary objective of our work was to evaluate and test several wake management schemes for the reduction of turbomachinery fan noise. Throughout the course of this work we relied on several tools. These include 1) Two-dimensional steady boundary-layer and wake analyses using MISES (a thin-shear layer Navier-Stokes code), 2) Two-dimensional unsteady wake-stator interaction simulations using UNSFLO, 3) Three-dimensional, steady Navier-Stokes rotor simulations using NEWT, 4) Internal blade passage design using quasi-one-dimensional passage flow models developed at MIT, 5) Acoustic modeling using LINSUB, 6) Acoustic modeling using VO72, 7) Experiments in a low-speed cascade wind-tunnel, and 8) ADP fan rig tests in the MIT Blowdown Compressor.

  16. Process-Structure Linkages Using a Data Science Approach: Application to Simulated Additive Manufacturing Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi

    A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., a reduced-order model) for microstructure evolution problems where the final microstructure depends on (simulation or experimental) processing parameters. The workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.
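
    The dimensionality-reduction and linkage-extraction steps of such a workflow can be sketched with PCA via SVD followed by an ordinary least-squares fit (synthetic data; the linear response matrix `W` is a hypothetical stand-in for kMC-simulated microstructure statistics):

```python
import numpy as np

rng = np.random.default_rng(5)
n_runs, n_features = 40, 300  # e.g. 300 microstructure statistics per simulation
process = rng.uniform(0, 1, size=(n_runs, 2))  # two processing parameters
# Hypothetical: microstructure statistics respond linearly to the process
W = rng.normal(size=(2, n_features))
stats = process @ W + 0.01 * rng.normal(size=(n_runs, n_features))

# Dimensionality reduction: PCA via SVD of the centered statistics
mean = stats.mean(axis=0)
U, S, Vt = np.linalg.svd(stats - mean, full_matrices=False)
scores = U[:, :2] * S[:2]  # low-dimensional microstructure representation

# Process-structure linkage: linear map from process parameters to PC scores
design = np.c_[process, np.ones(n_runs)]           # add an intercept column
A, *_ = np.linalg.lstsq(design, scores, rcond=None)
pred = design @ A                                   # reduced-order prediction
```

    New processing conditions are then mapped to predicted PC scores, and back through the PCA basis to predicted microstructure statistics.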

  17. Kadomtsev-Petviashvili solitons propagation in a plasma system with superthermal and weakly relativistic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafeez-Ur-Rehman; Mahmood, S.; Department of Physics and Applied Mathematics, PIEAS, Nilore, 44000 Islamabad

    2011-12-15

    Two-dimensional (2D) solitons are studied in a plasma system comprising relativistically streaming ions, kappa-distributed electrons, and positrons. The Kadomtsev-Petviashvili (KP) equation is derived through the reductive perturbation technique, and its analytical solution is examined numerically and graphically. It is found that the kappa parameters of the electrons and positrons, as well as the relativistic streaming factor of the ions, have a strong influence on both the structural and the propagation characteristics of two-dimensional solitons in the considered plasma system. Our results may be helpful in understanding soliton propagation in astrophysical and laboratory plasmas, specifically the interaction of pulsar relativistic wind with supernova ejecta and the transfer of energy to plasma by the intense electric field of laser beams producing highly energetic superthermal and relativistic particles [L. Arons, Astrophys. Space Sci. Lib. 357, 373 (2009); P. Blasi and E. Amato, Astrophys. Space Sci. Proc. 2011, 623; and A. Shah and R. Saeed, Plasma Phys. Controlled Fusion 53, 095006 (2011)].
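    For reference, the KP equation obtained from a reductive perturbation expansion generically takes the following form; the coefficients A, B, and C here are placeholders, which in the paper depend on the kappa indices and the relativistic streaming factor:

```latex
\partial_x\!\left(\partial_t \phi + A\,\phi\,\partial_x \phi + B\,\partial_x^3 \phi\right) + C\,\partial_y^2 \phi = 0
```

    The x-derivative acting on the Korteweg-de Vries-type bracket is what admits genuinely two-dimensional soliton solutions.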

  18. Frequency and phase synchronization in large groups: Low dimensional description of synchronized clapping, firefly flashing, and cricket chirping

    NASA Astrophysics Data System (ADS)

    Ott, Edward; Antonsen, Thomas M.

    2017-05-01

    A common observation is that large groups of oscillatory biological units often have the ability to synchronize. A paradigmatic model of such behavior is provided by the Kuramoto model, which achieves synchronization through coupling of the phase dynamics of individual oscillators, while each oscillator maintains a different constant inherent natural frequency. Here we consider the biologically likely possibility that the oscillatory units may be capable of enhancing their synchronization ability by adaptive frequency dynamics. We propose a simple augmentation of the Kuramoto model which does this. We also show that, by the use of a previously developed technique [Ott and Antonsen, Chaos 18, 037113 (2008)], it is possible to reduce the resulting dynamics to a lower dimensional system for the macroscopic evolution of the oscillator ensemble. By employing this reduction, we investigate the dynamics of our system, finding a characteristic hysteretic behavior and enhancement of the quality of the achieved synchronization.
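    The baseline mechanism the authors augment can be sketched numerically. The following is a minimal numpy simulation of the standard (non-adaptive) Kuramoto model in mean-field form; N, K, and the unit-variance Gaussian frequency distribution are illustrative choices, not taken from the paper:

```python
import numpy as np

def kuramoto_r(K, N=500, steps=2000, dt=0.01, seed=0):
    """Euler-integrate the mean-field Kuramoto model and return the final
    order parameter r = |<exp(i*theta)>| (r near 1 means synchrony)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)          # fixed natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

r_weak = kuramoto_r(K=0.5)    # below the critical coupling: incoherent
r_strong = kuramoto_r(K=4.0)  # well above it: strongly synchronized
print(r_weak, r_strong)
```

    Sweeping K through the critical coupling (about 1.6 for this frequency distribution) reproduces the incoherence-to-synchrony transition; the Ott-Antonsen ansatz reduces exactly this macroscopic dynamics of r to a low-dimensional ODE.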

  19. A Detector Scenario for a Muon Cooling Demonstration Experiment

    NASA Astrophysics Data System (ADS)

    McDonald, Kirk T.; Lu, Changguo; Prebys, Eric J.

    1998-04-01

    As a verification of the concept of ionization cooling of a muon beam, the Muon Collider Collaboration is planning an experiment to cool the 6-dimensional normalized emittance by a factor of two. We have designed a detector system to measure the 6-dimensional emittance before and after the cooling apparatus. To avoid the cost associated with preparation of a muon beam bunched at 800 MHz, the nominal frequency of the RF in the muon cooler, we propose to use an unbunched muon beam. Muons will be measured in the detector individually, and a subset chosen corresponding to an ideal input bunch. The muons are remeasured after the cooling apparatus and the output bunch emittance calculated to show the expected reduction in phase-space volume. The technique of tracing individual muons will reproduce all effects encountered by a bunch except for space-charge.

  20. Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.

    PubMed

    Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz

    2014-04-21

    We introduce a fast, simple, adaptive, and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Upon frame subtraction, an input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on empirical mode decomposition for object spatial-frequency selection (noise reduction and bias-term removal). The second stage consists of calculating a high-contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with results calculated for the same input data using structured-illumination microscopy (SIM) and HiLo microscopy methods. The input data were collected from highly scattering tissue samples studied in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.

  1. Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics.

    PubMed

    Yang, Jian; Zhang, David; Yang, Jing-Yu; Niu, Ben

    2007-04-01

    This paper develops an unsupervised discriminant projection (UDP) technique for dimensionality reduction of high-dimensional data in small sample size cases. UDP can be seen as a linear approximation of a multimanifolds-based learning framework which takes into account both the local and nonlocal quantities. UDP characterizes the local scatter as well as the nonlocal scatter, seeking to find a projection that simultaneously maximizes the nonlocal scatter and minimizes the local scatter. This characteristic makes UDP more intuitive and more powerful than the most up-to-date method, Locality Preserving Projection (LPP), which considers only the local scatter for clustering or classification tasks. The proposed method is applied to face and palm biometrics and is examined using the Yale, FERET, and AR face image databases and the PolyU palmprint database. The experimental results show that UDP consistently outperforms LPP and PCA and outperforms LDA when the training sample size per class is small. This demonstrates that UDP is a good choice for real-world biometrics applications.
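    The local/nonlocal scatter criterion can be sketched directly in code. The following is a hedged, brute-force illustration of the UDP idea (k-NN adjacency graph, local and nonlocal scatter matrices, then a generalized eigenproblem that maximizes nonlocal relative to local scatter); it is not the authors' implementation, and the regularization constant and toy data are made up:

```python
import numpy as np

def udp_projection(X, k=5, d=2):
    """Brute-force UDP sketch: k-NN adjacency H, local scatter SL over
    neighbours, nonlocal scatter SN over non-neighbours, then the top-d
    eigenvectors of pinv(SL) @ SN (maximize nonlocal / minimize local)."""
    n, p = X.shape
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]   # k nearest neighbours (no self)
    H = np.zeros((n, n))
    for i in range(n):
        H[i, idx[i]] = 1.0
    H = np.maximum(H, H.T)                     # symmetrize the graph
    diff = X[:, None, :] - X[None, :, :]
    outer = np.einsum('ijk,ijl->ijkl', diff, diff)
    SL = (H[..., None, None] * outer).sum((0, 1)) / (2.0 * n * n)
    SN = ((1.0 - H)[..., None, None] * outer).sum((0, 1)) / (2.0 * n * n)
    evals, evecs = np.linalg.eig(np.linalg.pinv(SL + 1e-8 * np.eye(p)) @ SN)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:d]]

rng = np.random.default_rng(1)
# two tight clusters in 5-D, separated along the first coordinate
A = rng.normal(0.0, 0.3, (40, 5)) + np.array([3.0, 0, 0, 0, 0])
B = rng.normal(0.0, 0.3, (40, 5)) - np.array([3.0, 0, 0, 0, 0])
W = udp_projection(np.vstack([A, B]))
proj = np.vstack([A, B]) @ W
print(proj.shape)   # (80, 2); the clusters separate along the first axis
```

    Because between-cluster pairs are non-neighbours, the first UDP direction aligns with the cluster separation even though no labels are used.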

  2. Process-Structure Linkages Using a Data Science Approach: Application to Simulated Additive Manufacturing Data

    DOE PAGES

    Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi; ...

    2017-03-13

    A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order models) for microstructure evolution problems in which the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.

  3. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
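    The basic mechanism behind the reported iteration savings can be illustrated on a model problem. The sketch below runs a two-grid V-cycle for the 1D Poisson equation (weighted-Jacobi smoothing, injection restriction, linear-interpolation prolongation, exact coarse solve); it is a textbook illustration under invented problem sizes, not the Proteus implementation:

```python
import numpy as np

# Two-grid V-cycle for -u'' = f on [0, 1], u(0) = u(1) = 0.
n, h = 65, 1.0 / 64
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)

def residual(u):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, sweeps, omega=2.0 / 3.0):
    for _ in range(sweeps):                  # weighted Jacobi relaxation
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]) - u[1:-1])
    return u

def two_grid_cycle(u):
    u = smooth(u, 3)                         # pre-smooth
    r = residual(u)[::2]                     # restrict residual (injection)
    m = r.size                               # coarse grid: spacing 2h
    A = (np.diag(2 * np.ones(m - 2))
         - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / (2 * h) ** 2
    e = np.zeros(m)
    e[1:-1] = np.linalg.solve(A, r[1:-1])    # exact coarse-grid correction
    u += np.interp(x, x[::2], e)             # prolongate by linear interpolation
    return smooth(u, 3)                      # post-smooth

u = np.zeros(n)
r0 = np.linalg.norm(residual(u))
for _ in range(5):
    u = two_grid_cycle(u)
r5 = np.linalg.norm(residual(u))
print(r0, r5)   # the residual drops sharply over a handful of cycles
```

    Smoothing alone damps only high-frequency error; the coarse-grid correction removes the smooth modes, which is why the combination converges in far fewer iterations, mirroring the savings reported above.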

  4. Stress wave techniques for determining quality of dimensional lumber from switch ties

    Treesearch

    K. C. Schad; D. E. Kretschmann; K. A. McDonald; R. J. Ross; D. W. Green

    1995-01-01

    Researchers at the Forest Products Laboratory, USDA Forest Service, have been studying nondestructive techniques for evaluating the strength of wood. This report describes the results of a pilot study on using these techniques to determine the quality of large dimensional lumber cut from switch ties. First, pulse echo and dynamic (transverse vibration) techniques were...

  5. Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience.

    PubMed

    Paninski, L; Cunningham, J P

    2018-06-01

    Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie simply in collecting data from large neural populations, but in understanding these data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate these questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuronal precision. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Three-axis digital holographic microscopy for high speed volumetric imaging.

    PubMed

    Saglimbeni, F; Bianchi, S; Lepore, A; Di Leonardo, R

    2014-06-02

    Digital holographic microscopy numerically retrieves the three-dimensional information encoded in a single 2D snapshot of the coherent superposition of a reference and a scattered beam. Since no mechanical scans are involved, holographic techniques offer superior performance in terms of achievable frame rates. Unfortunately, numerical reconstruction of the scattered field by back-propagation leads to poor axial resolution. Here we show that overlapping the three numerical reconstructions obtained from tilted red, green, and blue beams greatly improves the axial resolution and sectioning capability of holographic microscopy. A strong reduction in the coherent background noise is also observed when combining the volumetric reconstructions of the light fields at the three different wavelengths. We discuss the performance of our technique with two test objects: an array of four glass beads stacked along the optical axis and a freely diffusing rod-shaped E. coli bacterium.

  7. Three-dimensional nanostructure determination from a large diffraction data set recorded using scanning electron nanodiffraction

    DOE PAGES

    Meng, Yifei; Zuo, Jian -Min

    2016-07-04

    A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Furthermore, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.

  8. Biodynamic profiling of three-dimensional tissue growth techniques

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Merrill, Dan; Turek, John; Nolte, David

    2016-03-01

    Three-dimensional tissue culture presents a more biologically relevant environment in which to perform drug development than conventional two-dimensional cell culture. However, obtaining high-content information from inside three dimensional tissue has presented an obstacle to rapid adoption of 3D tissue culture for pharmaceutical applications. Biodynamic imaging is a high-content three-dimensional optical imaging technology based on low-coherence interferometry and digital holography that uses intracellular dynamics as high-content image contrast. In this paper, we use biodynamic imaging to compare pharmaceutical responses to Taxol of three-dimensional multicellular spheroids grown by three different growth techniques: rotating bioreactor, hanging-drop and plate-grown spheroids. The three growth techniques have systematic variations among tissue cohesiveness and intracellular activity and consequently display different pharmacodynamics under identical drug dose conditions. The in vitro tissue cultures are also compared to ex vivo living biopsies. These results demonstrate that three-dimensional tissue cultures are not equivalent, and that drug-response studies must take into account the growth method.

  9. Simplifying the representation of complex free-energy landscapes using sketch-map

    PubMed Central

    Ceriotti, Michele; Tribello, Gareth A.; Parrinello, Michele

    2011-01-01

    A new scheme, sketch-map, for obtaining a low-dimensional representation of the region of phase space explored during an enhanced dynamics simulation is proposed. We show evidence, from an examination of the distribution of pairwise distances between frames, that some features of the free-energy surface are inherently high-dimensional. This makes dimensionality reduction problematic because the data do not satisfy the assumptions made in conventional manifold learning algorithms. We therefore propose that when dimensionality reduction is performed on trajectory data one should think of the resultant embedding as a quickly sketched set of directions rather than a road map. In other words, the embedding tells one about the connectivity between states but does not provide the vectors that correspond to the slow degrees of freedom. This realization informs the development of sketch-map, which endeavors to reproduce the proximity information from the high-dimensional description in a space of lower dimensionality even when a faithful embedding is not possible. PMID:21730167
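    The baseline that sketch-map generalizes, an embedding built purely from pairwise proximity information, can be sketched with classical multidimensional scaling. In this toy example (invented data, not from the paper) the points are genuinely two-dimensional, so a faithful embedding exists; the paper's point is that trajectory data typically violate exactly this assumption:

```python
import numpy as np

def classical_mds(D, d=2):
    """Classical MDS: double-center the squared distances, eigendecompose,
    and scale the top-d eigenvectors into coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                     # Gram matrix from distances
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:d]
    return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))

rng = np.random.default_rng(0)
X = np.zeros((30, 10))
X[:, :2] = rng.normal(size=(30, 2))   # intrinsically 2-D data sitting in 10-D
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D)
D_emb = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.abs(D_emb - D).max())        # ~0: here a faithful embedding exists
```

    When the intrinsic dimension exceeds the embedding dimension, no choice of coordinates makes this distance mismatch vanish, which is the failure mode sketch-map is designed to tolerate.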

  10. Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhbardeh, Alireza; Jacobs, Michael A.; Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205

    2012-04-15

    Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B{sub 1} inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with the linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and by evaluating the congruence of the segmented lesions.
Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of the different radiological parameters.
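    One of the NLDR methods compared above, diffusion maps, can be sketched in a few lines of numpy. This toy example (two concentric rings, with a hand-picked kernel width) shows the kind of structure a nonlinear method separates while a linear projection such as PCA cannot; it is unrelated to the clinical pipeline itself:

```python
import numpy as np

def diffusion_map(X, eps, d=1):
    """Toy diffusion map: Gaussian kernel -> Markov matrix -> slow eigenvectors."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-D2 / eps)                                # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)                      # evals[order[0]] is the trivial 1
    return evecs.real[:, order[1:d + 1]]                 # leading nontrivial coordinates

# Two concentric rings: not linearly separable, so PCA cannot split them,
# but the first diffusion coordinate takes opposite signs on the two rings.
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
ring = np.c_[np.cos(t), np.sin(t)]
X = np.vstack([ring, 3.0 * ring])          # inner radius 1, outer radius 3
phi = diffusion_map(X, eps=0.5)[:, 0]
print(phi[:40].mean(), phi[40:].mean())
```

    The rings are far apart relative to the kernel width, so the Markov matrix is nearly block-diagonal and its slowest nontrivial eigenvector acts as a soft cluster indicator.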

  11. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    NASA Astrophysics Data System (ADS)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  12. Restoration of dimensional reduction in the random-field Ising model at five dimensions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
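    The Rushbrooke equality mentioned in the closing sentence, α + 2β + γ = 2, can be checked numerically against standard 3D pure-Ising exponent values (quoted to roughly three decimals from the literature, not from this paper); under dimensional reduction, the 5D random-field estimates should satisfy the same relation:

```python
# Approximate accepted 3D pure-Ising critical exponents (literature values).
alpha, beta, gamma = 0.110, 0.3265, 1.237
print(alpha + 2.0 * beta + gamma)   # the Rushbrooke combination, ~2.000
```
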

  13. Dimensional reduction for a SIR type model

    NASA Astrophysics Data System (ADS)

    Cahyono, Edi; Soeharyadi, Yudi; Mukhsar

    2018-03-01

    Epidemic phenomena are often modeled in the form of dynamical systems. Such models have also been used to describe the spread of rumors, the spread of extreme ideology, and the dissemination of knowledge. Among the simplest is the SIR (susceptible, infected, and recovered) model, which consists of three compartments, and hence three variables. The variables are functions of time representing the sizes of the subpopulations, namely the susceptible, the infected, and the recovered. The sum of the three is assumed to be constant; hence, the model is actually two-dimensional, sitting in a three-dimensional ambient space. This paper deals with the reduction of a SIR type model to two variables in a two-dimensional ambient space in order to better understand its geometry and dynamics. The dynamics is studied, and the phase portrait is presented. The two-dimensional model preserves the equilibria and their stability. The model has been applied to knowledge dissemination, which has been of interest to knowledge management.
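    A minimal sketch of the reduction, with illustrative parameter values (β, γ, and N are not taken from the paper): since S + I + R = N is conserved, R can be eliminated and the dynamics integrated in the (S, I) plane alone:

```python
import numpy as np

def sir_reduced(S0, I0, N=1000.0, beta=0.3, gamma=0.1, dt=0.1, steps=2000):
    """Euler-integrate the reduced (S, I) system; R never appears as a state."""
    S, I = S0, I0
    traj = []
    for _ in range(steps):
        dS = -beta * S * I / N               # new infections leave S
        dI = beta * S * I / N - gamma * I    # infections in, recoveries out
        S, I = S + dt * dS, I + dt * dI
        traj.append((S, I))
    return np.array(traj)

traj = sir_reduced(S0=990.0, I0=10.0)
S, I = traj[-1]
R = 1000.0 - S - I    # R recovered algebraically from the conservation law
print(S, I, R)
```

    With these illustrative rates the basic reproduction number is β/γ = 3, so the trajectory shows the familiar epidemic arc in the (S, I) phase plane before I decays to zero.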

  14. Technique for comprehensive head and neck irradiation using 3-dimensional conformal proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, Mark W., E-mail: markmcdonaldmd@gmail.com; Indiana University Health Proton Therapy Center, Bloomington, IN; Walter, Alexander S.

    2015-01-01

    Owing to the technical and logistical complexities of matching photon and proton treatment modalities, we developed and implemented a technique of comprehensive head and neck radiation using 3-dimensional (3D) conformal proton therapy. A monoisocentric technique was used with a 30-cm snout. Cervical lymphatics were treated with 3 fields: a posterior-anterior field with a midline block and a right and a left posterior oblique field. The matchline of the 3 cervical nodal fields with the primary tumor site fields was staggered by 0.5 cm. Comparative intensity-modulated photon plans were later developed for 12 previously treated patients to provide equivalent target coverage, while matching or improving on the proton plans' sparing of organs at risk (OARs). Dosimetry to OARs was evaluated and compared by treatment modality. Comprehensive head and neck irradiation using proton therapy yielded treatment plans with significant dose avoidance of the oral cavity and midline neck structures. When compared with the generated intensity-modulated radiation therapy (IMRT) plans, the proton treatment plans yielded statistically significant reductions in the mean and integral radiation dose to the oral cavity, larynx, esophagus, and the maximally spared parotid gland. There was no significant difference in mean dose to the lesser-spared parotid gland by treatment modality or in mean or integral dose to the spared submandibular glands. A technique for cervical nodal irradiation using 3D conformal proton therapy with uniform scanning was developed and clinically implemented. Use of proton therapy for cervical nodal irradiation resulted in a large volume of dose avoidance to the oral cavity and low dose exposure to midline structures of the larynx and the esophagus, with lower mean and integral dose to assessed OARs when compared with competing IMRT plans.

  15. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.
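    The idea of attaching a statistical uncertainty to a robustness estimate can be illustrated with a toy Monte Carlo sketch. The safety predicate, sample count, and uniform sampling here are stand-ins; T-TSAFE's actual analysis uses designed computer experiments over simulation runs, not plain Monte Carlo:

```python
import math
import random

# The "safe" predicate stands in for one run of the simulation; the true
# safe region here is the unit disc, so the robustness over the sampled
# square is pi/4.
random.seed(0)

def safe(x, y):
    return x * x + y * y < 1.0

n = 10000
hits = sum(safe(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
           for _ in range(n))
p_hat = hits / n                               # robustness estimate
se = math.sqrt(p_hat * (1.0 - p_hat) / n)      # its standard error
print(p_hat, 1.96 * se)                        # estimate and 95% half-width
```

    The half-width shrinks as 1/sqrt(n), which is exactly why sequential designs that place samples near the suspected safety boundary are more efficient than uniform sampling.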

  16. ND2AV: N-dimensional data analysis and visualization analysis for the National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer -Timo; Maljovec, Dan; Saha, Avishek

    Here, one of the biggest challenges in high-energy physics is to analyze a complex mix of experimental and simulation data to gain new insights into the underlying physics. Currently, this analysis relies primarily on the intuition of trained experts often using nothing more sophisticated than default scatter plots. Many advanced analysis techniques are not easily accessible to scientists and not flexible enough to explore the potentially interesting hypotheses in an intuitive manner. Furthermore, results from individual techniques are often difficult to integrate, leading to a confusing patchwork of analysis snippets too cumbersome for data exploration. This paper presents a case study on how a combination of techniques from statistics, machine learning, topology, and visualization can have a significant impact in the field of inertial confinement fusion. We present the ND2AV: N-dimensional data analysis and visualization framework, a user-friendly tool aimed at exploiting the intuition and current workflow of the target users. The system integrates traditional analysis approaches such as dimension reduction and clustering with state-of-the-art techniques such as neighborhood graphs and topological analysis, and custom capabilities such as defining combined metrics on the fly. All components are linked into an interactive environment that enables an intuitive exploration of a wide variety of hypotheses while relating the results to concepts familiar to the users, such as scatter plots. ND2AV uses a modular design providing easy extensibility and customization for different applications. ND2AV is being actively used in the National Ignition Campaign and has already led to a number of unexpected discoveries.

  17. ND2AV: N-dimensional data analysis and visualization analysis for the National Ignition Campaign

    DOE PAGES

    Bremer, Peer -Timo; Maljovec, Dan; Saha, Avishek; ...

    2015-07-01

    Here, one of the biggest challenges in high-energy physics is to analyze a complex mix of experimental and simulation data to gain new insights into the underlying physics. Currently, this analysis relies primarily on the intuition of trained experts often using nothing more sophisticated than default scatter plots. Many advanced analysis techniques are not easily accessible to scientists and not flexible enough to explore the potentially interesting hypotheses in an intuitive manner. Furthermore, results from individual techniques are often difficult to integrate, leading to a confusing patchwork of analysis snippets too cumbersome for data exploration. This paper presents a case study on how a combination of techniques from statistics, machine learning, topology, and visualization can have a significant impact in the field of inertial confinement fusion. We present the ND2AV: N-dimensional data analysis and visualization framework, a user-friendly tool aimed at exploiting the intuition and current workflow of the target users. The system integrates traditional analysis approaches such as dimension reduction and clustering with state-of-the-art techniques such as neighborhood graphs and topological analysis, and custom capabilities such as defining combined metrics on the fly. All components are linked into an interactive environment that enables an intuitive exploration of a wide variety of hypotheses while relating the results to concepts familiar to the users, such as scatter plots. ND2AV uses a modular design providing easy extensibility and customization for different applications. ND2AV is being actively used in the National Ignition Campaign and has already led to a number of unexpected discoveries.

  18. Multifunctional, three-dimensional tomography for analysis of electrohydrodynamic jetting

    NASA Astrophysics Data System (ADS)

    Nguyen, Xuan Hung; Gim, Yeonghyeon; Ko, Han Seo

    2015-05-01

    A three-dimensional optical tomography technique was developed to reconstruct three-dimensional objects using a set of two-dimensional shadowgraphic images and normal gray images. From three high-speed cameras, which were positioned at an offset angle of 45° between each other, number, size, and location of electrohydrodynamic jets with respect to the nozzle position were analyzed using shadowgraphic tomography employing multiplicative algebraic reconstruction technique (MART). Additionally, a flow field inside a cone-shaped liquid (Taylor cone) induced under an electric field was observed using a simultaneous multiplicative algebraic reconstruction technique (SMART), a tomographic method for reconstructing light intensities of particles, combined with three-dimensional cross-correlation. Various velocity fields of circulating flows inside the cone-shaped liquid caused by various physico-chemical properties of liquid were also investigated.
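    The multiplicative update at the heart of MART can be sketched in a few lines. This is a toy illustration only, not the authors' code: the two-pixel "object," hand-built ray weight matrix, and relaxation factor mu are all assumptions.

```python
import numpy as np

# Multiplicative algebraic reconstruction technique (MART), toy version.
# Each ray i multiplies every pixel it crosses by the ratio of the measured
# projection p[i] to the current reprojection, raised to mu * weight.
W = np.array([[1.0, 1.0],    # ray 1 crosses both pixels
              [1.0, 0.0]])   # ray 2 crosses pixel 0 only
f_true = np.array([2.0, 3.0])
p = W @ f_true               # "measured" projections

f = np.ones(2)               # strictly positive initial guess (required by MART)
mu = 1.0                     # relaxation factor
for _ in range(200):
    for i in range(W.shape[0]):
        ratio = p[i] / (W[i] @ f)
        f *= ratio ** (mu * W[i])   # entry-wise multiplicative update

# f now approximates f_true = [2, 3]
```

    In the paper's setting the rays come from three cameras at 45° offsets and the unknowns are voxel intensities, but the update rule has this same form.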

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Yifei; Zuo, Jian-Min

    A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Furthermore, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.

  20. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    PubMed Central

    Zenil, Hector; Kiani, Narsis A.; Ball, Gordon; Gomez-Cabrero, David

    2016-01-01

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges, such as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variable selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698038

  1. Periodic orbit analysis of a system with continuous symmetry—A tutorial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budanur, Nazmi Burak, E-mail: budanur3@gatech.edu; Cvitanović, Predrag; Borrero-Echeverry, Daniel

    2015-07-15

    Dynamical systems with translational or rotational symmetry arise frequently in studies of spatially extended physical systems, such as Navier-Stokes flows on periodic domains. In these cases, it is natural to express the state of the fluid in terms of a Fourier series truncated to a finite number of modes. Here, we study a 4-dimensional model with chaotic dynamics and SO(2) symmetry similar to those that appear in fluid dynamics problems. A crucial step in the analysis of such a system is symmetry reduction. We use the model to illustrate different symmetry-reduction techniques. The system's relative equilibria are conveniently determined by rewriting the dynamics in terms of a symmetry-invariant polynomial basis. However, for the analysis of its chaotic dynamics, the “method of slices,” which is applicable to very high-dimensional problems, is preferable. We show that a Poincaré section taken on the “slice” can be used to further reduce this flow to what is for all practical purposes a unimodal map. This enables us to systematically determine all relative periodic orbits and their symbolic dynamics up to any desired period. We then present cycle averaging formulas adequate for systems with continuous symmetry and use them to compute dynamical averages using relative periodic orbits. The convergence of such computations is discussed.

  2. Linear and nonlinear subspace analysis of hand movements during grasping.

    PubMed

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the resulting dimensional reductions by measuring the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of the data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Components Analysis, could perform better than any of the nonlinear techniques we applied.
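    The comparison protocol described above can be sketched with off-the-shelf tools. This is a hypothetical stand-in, not the study's pipeline: an S-curve manifold replaces the motion-capture data, and scikit-learn's trustworthiness score replaces the specific algorithm-agnostic quality measure cited (both measure how well local neighborhoods are preserved).

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, trustworthiness

# Embed a nonlinear 3-D S-curve into 2-D with a linear (PCA) and a
# nonlinear (Isomap) method, then score both embeddings by how well
# they preserve local neighborhood structure.
X, _ = make_s_curve(n_samples=500, random_state=0)

emb_pca = PCA(n_components=2).fit_transform(X)
emb_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

t_pca = trustworthiness(X, emb_pca, n_neighbors=10)
t_iso = trustworthiness(X, emb_iso, n_neighbors=10)
# Both scores lie in [0, 1]; higher means better neighborhood preservation.
# On grasping kinematics, the study found the linear method competitive.
```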

  3. Sparse representation of multiparametric DCE-MRI features using K-SVD for classifying gene-expression-based breast cancer recurrence risk

    NASA Astrophysics Data System (ADS)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm, including a logistic regression classifier, for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
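    The sparse-coding settings reported above (K = 4 dictionary atoms, at most L = 2 non-zero coefficients per sample) can be illustrated with scikit-learn's DictionaryLearning, an alternating-minimization relative of K-SVD used here as an accessible stand-in; the feature matrix below is synthetic, not DCE-MRI data.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Synthetic stand-in for the extracted multiparametric features:
# 60 "lesions" x 12 "kinetic/textural/morphologic" features (made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))

dico = DictionaryLearning(n_components=4,                 # K = 4 atoms
                          transform_algorithm="omp",
                          transform_n_nonzero_coefs=2,    # L = 2
                          random_state=0)
codes = dico.fit_transform(X)

# Each row of `codes` has at most 2 non-zero entries: the condensed,
# low-dimensional representation that would feed the classifier.
max_nonzeros = int((codes != 0).sum(axis=1).max())
```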

  4. [Application of three-dimensional printing technique in orthopaedics].

    PubMed

    Luo, Qiang; Lau, Tak Wing; Fang, Xinshuo; Leung, Frankie

    2014-03-01

    To review the current progress of the three-dimensional (3-D) printing technique in clinical practice, its limitations, and prospects. The recent publications associated with the clinical application of the 3-D printing technique in the field of surgery, especially in orthopaedics, were extensively reviewed. Currently, the 3-D printing technique has been applied in orthopaedic surgery to aid diagnosis, make operative plans, and produce personalized prostheses or implants. 3-D printing is a promising technique for clinical application.

  5. Mathematics Competency for Beginning Chemistry Students Through Dimensional Analysis.

    PubMed

    Pursell, David P; Forlemu, Neville Y; Anagho, Leonard E

    2017-01-01

    Mathematics competency in nursing education and practice may be addressed by an instructional variation of the traditional dimensional analysis technique typically presented in beginning chemistry courses. The authors studied 73 beginning chemistry students using the typical dimensional analysis technique and the variation technique. Student quantitative problem-solving performance was evaluated. Students using the variation technique scored significantly better (18.3 of 20 points, p < .0001) on the final examination quantitative titration problem than those who used the typical technique (10.9 of 20 points). American Chemical Society examination scores and in-house assessment indicate that better performing beginning chemistry students were more likely to use the variation technique rather than the typical technique. The variation technique may be useful as an alternative instructional approach to enhance beginning chemistry students' mathematics competency and problem-solving ability in both education and practice. [J Nurs Educ. 2017;56(1):22-26.]. Copyright 2017, SLACK Incorporated.

  6. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional, a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike earlier proposals to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.

  7. Cornea and ocular lens visualized with three-dimensional confocal microscopy

    NASA Astrophysics Data System (ADS)

    Masters, Barry R.

    1992-08-01

    This paper demonstrates the advantages of three-dimensional reconstruction of the cornea and the ocular crystalline lens by confocal microscopy and volume-rendering computer techniques. The advantages of noninvasive observation of ocular structures in living, unstained, unfixed tissue include the following: the tissue is in a natural living state without the artifacts of fixation, mechanical sectioning, and staining; the three-dimensional structure can be observed from any viewpoint and quantitatively analyzed; the dynamics of morphological changes can be studied; and the use of confocal microscopic observation results in a reduction of the number of animals required for ocular morphometric studies. The main advantage is that the dynamic morphology of ocular structures can be investigated in living ocular tissue. A laser scanning confocal microscope was used in the reflected-light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with 488 nm wavelength. The microscope objective was a Leitz 25X, NA 0.6 water immersion lens. The 400-micron-thick cornea was optically sectioned into 133 three-micron sections. The semi-transparent cornea and the in-situ ocular lens were visualized as high-resolution, high-contrast two-dimensional images. The undersampling resulted in a three-dimensional rendering in which the corneal thickness (z-axis) is compressed. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, free nerve endings in the basal epithelial cells, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers.

  8. Three-Dimensional FIB/EBSD Characterization of Irradiated HfAl3-Al Composite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, Zilong; Guillen, Donna Post; Harris, William

    2016-09-01

    A thermal neutron absorbing material, comprised of 28.4 vol% HfAl3 in an Al matrix, was developed to serve as a conductively cooled thermal neutron filter to enable fast-flux materials and fuels testing in a pressurized water reactor. In order to observe the microstructural changes of the HfAl3-Al composite due to neutron irradiation, an EBSD-FIB characterization approach was developed and is presented in this paper. Using the focused ion beam (FIB), the sample was fabricated to 25 µm × 25 µm × 20 µm and mounted on the grid. A series of operations was carried out repetitively on the sample top surface to prepare it for scanning electron microscopy (SEM). First, a ~100-nm layer was removed by high-voltage FIB milling. Then, several cleaning passes were performed on the newly exposed surface using low-voltage FIB milling to improve the SEM image quality. Last, the surface was scanned by electron backscatter diffraction (EBSD) to obtain a two-dimensional image. After 50 to 100 two-dimensional images were collected, the images were stacked to reconstruct a three-dimensional model using the DREAM.3D software. Two such reconstructed three-dimensional models were obtained from samples of the original and post-irradiation HfAl3-Al composite, respectively; the most significant microstructural change caused by neutron irradiation is apparently a reduction in the size of both the HfAl3 and Al grains. A possible reason is thermal expansion and the related thermal strain from thermal neutron absorption. This technique can be applied to three-dimensional microstructure characterization of irradiated materials.

  9. Discrimination of stroke-related mild cognitive impairment and vascular dementia using EEG signal analysis.

    PubMed

    Al-Qazzaz, Noor Kamal; Ali, Sawal Hamid Bin Mohd; Ahmad, Siti Anom; Islam, Mohd Shabiul; Escudero, Javier

    2018-01-01

    Stroke survivors are more prone to developing cognitive impairment and dementia. Dementia detection is a challenge for supporting personalized healthcare. This study analyzes the electroencephalogram (EEG) background activity of 5 vascular dementia (VaD) patients, 15 stroke-related patients with mild cognitive impairment (MCI), and 15 healthy control subjects during a working memory (WM) task. The objective of this study is twofold. First, it aims to enhance the discrimination of VaD patients, stroke-related MCI patients, and control subjects using fuzzy neighborhood preserving analysis with QR-decomposition (FNPAQR); second, it aims to extract and investigate the spectral features that characterize post-stroke dementia patients compared to control subjects. Nineteen channels were recorded and analyzed using the independent component analysis and wavelet transform (ICA-WT) denoising technique. Using ANOVA, linear spectral power features, including relative powers (RP) and power ratios, were calculated to test whether the EEG dominant frequencies were slowed down in VaD and stroke-related MCI patients. Nonlinear features, including permutation entropy (PerEn) and fractal dimension (FD), were used to test the degree of irregularity and complexity, which was significantly lower in patients with VaD and stroke-related MCI than in control subjects (ANOVA; p < 0.05). This study is the first to use the FNPAQR dimensionality reduction technique with the EEG background activity of dementia patients. The impairment of post-stroke patients was detected using support vector machine (SVM) and k-nearest neighbors (kNN) classifiers, and a comparative study was performed to check the effectiveness of the FNPAQR dimensionality reduction technique with these classifiers. FNPAQR with SVM and kNN obtained 91.48% and 89.63% accuracy, respectively, whereas without FNPAQR the classifiers achieved only 70% and 67.78% accuracy in classifying VaD, stroke-related MCI, and control subjects. Therefore, EEG could be a reliable index for inspecting concise markers that are sensitive to VaD and stroke-related MCI patients compared to healthy control subjects.
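    The with/without-reduction comparison protocol used above can be sketched as follows. FNPAQR itself is not available in common libraries, so PCA stands in for the reduction step, and the three-class data are synthetic, not EEG features; only the evaluation pattern is illustrated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic three-class problem standing in for VaD / stroke-related MCI /
# control feature vectors.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)

scores = {}
for name, clf in [("SVM", SVC()), ("kNN", KNeighborsClassifier())]:
    raw = cross_val_score(clf, X, y, cv=5).mean()                      # no reduction
    red = cross_val_score(make_pipeline(PCA(n_components=10), clf),   # reduction + classifier
                          X, y, cv=5).mean()
    scores[name] = (raw, red)
# `scores` pairs each classifier's accuracy without and with the
# dimensionality reduction step, mirroring the study's comparison.
```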

  10. Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Vandoormaal, J. P.; Turan, A.; Raithby, G. D.

    1986-01-01

    The objective of the present study is to improve both the accuracy and computational efficiency of existing numerical techniques used to predict viscous recirculating flows in combustors. A review of the status of the study is presented along with some illustrative results. The effort to improve the numerical techniques consists of the following technical tasks: (1) selection of numerical techniques to be evaluated; (2) two dimensional evaluation of selected techniques; and (3) three dimensional evaluation of technique(s) recommended in Task 2.

  11. Near-field three-dimensional radar imaging techniques and applications.

    PubMed

    Sheen, David; McMakin, Douglas; Hall, Thomas

    2010-07-01

    Three-dimensional radio frequency imaging techniques have been developed for a variety of near-field applications, including radar cross-section imaging, concealed weapon detection, ground penetrating radar imaging, through-barrier imaging, and nondestructive evaluation. These methods employ active radar transceivers that operate over a wide frequency range, from less than 100 MHz to in excess of 350 GHz, with the frequency band customized for each application. Computational wavefront reconstruction imaging techniques have been developed that optimize the resolution and illumination quality of the images. In this paper, rectilinear and cylindrical three-dimensional imaging techniques are described along with several application results.

  12. Estimated correlation matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Since in our artificial world the only source of error is the finite length of the time series, and the “true” model, hence also the “true” correlation matrix, is precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
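    The random-matrix-theory filter singled out above is commonly implemented by treating eigenvalues of the sample correlation matrix below the Marchenko-Pastur noise edge as noise. A minimal sketch on synthetic one-factor returns (the factor loading, dimensions, and flattening rule are all assumptions, not the paper's exact procedure):

```python
import numpy as np

# Synthetic returns: i.i.d. noise plus one common ("market") factor.
rng = np.random.default_rng(1)
N, T = 50, 500                        # assets, observations
common = rng.normal(size=T)
R = 0.3 * common[:, None] + rng.normal(size=(T, N))
C = np.corrcoef(R, rowvar=False)      # sample correlation matrix

lam, V = np.linalg.eigh(C)
lam_max = (1 + np.sqrt(N / T)) ** 2   # Marchenko-Pastur upper edge for pure noise
noise = lam < lam_max                 # eigenvalues in the noise band
lam_f = lam.copy()
lam_f[noise] = lam[noise].mean()      # flatten the noise band, keep signal modes
C_filtered = V @ np.diag(lam_f) @ V.T
np.fill_diagonal(C_filtered, 1.0)     # restore unit diagonal of a correlation matrix
```

    The filtered matrix keeps the market mode (the eigenvalue above the edge) intact while averaging away noisy fluctuations, which is what makes it a more reliable input for portfolio optimization.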

  13. Dimensionality reduction in epidemic spreading models

    NASA Astrophysics Data System (ADS)

    Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.

    2015-09-01

    Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric feature mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
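    A minimal analogue of this pipeline can be sketched with a crude well-mixed SI process standing in for the paper's mobile-agent simulations (population size, infection rate, and neighborhood count are arbitrary choices):

```python
import numpy as np
from sklearn.manifold import Isomap

# Each outbreak "snapshot" is a high-dimensional vector: the 0/1 infection
# states of 200 individuals. ISOMAP then embeds the snapshot sequence in 3-D.
rng = np.random.default_rng(0)
n, steps = 200, 120
state = np.zeros(n)
state[:3] = 1                               # three initial infections
snapshots = []
for _ in range(steps):
    p_inf = 0.1 * state.sum() / n           # well-mixed infection pressure
    state = np.maximum(state, rng.random(n) < p_inf)
    snapshots.append(state.copy())

emb = Isomap(n_neighbors=8, n_components=3).fit_transform(np.array(snapshots))
# Neighboring rows of `emb` correspond to instants close in time, and the
# embedding coordinates track the number of infected individuals.
```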

  14. Score-level fusion of two-dimensional and three-dimensional palmprint for personal recognition systems

    NASA Astrophysics Data System (ADS)

    Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab

    2017-01-01

    Two types of scores, extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints, are merged for personal recognition, introducing a local image descriptor for 2-D palmprint-based recognition systems named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (obtained by applying BSIF descriptors of different sizes, each with length 12) are concatenated into one large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied to reconstruct illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from the SQI images, Gabor wavelets are defined and used. Indeed, dimensionality reduction methods have shown their ability in biometric systems; accordingly, a principal component analysis (PCA) + linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals, and the proposed algorithm was compared with other existing methods in the literature. Results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI + Gabor wavelets + PCA + LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
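    The PCA+LDA reduction stage can be sketched with scikit-learn on stand-in data (the digits set, not palmprints), and the plain cosine matching shown is a simplification of the cosine Mahalanobis distance used in the paper:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# PCA first discards noisy directions, then LDA projects onto
# class-discriminative axes (at most n_classes - 1 = 9 of them here).
X, y = load_digits(return_X_y=True)
reducer = make_pipeline(PCA(n_components=30),
                        LinearDiscriminantAnalysis(n_components=9))
Z = reducer.fit_transform(X, y)

def cosine_distance(a, b):
    """Cosine distance between two reduced feature vectors (smaller = closer)."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

d_same = cosine_distance(Z[0], Z[10])   # two samples of the same class
d_diff = cosine_distance(Z[0], Z[1])    # samples of different classes
```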

  15. The effects of cold rolling and the subsequent heat treatments on the shape memory and the superelasticity characteristics of Cu73Al16Mn11 shape memory alloy

    NASA Astrophysics Data System (ADS)

    Babacan, N.; Ma, J.; Turkbas, O. S.; Karaman, I.; Kockar, B.

    2018-01-01

    In the present study, the effects of thermo-mechanical treatments on the shape memory and superelastic characteristics of a Cu73Al16Mn11 (at%) shape memory alloy were investigated. 10%, 50%, and 70% cold rolling and subsequent heat treatment processes were conducted to achieve strengthening via grain size refinement. A 70% grain size reduction compared to the homogenized condition was obtained using 70% cold rolling and a subsequent recrystallization heat treatment. Moreover, 10% cold rolling was applied to a homogenized specimen to reveal the influence of low-percentage cold rolling with no heat treatment on the shape memory properties of the Cu73Al16Mn11 (at%) alloy. Stress-free transformation temperatures, monotonic tension, and superelasticity behaviors of these samples were compared with those of the as-aged sample. Isobaric heating-cooling experiments were also conducted to assess the dimensional stability of the samples as a function of applied stress. The 70% grain-refined sample exhibited better dimensional stability, showing reduced residual strain levels upon thermal cycling under constant stress compared with the as-aged material. However, no improvement was achieved with grain size reduction in the superelasticity experiments. This distinctive observation was attributed to the difference in the magnitude of the stress levels reached during the two types of experiments, the isobaric heating-cooling and superelasticity tests. Intergranular fracture due to stress concentration overcame the strengthening effect of grain refinement in the superelasticity tests at higher stress values. On the other hand, the strength of the material and its resistance against plastic deformation upon phase transformation were increased as a result of grain refinement at the lower stress values of the isobaric heating-cooling experiments.

  16. Intra-operative 3D imaging system for robot-assisted fracture manipulation.

    PubMed

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-01-01

    Reduction is a crucial step in the treatment of broken bones, as achieving precise anatomical alignment of the bone fragments is essential for a good, fast healing process. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology: the two-dimensional nature of the image intensifier does not provide enough information to the surgeon regarding fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).

  17. High-density force myography: A possible alternative for upper-limb prosthetic control.

    PubMed

    Radmand, Ashkan; Scheme, Erik; Englehart, Kevin

    2016-01-01

    Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.

  18. Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data*

    NASA Astrophysics Data System (ADS)

    Reegen, P.; Kallinger, T.; Frast, D.; Gruberbauer, M.; Huber, D.; Matthews, J. M.; Punz, D.; Schraml, S.; Weiss, W. W.; Kuschnig, R.; Moffat, A. F. J.; Walker, G. A. H.; Guenther, D. B.; Rucinski, S. M.; Sasselov, D.

    2006-04-01

    The MOST (Microvariability and Oscillations of Stars) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles. Astronomical photometry or imaging missions in low Earth orbits, like MOST, are especially sensitive to scattered light from Earthshine, and all these missions have a common need to extract target information from voluminous data cubes. They consist of upwards of hundreds of thousands of two-dimensional CCD frames (or subrasters) containing from hundreds to millions of pixels each, where the target information, superposed on background and instrumental effects, is contained only in a subset of pixels (Fabry Images, defocused images, mini-spectra). We describe a novel reduction technique for such data cubes: resolving linear correlations of target and background pixel intensities. This step-wise multiple linear regression removes only those target variations which are also detected in the background. The advantage of regression analysis versus background subtraction is the appropriate scaling, taking into account that the amount of contamination may differ from pixel to pixel. The multivariate solution for all pairs of target/background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to, e.g. stray light. The technique is tested and demonstrated with both simulated oscillation signals and real MOST photometry.
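    The regression step can be sketched as follows; this is a toy simulation with invented numbers, not the actual MOST reduction pipeline, and it treats a single target/background pixel pair rather than the full multivariate solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Simulated MOST-like time series: a stellar oscillation in the target
# pixel plus orbit-phased stray light seen by both target and background.
t = np.linspace(0.0, 20.0, n)
signal = 0.5 * np.sin(2 * np.pi * 1.3 * t)
straylight = np.abs(np.sin(2 * np.pi * 0.2 * t)) ** 3
background = straylight + 0.05 * rng.normal(size=n)             # background pixel
target = signal + 0.8 * straylight + 0.05 * rng.normal(size=n)  # target pixel

# Regress the target on the background pixel (plus a constant) and keep
# the residual: only variations also detected in the background are
# removed, with a fitted scale rather than a flat 1:1 subtraction.
A = np.column_stack([background, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
cleaned = target - A @ coef

r_stray = float(np.corrcoef(cleaned, straylight)[0, 1])  # contamination left
r_sig = float(np.corrcoef(cleaned, signal)[0, 1])        # oscillation kept
print(r_stray, r_sig)
```

    The fitted scale is what distinguishes this from plain background subtraction: contamination that differs in amplitude from pixel to pixel is still removed correctly, while the oscillation (absent from the background) survives.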

  19. A Quantitative Technique for Beginning Microscopists.

    ERIC Educational Resources Information Center

    Sundberg, Marshall D.

    1984-01-01

    Stereology is the study of three-dimensional objects through the interpretation of two-dimensional images. Stereological techniques used in introductory botany to quantitatively examine changes in leaf anatomy in response to different environments are discussed. (JN)

  20. Chemical space visualization: transforming multidimensional chemical spaces into similarity-based molecular networks.

    PubMed

    de la Vega de León, Antonio; Bajorath, Jürgen

    2016-09-01

    The concept of chemical space is of fundamental relevance for medicinal chemistry and chemical informatics. Multidimensional chemical space representations are coordinate-based. Chemical space networks (CSNs) have been introduced as a coordinate-free representation. A computational approach is presented for the transformation of multidimensional chemical space into CSNs. The design of transformation CSNs (TRANS-CSNs) is based upon a similarity function that directly reflects distance relationships in original multidimensional space. TRANS-CSNs provide an immediate visualization of coordinate-based chemical space and do not require the use of dimensionality reduction techniques. At low network density, TRANS-CSNs are readily interpretable and make it possible to evaluate structure-activity relationship information originating from multidimensional chemical space.
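    The transformation from coordinates to a coordinate-free network can be illustrated as follows. The compounds, descriptor space and similarity function below are invented for the sketch; it only mirrors the general TRANS-CSN idea of thresholding a distance-derived similarity so that the resulting network stays sparse.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "multidimensional chemical space": 30 compounds as points in a
# 10-dimensional descriptor space (coordinates are made up).
n_cmpd = 30
coords = rng.normal(size=(n_cmpd, 10))

# Similarity function that directly reflects distance relationships in
# the original space: map Euclidean distance into (0, 1].
d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
sim = 1.0 / (1.0 + d)

# Coordinate-free network: connect only pairs above a similarity
# threshold chosen to keep network density low (here the top 10%).
threshold = np.quantile(sim[np.triu_indices(n_cmpd, k=1)], 0.9)
edges = [(i, j) for i in range(n_cmpd) for j in range(i + 1, n_cmpd)
         if sim[i, j] >= threshold]
density = len(edges) / (n_cmpd * (n_cmpd - 1) / 2)
print(f"{len(edges)} edges, density {density:.2f}")
```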

  1. A technique for the reduction of banding in Landsat Thematic Mapper Images

    USGS Publications Warehouse

    Helder, Dennis L.; Quirk, Bruce K.; Hood, Joy J.

    1992-01-01

    The radiometric difference between forward and reverse scans in Landsat thematic mapper (TM) images, referred to as "banding," can create problems when enhancing the image for interpretation or when performing quantitative studies. Recent research has led to the development of a method that reduces the banding in Landsat TM data sets. It involves passing a one-dimensional spatial kernel over the data set. This kernel is developed from the statistics of the banding pattern and is based on the Wiener filter. It has been implemented on both a DOS-based microcomputer and several UNIX-based computer systems. The algorithm has successfully reduced the banding in several test data sets.
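    A minimal sketch of the Wiener-filter idea follows. For brevity it assumes the banding power spectrum is known exactly, whereas the actual method builds the kernel from statistics estimated from the banding pattern itself; the signal, period and amplitudes are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D column profile of a simulated image: a smooth scene plus periodic
# forward/reverse-scan banding (period 32 lines here).
n = 512
scene = np.cumsum(rng.normal(size=n)) / 10.0
banding = 0.8 * np.sign(np.sin(2 * np.pi * np.arange(n) / 32))
profile = scene + banding

# Wiener filter H(f) = S(f) / (S(f) + N(f)) built from the banding
# statistics: frequencies where banding power dominates are suppressed,
# with appropriate scaling rather than flat subtraction.
F = np.fft.rfft(profile)
N = np.abs(np.fft.rfft(banding)) ** 2          # banding ("noise") power
S = np.maximum(np.abs(F) ** 2 - N, 1e-12)      # estimated scene power
H = S / (S + N)
restored = np.fft.irfft(F * H, n)

err_before = float(np.sqrt(np.mean((profile - scene) ** 2)))
err_after = float(np.sqrt(np.mean((restored - scene) ** 2)))
print(err_before, err_after)
```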

  2. Improving Spectral Results Using Row-by-Row Fourier Transform of Spatial Heterodyne Raman Spectrometer Interferogram.

    PubMed

    Barnett, Patrick D; Strange, K Alicia; Angel, S Michael

    2017-06-01

    This work describes a method of applying the Fourier transform to the two-dimensional Fizeau fringe patterns generated by the spatial heterodyne Raman spectrometer (SHRS), a dispersive interferometer, to correct the effects of certain types of optical alignment errors. In the SHRS, certain types of optical misalignments result in wavelength-dependent and wavelength-independent rotations of the fringe pattern on the detector. We describe here a simple correction technique that can be used in post-processing, by applying the Fourier transform in a row-by-row manner. This allows the user to be more forgiving of fringe alignment and allows for a reduction in the mechanical complexity of the SHRS.
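    The row-by-row correction can be illustrated on a synthetic fringe image. The geometry below is invented rather than taken from the SHRS optics, and the rotation angle is exaggerated so the effect is visible in a small image.

```python
import numpy as np

# Synthetic fringe image: one spectral line gives fringes of spatial
# frequency f along x; a (deliberately exaggerated) misalignment
# rotates the fringe pattern on the detector.
ny, nx, f = 64, 256, 20.0
theta = np.deg2rad(8.0)
y, x = np.mgrid[0:ny, 0:nx].astype(float)
xr = (x - nx / 2) * np.cos(theta) + (y - ny / 2) * np.sin(theta)
img = 1.0 + np.cos(2 * np.pi * f * xr / nx)

# Naive reduction: collapse the rows first, then take one FFT. The
# rotation makes rows add out of phase, washing out the fringe peak.
spec_naive = np.abs(np.fft.rfft(img.sum(axis=0)))

# Row-by-row reduction: FFT each row, then sum magnitudes, so per-row
# phase shifts from the rotation can no longer cancel the peak.
spec_rows = np.abs(np.fft.rfft(img, axis=1)).sum(axis=0)

k = int(round(f))
contrast_naive = float(spec_naive[k] / spec_naive.max())
contrast_rows = float(spec_rows[k] / spec_rows.max())
print(contrast_naive, contrast_rows)
```

    Summing magnitudes after the per-row FFT discards the rotation-induced phase ramp across rows, which is exactly what makes the method forgiving of fringe alignment.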

  3. Distributed augmented reality with 3-D lung dynamics--a planning tool concept.

    PubMed

    Hamza-Lup, Felix G; Santhanam, Anand P; Imielińska, Celina; Meeks, Sanford L; Rolland, Jannick P

    2007-01-01

    Augmented reality (AR) systems add visual information to the world using advanced display techniques. Advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide range of fields. We present a potential component of the cyber infrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and surgical procedures related to the lung (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.

  4. Damage monitoring of aircraft structures made of composite materials using wavelet transforms

    NASA Astrophysics Data System (ADS)

    Molchanov, D.; Safin, A.; Luhyna, N.

    2016-10-01

    The present article is dedicated to the study of the acoustic properties of composite materials and the application of non-destructive testing methods to aircraft components. A mathematical model of a wavelet transformed signal is presented. The main acoustic (vibration) properties of different composite material structures were researched. Multiple vibration parameter dependencies on the noise reduction factor were derived. The main steps of a research procedure and new method algorithm are presented. The data obtained was compared with the data from a three dimensional laser-Doppler scanning vibrometer, to validate the results. The new technique was tested in the laboratory and on civil aircraft at a training airfield.
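    As an illustration of why wavelet coefficients suit this kind of damage monitoring, the sketch below applies one level of the Haar discrete wavelet transform (chosen for brevity; the article does not specify this wavelet) to a toy vibration record containing a short transient.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; input length must be even.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# Toy vibration record: a smooth low-frequency component plus a short
# burst, the kind of transient a damage-monitoring scheme looks for.
t = np.linspace(0.0, 1.0, 1024)
sig = np.sin(2 * np.pi * 5 * t)
sig[500:507] += 2.0  # simulated impact transient (samples 500-506)

approx, detail = haar_dwt(sig)
# The largest detail coefficient flags the burst edge that falls inside
# one coefficient's support (samples 506/507, i.e. pair index 253).
burst_idx = int(np.argmax(np.abs(detail)))
print(burst_idx)
```

    The detail coefficients localize the transient in time while the approximation keeps the smooth vibration, which is the property exploited when wavelet parameters are correlated with damage indicators.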

  5. Dust ion-acoustic shock waves in magnetized pair-ion plasma with kappa distributed electrons

    NASA Astrophysics Data System (ADS)

    Kaur, B.; Singh, M.; Saini, N. S.

    2018-01-01

    We have performed a theoretical and numerical analysis of the three dimensional dynamics of nonlinear dust ion-acoustic shock waves (DIASWs) in a magnetized plasma, consisting of positive and negative ion fluids, kappa distributed electrons, immobile dust particulates along with positive and negative ion kinematic viscosity. By employing the reductive perturbation technique, we have derived the nonlinear Zakharov-Kuznetsov-Burgers (ZKB) equation, in which the nonlinear forces are balanced by dissipative forces (associated with kinematic viscosity). It is observed that the characteristics of DIASWs are significantly affected by superthermality of electrons, magnetic field strength, direction cosines, dust concentration, positive to negative ions mass ratio and viscosity of positive and negative ions.
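    For reference, the Zakharov-Kuznetsov-Burgers equation obtained from such a reductive perturbation analysis has the generic form below (with the magnetic field along ξ); the coefficients A, B, C and D stand in for the plasma-parameter-dependent expressions derived in the paper (superthermality, field strength, dust concentration, viscosities) and are not reproduced here.

```latex
\frac{\partial \phi}{\partial \tau}
  + A\,\phi\,\frac{\partial \phi}{\partial \xi}
  + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}}
  + C\,\frac{\partial}{\partial \xi}\!\left(
      \frac{\partial^{2} \phi}{\partial \eta^{2}}
    + \frac{\partial^{2} \phi}{\partial \zeta^{2}}\right)
  - D\,\frac{\partial^{2} \phi}{\partial \xi^{2}} = 0
```

    The Burgers term with coefficient D, arising from the kinematic viscosity, supplies the dissipation that balances the nonlinearity and turns solitary-wave solutions into shocks.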

  6. Method for making a bio-compatible scaffold

    DOEpatents

    Cesarano, III, Joseph; Stuecker, John N [Albuquerque, NM; Dellinger, Jennifer G [Champaigne, IL; Jamison, Russell D [Urbana, IL

    2006-01-31

    A method for forming a three-dimensional, biocompatible, porous scaffold structure using a solid freeform fabrication technique (referred to herein as robocasting) that can be used as a medical implant into a living organism, such as a human or other mammal. Imaging technology and analysis is first used to determine the three-dimensional design required for the medical implant, such as a bone implant or graft, fashioned as a three-dimensional, biocompatible scaffold structure. The robocasting technique is used to either directly produce the three-dimensional, porous scaffold structure or to produce an over-sized three-dimensional, porous scaffold lattice which can be machined to produce the designed three-dimensional, porous scaffold structure for implantation.

  7. FeynArts model file for MSSM transition counterterms from DREG to DRED

    NASA Astrophysics Data System (ADS)

    Stöckinger, Dominik; Varšo, Philipp

    2012-02-01

    The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction.

    Program summary

    Program title: MSSMdreg2dred.mod
    Catalogue identifier: AEKR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: LGPL-License [1]
    No. of lines in distributed program, including test data, etc.: 7600
    No. of bytes in distributed program, including test data, etc.: 197 629
    Distribution format: tar.gz
    Programming language: Mathematica, FeynArts
    Computer: Any, capable of running Mathematica and FeynArts
    Operating system: Any, with running Mathematica, FeynArts installation
    Classification: 4.4, 5, 11.1
    Subprograms used: ADOW_v1_0, FeynArts, CPC 140 (2001) 418
    Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other.
    Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization.
    Restrictions: The counterterms are restricted to the one-loop level and the MSSM.
    Running time: A few seconds to generate typical Feynman graphs with FeynArts.

  8. Optimizing transformations of stencil operations for parallel cache-based architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassetti, F.; Davis, K.

    This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations, and is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling on a single processor, and in parallel using a 1-D data partition; for the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case, although the parallel 2-D partitioning is not discussed here, so the parallel 2-D case is 2-D tiling with 1-D data partitioning.
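    The computation being transformed can be made concrete with the 5-point Jacobi stencil mentioned above. The Python sketch below only demonstrates that a strip-wise ("1-D tiled") traversal computes exactly the same update as a whole-grid sweep; the cache benefit itself arises in compiled code, where each strip is sized to stay resident in cache.

```python
import numpy as np

def jacobi_sweep(u, f, h2):
    """One Jacobi iteration for the 2-D Poisson problem on a uniform grid."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:]
                              - h2 * f[1:-1, 1:-1])
    return new

def jacobi_tiled(u, f, h2, tile=32):
    """Same sweep, processed in 1-D strips of rows ("1-D tiling")."""
    new = u.copy()
    for r0 in range(1, u.shape[0] - 1, tile):
        r1 = min(r0 + tile, u.shape[0] - 1)
        new[r0:r1, 1:-1] = 0.25 * (u[r0 - 1:r1 - 1, 1:-1]
                                   + u[r0 + 1:r1 + 1, 1:-1]
                                   + u[r0:r1, :-2] + u[r0:r1, 2:]
                                   - h2 * f[r0:r1, 1:-1])
    return new

n = 128
h2 = (1.0 / (n - 1)) ** 2
u = np.zeros((n, n))
f = np.ones((n, n))
# Both traversal orders produce the identical 5-point stencil update.
print(np.allclose(jacobi_sweep(u, f, h2), jacobi_tiled(u, f, h2)))
```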

  9. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a method that allows a large reduction in data storage and computational effort. One of the most recent VQ techniques for handling the poor estimation of vector centroids caused by biased, undersampled data is fuzzy declustering-based vector quantization (FDVQ). In this paper, we are therefore motivated to propose a justification of an FDVQ-based hidden Markov model (HMM) by investigating its effectiveness and efficiency in the classification of genotype-image phenotypes. The performance evaluation and comparison of recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and a well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) are carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.
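    For reference, the LBG baseline named above is essentially k-means codebook training. The sketch below is a generic illustration on toy 2-D data, not the paper's implementation, and it seeds the codebook from random samples instead of LBG's classical centroid-splitting initialization.

```python
import numpy as np

def lbg_codebook(data, n_codes=4, n_iter=20):
    """Simplified LBG/k-means codebook training (random seeding for brevity)."""
    rng = np.random.default_rng(4)
    codebook = data[rng.choice(len(data), n_codes, replace=False)]
    for _ in range(n_iter):
        # Assign each vector to its nearest codeword...
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # ...then move each codeword to the centroid of its cell.
        for k in range(n_codes):
            if np.any(labels == k):
                codebook[k] = data[labels == k].mean(axis=0)
    # Final assignment with the trained codebook.
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.argmin(axis=1)

# Toy feature vectors drawn from four clusters.
rng = np.random.default_rng(5)
centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]], dtype=float)
data = np.concatenate([c + 0.4 * rng.normal(size=(100, 2)) for c in centers])
codebook, labels = lbg_codebook(data)

# Quantization replaces each vector by its codeword: a large storage
# reduction at a small distortion cost.
distortion = float(np.mean(((data - codebook[labels]) ** 2).sum(-1)))
print(round(distortion, 3))
```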

  10. Polymer-directed crystallization of atorvastatin.

    PubMed

    Choi, Hyemin; Lee, Hyeseung; Lee, Min Kyung; Lee, Jonghwi

    2012-08-01

    Living organisms secrete minerals composed of peptides and proteins, resulting in "mesocrystals" of three-dimensional-assembled composite structures. Recently, this biomimetic polymer-directed crystallization technique has been widely applied to inorganic materials, although it has seldom been used with drugs. In this study, the technique was applied to the drowning-out crystallization of atorvastatin using various polymers. Nucleation and growth at optimized conditions successfully produced composite crystals with significant polymer contents and unusual characteristics. Atorvastatin composite crystals containing polyethylene glycol, polyacrylic acid, polyethylene imine, and chitosan showed a markedly decreased melting point and heat of fusion, improved stability, and sustained-release patterns. The use of hydroxypropyl cellulose yielded a unique combination of enhanced in vitro release and improved drug stability under a forced degradation condition. The formation hypothesis of unique mesocrystal structures was strongly supported by an X-ray diffraction pattern and substantial melting point reduction. This polymer-directed crystallization technique offers a novel and effective way, different from the solid dispersion approach, to engineer the release, stability, and processability of drug crystals. Copyright © 2012 Wiley Periodicals, Inc.

  11. On the nonexistence of degenerate phase-shift discrete solitons in a dNLS nonlocal lattice

    NASA Astrophysics Data System (ADS)

    Penati, T.; Sansottera, M.; Paleari, S.; Koukouloyannis, V.; Kevrekidis, P. G.

    2018-05-01

    We consider a one-dimensional discrete nonlinear Schrödinger (dNLS) model featuring interactions beyond nearest neighbors. We are interested in the existence (or nonexistence) of phase-shift discrete solitons, which correspond to four-site vortex solutions in the standard two-dimensional dNLS model (square lattice), of which this is a simpler variant. Due to the specific choice of lengths of the inter-site interactions, the vortex configurations considered present a degeneracy which causes the standard continuation techniques to be non-applicable. In the present one-dimensional case, the existence of a conserved quantity for the soliton profile (the so-called density current), together with a perturbative construction, leads to the nonexistence of any phase-shift discrete soliton which is at least C^2 with respect to the small coupling ɛ, in the limit of vanishing ɛ. If we assume the solution to be only C^0 in the same limit of ɛ, nonexistence is instead proved by studying the bifurcation equation of a Lyapunov-Schmidt reduction, expanded to suitably high orders. Specifically, we produce a nonexistence criterion whose efficiency we reveal in the cases of partial and full degeneracy of approximate solutions obtained via a leading order expansion.

  12. Reduction of conductance mismatch in Fe/Al2O3/MoS2 system by tunneling-barrier thickness control

    NASA Astrophysics Data System (ADS)

    Hayakawa, Naoki; Muneta, Iriya; Ohashi, Takumi; Matsuura, Kentaro; Shimizu, Jun’ichi; Kakushima, Kuniyuki; Tsutsui, Kazuo; Wakabayashi, Hitoshi

    2018-04-01

    Molybdenum disulfide (MoS2) among two-dimensional semiconductor films is promising for spintronic devices because it has a longer spin-relaxation time with contrasting spin splitting than silicon. However, it is difficult to fabricate integrated circuits by the widely used exfoliation method. Here, we investigate the contact characteristics in the Fe/Al2O3/sputtered-MoS2 system with various thicknesses of the Al2O3 film. Current density increases with increasing thickness up to 2.5 nm because of both thermally-assisted and direct tunneling currents. On the other hand, it decreases with increasing thickness over 2.5 nm limited by direct tunneling currents. These results suggest that the Schottky barrier width can be controlled by changing thicknesses of the Al2O3 film, as supported by calculations. The reduction of conductance mismatch with this technique can lead to highly efficient spin injection from iron into the MoS2 film.

  13. Manufacture of astroloy turbine disk shapes by hot isostatic pressing, volume 1

    NASA Technical Reports Server (NTRS)

    Eng, R. D.; Evans, D. J.

    1978-01-01

    The Materials in Advanced Turbine Engines project was conducted to demonstrate container technology and establish manufacturing procedures for fabricating direct Hot Isostatic Pressing (HIP) of low carbon Astroloy to ultrasonic disk shapes. The HIP processing procedures including powder manufacture and handling, container design and fabrication, and HIP consolidation techniques were established by manufacturing five HIP disks. Based upon dimensional analysis of the first three disks, container technology was refined by modifying container tooling which resulted in closer conformity of the HIP surfaces to the sonic shape. The microstructure, chemistry and mechanical properties of two HIP low carbon Astroloy disks were characterized. One disk was subjected to a ground base experimental engine test, and the results of HIP low carbon Astroloy were analyzed and compared to conventionally forged Waspaloy. The mechanical properties of direct HIP low carbon Astroloy exceeded all property goals and the objectives of reduction in material input weight and reduction in cost were achieved.

  14. Finite element analysis of the end notched flexure specimen for measuring Mode II fracture toughness

    NASA Technical Reports Server (NTRS)

    Gillespie, J. W., Jr.; Carlsson, L. A.; Pipes, R. B.

    1986-01-01

    The paper presents a finite element analysis of the end-notched flexure (ENF) test specimen for Mode II interlaminar fracture testing of composite materials. Virtual crack closure and compliance techniques employed to calculate strain energy release rates from linear elastic two-dimensional analysis indicate that the ENF specimen is a pure Mode II fracture test within the constraints of small deflection theory. Furthermore, the ENF fracture specimen is shown to be relatively insensitive to process-induced cracks, offset from the laminate midplane. Frictional effects are investigated by including the contact problem in the finite element model. A parametric study investigating the influence of delamination length, span, thickness, and material properties assessed the accuracy of beam theory expressions for compliance and strain energy release rate, GII. Finite element results indicate that data reduction schemes based upon beam theory underestimate GII by approximately 20-40 percent for typical unidirectional graphite fiber composite test specimen geometries. Consequently, an improved data reduction scheme is proposed.
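    The beam-theory data-reduction expressions at issue are, in the commonly quoted compliance-calibration form (a = delamination length, L = half-span, b = width, 2h = specimen thickness, E11 = flexural modulus, P = applied load; these classical formulas are stated here for context, not reproduced from the paper itself):

```latex
C = \frac{2L^{3} + 3a^{3}}{8\,E_{11}\,b\,h^{3}},
\qquad
G_{II} = \frac{P^{2}}{2b}\,\frac{\mathrm{d}C}{\mathrm{d}a}
       = \frac{9\,P^{2}a^{2}}{16\,E_{11}\,b^{2}\,h^{3}}
```

    It is against values from these expressions that the finite element results indicate a 20-40 percent underestimate of GII, motivating the improved data reduction scheme.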

  15. Corrective Osteotomy for Symptomatic Clavicle Malunion Using Patient-specific Osteotomy and Reduction Guides.

    PubMed

    Haefeli, Mathias; Schenkel, Matthias; Schumacher, Ralf; Eid, Karim

    2017-09-01

    Midshaft clavicular fractures are often treated nonoperatively with good reported clinical outcome in a majority of patients. However, malunion with shortening of the affected clavicle is not uncommon. Shortening of the clavicle has been shown to affect shoulder strength and kinematics with alteration of scapular position. Whereas the exact clinical impact of these factors is unknown, the deformity may lead to cosmetic and functional impairment as for example pain with weight-bearing on the shoulder girdle. Other reported complications of clavicular malunion include thoracic outlet syndrome, subclavicular vein thrombosis, and axillary plexus compression. Corrective osteotomy has therefore been recommended for symptomatic clavicular malunions, generally using plain x-rays for planning the necessary elongation. Particularly in malunited multifragmentary fractures it may be difficult to exactly determine the plane of osteotomy intraoperatively to restore the precise anatomic shape of the clavicle. We present a technique for corrective osteotomy using preoperative computer planning and 3-dimensional printed patient-specific intraoperative osteotomy and reduction guides based on the healthy contralateral clavicle.

  16. Biomimicry of multifunctional nanostructures in the neck feathers of mallard (Anas platyrhynchos L.) drakes

    NASA Astrophysics Data System (ADS)

    Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet

    2014-04-01

    Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in wide array of colors by minor alterations during the size reduction process.

  17. Biomimicry of multifunctional nanostructures in the neck feathers of mallard (Anas platyrhynchos L.) drakes

    PubMed Central

    Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet

    2014-01-01

    Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in wide array of colors by minor alterations during the size reduction process. PMID:24751587

  18. Biomimicry of multifunctional nanostructures in the neck feathers of mallard (Anas platyrhynchos L.) drakes.

    PubMed

    Khudiyev, Tural; Dogan, Tamer; Bayindir, Mehmet

    2014-04-22

    Biological systems serve as fundamental sources of inspiration for the development of artificially colored devices, and their investigation provides a great number of photonic design opportunities. While several successful biomimetic designs have been detailed in the literature, conventional fabrication techniques nonetheless remain inferior to their natural counterparts in complexity, ease of production and material economy. Here, we investigate the iridescent neck feathers of Anas platyrhynchos drakes, show that they feature an unusual arrangement of two-dimensional (2D) photonic crystals and further exhibit a superhydrophobic surface, and mimic this multifunctional structure using a nanostructure composite fabricated by a recently developed top-down iterative size reduction method, which avoids the above-mentioned fabrication challenges, provides macroscale control and enhances hydrophobicity through the surface structure. Our 2D solid core photonic crystal fibres strongly resemble drake neck plumage in structure and fully polymeric material composition, and can be produced in wide array of colors by minor alterations during the size reduction process.

  19. Acoustic scattering reduction using layers of elastic materials

    NASA Astrophysics Data System (ADS)

    Dutrion, Cécile; Simon, Frank

    2017-02-01

    Making an object invisible to acoustic waves could prove useful for military applications or measurements in confined space. Different passive methods have been proposed in recent years to avoid acoustic scattering from rigid obstacles. These techniques are exclusively based on acoustic phenomena, and use for instance multiple resonators or scatterers. This paper examines the possibility of designing an acoustic cloak using a bi-layer elastic cylindrical shell to eliminate the acoustic field scattered from a rigid cylinder hit by plane waves. This field depends on the dimensional and mechanical characteristics of the elastic layers. It is computed by a semi-analytical code modelling the vibrations of the coating under plane wave excitation. Optimization by genetic algorithm is performed to determine the characteristics of a bi-layer material minimizing the scattering. Considering an external fluid consisting of air, realistic configurations of elastic coatings emerge, composed of a thick internal orthotropic layer and a thin external isotropic layer. These coatings are shown to enable scattering reduction at a precise frequency or over a larger frequency band.

  20. Calibration of a thin metal foil for infrared imaging video bolometer to estimate the spatial variation of thermal diffusivity using a photo-thermal technique.

    PubMed

    Pandya, Shwetang N; Peterson, Byron J; Sano, Ryuichi; Mukai, Kiyofumi; Drapiko, Evgeny A; Alekseyev, Andrey G; Akiyama, Tsuyoshi; Itomi, Muneji; Watanabe, Takashi

    2014-05-01

    A thin metal foil is used as a broad band radiation absorber for the InfraRed imaging Video Bolometer (IRVB), which is a vital diagnostic for studying three-dimensional radiation structures from high temperature plasmas in the Large Helical Device. The two-dimensional (2D) heat diffusion equation of the foil needs to be solved numerically to estimate the radiation falling on the foil through a pinhole geometry. The thermal, physical, and optical properties of the metal foil are among the inputs to the code besides the spatiotemporal variation of temperature, for reliable estimation of the exhaust power from the plasma illuminating the foil. The foil being very thin and of considerable size, non-uniformities in these properties need to be determined by suitable calibration procedures. The graphite spray used for increasing the surface emissivity also contributes to a change in the thermal properties. This paper discusses the application of the thermographic technique for determining the spatial variation of the effective in-plane thermal diffusivity of the thin metal foil and graphite composite. The paper also discusses the advantages of this technique in the light of limitations and drawbacks presented by other calibration techniques being practiced currently. The technique is initially applied to a material of known thickness and thermal properties for validation and finally to thin foils of gold and platinum both with two different thicknesses. It is observed that the effect of the graphite layer on the estimation of the thermal diffusivity becomes more pronounced for thinner foils and the measured values are approximately 2.5-3 times lower than the literature values. It is also observed that the percentage reduction in thermal diffusivity due to the coating is lower for high thermal diffusivity materials such as gold. This fact may also explain, albeit partially, the higher sensitivity of the platinum foil as compared to gold.
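    The forward model at the heart of this diagnostic, the 2D heat diffusion equation of the foil, can be sketched with an explicit finite-difference step. All numbers below (grid, diffusivity, source amplitude, clamped-edge boundary condition) are toy values, not the LHD foil's properties, and the source term S simply lumps the absorbed radiation power into the temperature units.

```python
import numpy as np

def step(T, D, S, dt, dx):
    """One explicit finite-difference step of dT/dt = D * laplacian(T) + S."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx ** 2
    Tn = T + dt * (D * lap + S)
    # Edges clamped to ambient (foil mounted in a frame); a toy choice.
    Tn[0, :] = Tn[-1, :] = Tn[:, 0] = Tn[:, -1] = 0.0
    return Tn

n, dx, D, dt = 64, 1e-3, 1e-4, 1e-3   # toy grid spacing, diffusivity, step
assert dt <= dx ** 2 / (4.0 * D)      # explicit-scheme stability bound
T = np.zeros((n, n))
S = np.zeros((n, n))
S[24:40, 24:40] = 1.0                 # localized radiation load (toy units)
for _ in range(200):
    T = step(T, D, S, dt, dx)
print(round(float(T.max()), 6))
```

    The bolometric inversion runs this model in reverse: given the measured spatiotemporal temperature field, it solves for S, which is why spatial non-uniformities in the diffusivity (the subject of the calibration above) feed directly into the estimated power.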

  1. Calibration of a thin metal foil for infrared imaging video bolometer to estimate the spatial variation of thermal diffusivity using a photo-thermal technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Shwetang N., E-mail: pandya.shwetang@LHD.nifs.ac.jp; Sano, Ryuichi; Peterson, Byron J.

    A thin metal foil is used as a broad band radiation absorber for the InfraRed imaging Video Bolometer (IRVB), which is a vital diagnostic for studying three-dimensional radiation structures from high temperature plasmas in the Large Helical Device. The two-dimensional (2D) heat diffusion equation of the foil needs to be solved numerically to estimate the radiation falling on the foil through a pinhole geometry. The thermal, physical, and optical properties of the metal foil are among the inputs to the code besides the spatiotemporal variation of temperature, for reliable estimation of the exhaust power from the plasma illuminating the foil. The foil being very thin and of considerable size, non-uniformities in these properties need to be determined by suitable calibration procedures. The graphite spray used for increasing the surface emissivity also contributes to a change in the thermal properties. This paper discusses the application of the thermographic technique for determining the spatial variation of the effective in-plane thermal diffusivity of the thin metal foil and graphite composite. The paper also discusses the advantages of this technique in the light of limitations and drawbacks presented by other calibration techniques being practiced currently. The technique is initially applied to a material of known thickness and thermal properties for validation and finally to thin foils of gold and platinum both with two different thicknesses. It is observed that the effect of the graphite layer on the estimation of the thermal diffusivity becomes more pronounced for thinner foils and the measured values are approximately 2.5–3 times lower than the literature values. It is also observed that the percentage reduction in thermal diffusivity due to the coating is lower for high thermal diffusivity materials such as gold. This fact may also explain, albeit partially, the higher sensitivity of the platinum foil as compared to gold.

  2. Drag and heat flux reduction mechanism of blunted cone with aerodisks

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Li, Lang-quan; Yan, Li; Zhang, Tian-tian

    2017-09-01

    The reduction of drag and aerodynamic heating is a major challenge among the many design requirements for hypersonic vehicles. Among the techniques for drag and heat flux reduction, the forward-facing aerospike, conceived in the 1950s, is an effective and simple way to reduce both the drag and the heat transfer rate of blunt-nosed bodies at hypersonic Mach numbers. In this paper, the flow fields around a blunt cone with and without an aerodisk flying at hypersonic Mach numbers are computed numerically; the simulations solve the three-dimensional, steady, Reynolds-averaged Navier-Stokes equations, with the freestream velocity, static pressure and static temperature specified at the inlet of the computational domain. An aerodisk is attached to the tip of the rod to reduce the drag and heat flux further. The influences of the length of the rod and the diameter of the aerodisk on the drag and heat flux reduction mechanism are analyzed comprehensively, and eight configurations are considered in the current study. The obtained results show that for all aerodisks, the reduction in drag of the blunt body is proportional to the extent of the recirculation dead-air region. For long rods, the aerodisk is found to be less beneficial in reducing the drag, and an aerodisk is more effective than an aerospike. The spike produces a region of recirculating separated flow that shields the blunt-nosed body from the incoming flow; this recirculation region extends from the root of the spike up to the reattachment point of the flow at the shoulder of the blunt body. The dynamic pressure in the recirculation region is greatly reduced, which leads to the decrease in drag and heat load on the surface of the blunt body. Because the shear layer reattaches at the shoulder of the blunt body, the pressure near that point becomes large.

  3. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  4. Upon Generating (2+1)-dimensional Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Bai, Yang; Wu, Lixin

    2016-06-01

    Under the framework of the Adler-Gel'fand-Dikii (AGD) scheme, we first propose two Hamiltonian operator pairs over a noncommutative ring, from which we construct a new dynamical system in 2+1 dimensions and then obtain a generalized special Novikov-Veselov (NV) equation via the Manakov triple. Next, with the aid of a special symmetric Lie algebra of a reductive homogeneous group G, we adopt the Tu-Andrushkiw-Huang (TAH) scheme to generate a new integrable (2+1)-dimensional dynamical system and its Hamiltonian structure, which can reduce to the well-known (2+1)-dimensional Davey-Stewartson (DS) hierarchy. We then extend the binormial residue representation (BRR) scheme to super higher-dimensional integrable hierarchies with the help of a super subalgebra of the super Lie algebra sl(2/1), which is also a kind of symmetric Lie algebra of the reductive homogeneous group G. As applications, we obtain a super (2+1)-dimensional MKdV hierarchy which can be reduced to a super (2+1)-dimensional generalized AKNS equation. Finally, we compare the advantages and shortcomings of the three schemes for generating integrable dynamical systems.

  5. Experimental verification of a 4D MLEM reconstruction algorithm used for in-beam PET measurements in particle therapy

    NASA Astrophysics Data System (ADS)

    Stützer, K.; Bert, C.; Enghardt, W.; Helmbrecht, S.; Parodi, K.; Priegnitz, M.; Saito, N.; Fiedler, F.

    2013-08-01

    In-beam positron emission tomography (PET) has been proven to be a reliable technique in ion beam radiotherapy for the in situ and non-invasive evaluation of the correct dose deposition in static tumour entities. In the presence of intra-fractional target motion an appropriate time-resolved (four-dimensional, 4D) reconstruction algorithm has to be used to avoid reconstructed activity distributions suffering from motion-related blurring artefacts and to allow for a dedicated dose monitoring. Four-dimensional reconstruction algorithms from diagnostic PET imaging that can properly handle the typically low counting statistics of in-beam PET data have been adapted and optimized for the characteristics of the double-head PET scanner BASTEI installed at GSI Helmholtzzentrum Darmstadt, Germany (GSI). Systematic investigations with moving radioactive sources demonstrate the more effective reduction of motion artefacts by applying a 4D maximum likelihood expectation maximization (MLEM) algorithm instead of the retrospective co-registration of phasewise reconstructed quasi-static activity distributions. Further 4D MLEM results are presented from in-beam PET measurements of irradiated moving phantoms which verify the accessibility of relevant parameters for the dose monitoring of intra-fractionally moving targets. From in-beam PET listmode data sets acquired together with a motion surrogate signal, valuable images can be generated by the 4D MLEM reconstruction for different motion patterns and motion-compensated beam delivery techniques.
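The MLEM update at the core of such reconstructions has a compact multiplicative form. Below is a toy, static (non-4D) sketch with a random stand-in system matrix, not the BASTEI scanner geometry or the paper's motion-resolved algorithm:

```python
import numpy as np

# Toy sketch of the (static) MLEM update used in emission tomography.
# The 4D algorithm in the paper adds a time/motion dimension and real
# scanner geometry; the system matrix below is a random stand-in.
rng = np.random.default_rng(0)
A = rng.random((40, 10))        # 40 detector bins, 10 voxels (toy)
x_true = rng.random(10) + 0.5   # activity distribution to recover
y = A @ x_true                  # noiseless measurements for simplicity

x = np.ones(10)                 # MLEM needs a strictly positive start
sens = A.sum(axis=0)            # sensitivity: back-projection of ones
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
fit_err = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The multiplicative form automatically preserves non-negativity of the activity estimate, which is one reason MLEM copes better with the low counting statistics of in-beam PET than unconstrained least-squares reconstruction.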

  6. Support vector machines for nuclear reactor state estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method: the input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method to data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising: the combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.

  7. Microbial fuel cells coupling with the three-dimensional electro-Fenton technique enhances the degradation of methyl orange in the wastewater.

    PubMed

    Huang, Tao; Liu, Longfei; Tao, Junjun; Zhou, Lulu; Zhang, Shuwen

    2018-04-23

    The emission of azo dye effluent has resulted in a series of environmental problems, including direct damage to natural esthetics, inhibition of oxygen exchange, suppression of photosynthesis, and reduction of the aquatic flora and fauna. A bioelectrochemical platform (3D-EF-MFCs) combining two-chamber microbial fuel cells with the three-dimensional electro-Fenton technique was designed and assembled to explore the decolorization and bioelectricity performance of methyl orange (MO) degradation and the possible biotic-abiotic degradation mechanisms. The 3D-EF-MFC processes showed higher decolorization efficiencies, higher COD removals, and better bioelectricity performance than the pure electro-Fenton-microbial fuel cell (EF-MFC) systems. The two-chamber systems filled with granular activated carbon performed better overall than the single-chamber packing system. A moderate increase of the Fe2+ dose in the cathode chamber accelerated the formation of •OH, which further enhanced the degradation of MO. Cathode decolorization and COD removal decreased with increasing MO concentration; however, the degradation performance of MO was indirectly improved in the anode compartment under the same conditions. The bed electrodes played a mediator role in the anode and cathode chambers, elevated the voltage output and the power density, and lowered the internal impedance of the EF-MFC process.

  8. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to generate the spatio-temporal outputs of these models efficiently, for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  9. Tissue ablation after 120W greenlight laser vaporization and bipolar plasma vaporization of the prostate: a comparison using transrectal three-dimensional ultrasound volumetry

    NASA Astrophysics Data System (ADS)

    Kranzbühler, Benedikt; Gross, Oliver; Fankhauser, Christian D.; Hefermehl, Lukas J.; Poyet, Cédric; Largo, Remo; Müntener, Michael; Seifert, Hans-Helge; Zimmermann, Matthias; Sulser, Tullio; Müller, Alexander; Hermanns, Thomas

    2012-02-01

    Introduction and objectives: Greenlight laser vaporization (LV) of the prostate is characterized by simultaneous vaporization and coagulation of prostatic tissue, resulting in tissue ablation together with excellent hemostasis during the procedure. It has been reported that bipolar plasma vaporization (BPV) of the prostate might be an alternative to LV. So far, it has not been shown that BPV is as effective as LV in terms of tissue ablation or hemostasis. We performed transrectal three-dimensional ultrasound investigations to compare the efficiency of tissue ablation between LV and BPV. Methods: Between 11.2009 and 5.2011, 50 patients underwent pure BPV in our institution. These patients were matched, with regard to pre-operative prostate volume, to 50 LV patients from our existing 3D-volumetry database. Transrectal 3D ultrasound and planimetric volumetry of the prostate were performed pre-operatively, after catheter removal, at 6 weeks and at 6 months. Results: Median pre-operative prostate volume was not significantly different between the two groups (45.3 ml vs. 45.4 ml; p=1.0). After catheter removal, median absolute volume reduction (BPV 12.4 ml, LV 6.55 ml) as well as relative volume reduction (27.8% vs. 16.4%) were significantly higher in the BPV group (p<0.001). After six weeks (42.9% vs. 33.3%) and six months (47.2% vs. 39.7%), relative volume reduction remained significantly higher in the BPV group (p<0.001). Absolute volume reduction was non-significantly higher in the BPV group after six weeks (18.4 ml vs. 13.8 ml; p=0.051) and six months (20.8 ml vs. 18 ml; p=0.3). Clinical outcome parameters improved significantly in both groups without relevant differences between the groups. Conclusions: Both vaporization techniques result in efficient tissue ablation with initial prostatic swelling. BPV seems to be superior due to a higher relative volume reduction. This difference had no clinical impact after a follow-up of 6 months.

  10. Wideband radar cross section reduction using two-dimensional phase gradient metasurfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongfeng; Qu, Shaobo; Wang, Jiafu

    2014-06-02

    Phase gradient metasurfaces (PGMs) are artificial surfaces that can provide pre-defined in-plane wave-vectors to manipulate the directions of refracted/reflected waves. In this Letter, we propose to achieve wideband radar cross section (RCS) reduction using two-dimensional (2D) PGMs. A 2D PGM was designed using a square combination of 49 split-ring sub-unit cells. The PGM can provide additional wave-vectors along the two in-plane directions simultaneously, leading to surface wave conversion, deflected reflection, or diffuse reflection. Both the simulation and experiment results verified the wideband, polarization-independent, high-efficiency RCS reduction induced by the 2D PGM.

  11. Symmetry reduction and exact solutions of two higher-dimensional nonlinear evolution equations.

    PubMed

    Gu, Yongyi; Qi, Jianming

    2017-01-01

    In this paper, symmetries and symmetry reduction of two higher-dimensional nonlinear evolution equations (NLEEs) are obtained by Lie group method. These NLEEs play an important role in nonlinear sciences. We derive exact solutions to these NLEEs via the [Formula: see text]-expansion method and complex method. Five types of explicit function solutions are constructed, which are rational, exponential, trigonometric, hyperbolic and elliptic function solutions of the variables in the considered equations.

  12. Graph embedding and extensions: a general framework for dimensionality reduction.

    PubMed

    Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen

    2007-01-01

    Over the past few decades, a large family of algorithms, supervised or unsupervised and stemming from statistics or from geometry, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or a linear/kernel/tensor extension thereof, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property to be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. Utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis (LDA) algorithm that stem from its data distribution assumptions and its limited available projection directions. Real face recognition experiments show the superiority of the proposed MFA in comparison to LDA, as well as to their corresponding kernel and tensor extensions.
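The direct-graph-embedding formulation described above reduces to a generalized eigenproblem on the intrinsic graph. A minimal sketch on a hypothetical 6-node, two-cluster similarity graph (illustrative only, not data from the paper):

```python
import numpy as np

# Sketch of "direct graph embedding": an intrinsic graph W encodes desired
# similarities, and a 1-D embedding y minimizes sum_ij W_ij (y_i - y_j)^2
# subject to y^T D y = 1, i.e. the generalized eigenproblem L y = lambda D y.
# The 6-node two-cluster graph below is illustrative, not from the paper.
W = np.array([[0.0, 1.0, 1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.1, 0.0, 0.0],
              [0.0, 0.0, 0.1, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 1.0, 0.0]])
deg = W.sum(axis=1)
L = np.diag(deg) - W                         # graph Laplacian

# Solve L y = lambda D y through the symmetric form D^-1/2 L D^-1/2.
d_isqrt = np.diag(1.0 / np.sqrt(deg))
vals, vecs = np.linalg.eigh(d_isqrt @ L @ d_isqrt)
y = d_isqrt @ vecs[:, 1]                     # skip the trivial constant mode

# The weakly linked clusters {0,1,2} and {3,4,5} land on opposite sides of 0.
same_side = bool((np.sign(y[:3]) == np.sign(y[0])).all() and
                 (np.sign(y[3:]) == np.sign(y[3])).all())
```

Different choices of the intrinsic and penalty graphs recover different classical algorithms within this one formulation, which is the unifying point of the paper.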

  13. Participatory three dimensional mapping for the preparation of landslide disaster risk reduction program

    NASA Astrophysics Data System (ADS)

    Kusratmoko, Eko; Wibowo, Adi; Cholid, Sofyan; Pin, Tjiong Giok

    2017-07-01

    This paper presents the results of applying the participatory three-dimensional mapping (P3DM) method to help the people of Cibanteng village compile a landslide disaster risk reduction program. Physical factors such as high rainfall, topography, geology and land use, coupled with demographic and socio-economic conditions, make the Cibanteng region highly susceptible to landslides. During 2013-2014, two landslides occurred, causing economic losses as a result of damage to homes and farmland. Participatory mapping is one part of community-based disaster risk reduction (CBDRR), because the involvement of local communities is a prerequisite for sustainable disaster risk reduction. In this activity, participatory mapping was done in two ways: participatory two-dimensional mapping (P2DM), focused on mapping the disaster areas, and participatory three-dimensional mapping (P3DM), covering the entire territory of the village. Based on the results of P3DM, the communities' ability to understand their village environment spatially was well tested and honed, facilitating the preparation of CBDRR programs. Furthermore, the P3DM method can be applied to other disaster areas, as it becomes a medium of effective dialogue between all levels of the involved communities.

  14. Geometric mean for subspace selection.

    PubMed

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

    Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes that are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between the different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and several of its representative extensions.
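Criterion 1 is easy to state concretely: for classes modelled as Gaussians with a shared covariance S, the KL divergence between a pair of classes is 0.5 (mu_i - mu_j)^T S^-1 (mu_i - mu_j), and the criterion scores a subspace by the geometric mean of these pairwise divergences. A sketch with illustrative class means (not the paper's data):

```python
import numpy as np

# Criterion 1 made concrete: for classes modelled as Gaussians with a shared
# covariance S, KL(N(mu_i,S) || N(mu_j,S)) = 0.5 (mu_i-mu_j)^T S^-1 (mu_i-mu_j).
# The class means and covariance below are illustrative, not the paper's data.
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])
mus = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 0.5])]

def kl_shared_cov(mu_i, mu_j, S):
    d = mu_i - mu_j
    return 0.5 * d @ np.linalg.solve(S, d)

kls = [kl_shared_cov(mus[i], mus[j], S)
       for i in range(len(mus)) for j in range(len(mus)) if i != j]
geo_mean = np.exp(np.mean(np.log(kls)))   # criterion 1's objective
arith_mean = np.mean(kls)                 # what FLDA effectively maximizes

# The close pair (classes 0 and 2) drags the geometric mean down, which is
# why it penalizes nearly merged classes more strongly than the mean KL.
```

By the AM-GM inequality the geometric mean never exceeds the arithmetic mean, and it is most sensitive to the smallest pairwise divergence, which is exactly the class-separation problem the abstract describes.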

  15. Exploring the effects of dimensionality reduction in deep networks for force estimation in robotic-assisted surgery

    NASA Astrophysics Data System (ADS)

    Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia

    2016-03-01

    The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose various constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which the 3D shape recovery together with the geometry of motion are used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and prevention of overfitting. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.
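The pre-processing step described above can be sketched with a plain PCA-by-SVD reduction. The feature matrix below is synthetic, standing in for the 3D-shape and motion features used in the paper, and the 95% variance threshold is an assumed choice:

```python
import numpy as np

# Sketch of the dimensionality-reduction pre-processing step: PCA via SVD,
# keeping enough components for 95% of the variance. The feature matrix is
# synthetic, standing in for the 3D-shape/motion features used in the paper.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64))  # 500 samples, 64 features

Xc = X - X.mean(axis=0)                       # centre the features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / (s**2).sum()
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1    # components for 95%
Z = Xc @ Vt[:k].T                             # reduced network inputs

# The discarded information is bounded by the reconstruction error.
X_rec = Z @ Vt[:k] + X.mean(axis=0)
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(Xc)
```

At inference time the same mean and projection `Vt[:k]` learned from training data would be applied to each incoming feature vector before it reaches the network, which is where the run-time saving comes from.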

  16. Visualizing phylogenetic tree landscapes.

    PubMed

    Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A

    2017-02-02

    Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods used to display at once, in 2 or 3 dimensions, the relationships among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationships among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationships among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationships among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationships among the trees being compared.
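A common baseline for this kind of tree-landscape layout is classical multidimensional scaling of the tree-to-tree distance matrix, which methods like CCA then refine. A sketch on a toy, exactly 2D-embeddable distance matrix rather than real phylogenetic distances (those would come from tree comparison metrics such as Robinson-Foulds):

```python
import numpy as np

# Baseline layout for a tree landscape: classical (Torgerson) multidimensional
# scaling of a "tree-to-tree" distance matrix. The 4x4 matrix below is a toy,
# exactly 2D-embeddable example, not real phylogenetic distances.
pts = np.array([[0., 0.], [3., 0.], [0., 4.], [3., 4.]])   # hidden ground truth
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # pairwise distances

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
B = -0.5 * J @ (D ** 2) @ J                  # double-centred squared distances
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]               # largest eigenvalues first
vals, vecs = vals[order], vecs[:, order]
coords = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0.0))  # 2-D projection

# Goodness of fit: embedded distances reproduce the input distances exactly here.
D_hat = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

For real tree distances the eigenvalue spectrum of B reveals the intrinsic dimensionality the abstract refers to: large negative or many comparable eigenvalues signal that a 2D or 3D projection will distort the landscape.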

  17. A parametric model order reduction technique for poroelastic finite element models.

    PubMed

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.

  18. Euclidean sections of protein conformation space and their implications in dimensionality reduction

    PubMed Central

    Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong

    2014-01-01

    Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates for protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation (RMSD) as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption for the dimensionality-reduction approaches that aim to preserve the geometric relations between the objects: both the original space and the reduced space have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R^(3N). We formally define the protein conformation space as the quotient space of R^(3N) by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095
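The pairwise RMSD metric discussed above quotients out rigid motions; the standard way to compute it is the Kabsch algorithm (optimal superposition via SVD). A self-contained sketch with synthetic coordinates, not the tetra-peptide or β-hairpin data:

```python
import numpy as np

# Sketch of the pairwise best-fit RMSD metric: remove translations by
# centring and rotations via the Kabsch algorithm, then measure RMSD.
# The 10-atom "conformation" below is synthetic, not protein data.
def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (N, 3) conformations over rigid motions."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))   # keep a proper rotation (no reflection)
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

rng = np.random.default_rng(2)
conf = rng.normal(size=(10, 3))

# A rotated and translated copy is the same point of the quotient space:
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])
moved = conf @ Rz.T + np.array([1.0, -2.0, 0.5])
rmsd_same = kabsch_rmsd(conf, moved)     # ~0: rigid motions are quotiented out
```

Feeding these pairwise RMSD values to a method like Isomap is exactly the implicit parameterization of the quotient space that the abstract argues need not correspond to any Euclidean set.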

  19. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models.

    PubMed

    Ding, Jiarui; Condon, Anne; Shah, Sohrab P

    2018-05-21

    Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.

  20. Assessment of placental volume and vascularization at 11-14 weeks of gestation in a Taiwanese population using three-dimensional power Doppler ultrasound.

    PubMed

    Wang, Hsing-I; Yang, Ming-Jie; Wang, Peng-Hui; Wu, Yi-Cheng; Chen, Chih-Yao

    2014-12-01

    The placental volume and vascular indices are crucial in helping doctors to evaluate early fetal growth and development. Inadequate placental volume or vascularity might indicate poor fetal growth or gestational complications. This study aimed to evaluate the placental volume and vascular indices during the period of 11-14 weeks of gestation in a Taiwanese population. From June 2006 to September 2009, three-dimensional power Doppler ultrasound was performed in 222 normal pregnancies from 11-14 weeks of gestation. Power Doppler ultrasound was applied to the placenta and the placental volume was obtained by a rotational technique (VOCAL). The three-dimensional power histogram was used to assess the placental vascular indices, including the mean gray value, the vascularization index, the flow index, and the vascularization flow index. The placental vascular indices were then plotted against gestational age (GA) and placental volume. Our results showed that the linear regression equation for placental volume using gestational week as the independent variable was placental volume = 18.852 × GA - 180.89 (r = 0.481, p < 0.05). All the placental vascular indices showed a constant distribution throughout the period 11-14 weeks of gestation. A tendency for a reduction in the placental mean gray value with gestational week was observed, but without statistical significance. All the placental vascular indices estimated by three-dimensional power Doppler ultrasonography showed a constant distribution throughout gestation. Copyright © 2014. Published by Elsevier Taiwan.

  1. Internal Kinematics of the Tongue Following Volume Reduction

    PubMed Central

    SHCHERBATYY, VOLODYMYR; PERKINS, JONATHAN A.; LIU, ZI-JUN

    2008-01-01

    This study was undertaken to determine the functional consequences of tongue volume reduction on tongue internal kinematics during mastication and neuromuscular stimulation in a pig model. Six ultrasonic crystals were implanted into the tongue body in a wedge-shaped configuration that allowed recording distance changes in the bilateral length (LENG) and posterior thickness (THICK), as well as the anterior (AW), posterior dorsal (PDW), and posterior ventral (PVW) widths, in 12 Yucatan minipigs. Six animals received a uniform mid-sagittal tongue volume reduction surgery (reduction), and the other six had identical incisions without tissue removal (sham). The initial distances among the crystal pairs were recorded before and immediately after surgery to calculate the dimensional losses. Relative to the initial distances, the reduction and sham surgeries produced tongue dimensional losses of 3-66% and 1-4%, respectively. The largest deformation in sham animals during mastication was in AW, significantly larger than in LENG, PDW, PVW, and THICK (P < 0.01-0.001). In reduction animals, however, these deformational changes significantly diminished in the anterior tongue and were enhanced in the posterior tongue (P < 0.05-0.001). In both groups, neuromuscular stimulation produced deformational ranges that were 2-4 times smaller than those occurring during chewing. Furthermore, reduction animals showed significantly decreased ranges of deformation in PVW, LENG, and THICK (P < 0.05-0.01). These results indicate that tongue volume reduction alters the tongue's internal kinematics, and that the dimensional losses in the anterior tongue caused by volume reduction can be compensated by increased deformations in the posterior tongue during mastication. This compensatory effect, however, diminishes during stimulation of the hypoglossal nerve and individual tongue muscles. PMID:18484603

  2. Numerical simulation of the control of the three-dimensional transition process in boundary layers

    NASA Technical Reports Server (NTRS)

    Kral, L. D.; Fasel, H. F.

    1990-01-01

    Surface heating techniques to control the three-dimensional laminar-turbulent transition process are numerically investigated for a water boundary layer. The Navier-Stokes and energy equations are solved using a fully implicit finite difference/spectral method. The spatially evolving boundary layer is simulated. Results of both passive and active methods of control are shown for small amplitude two-dimensional and three-dimensional disturbance waves. Control is also applied to the early stages of the secondary instability process using passive or active control techniques.

  3. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    PubMed

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential to quadratic in the number of extant taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multilinear dependence in the full space. We discuss potential applications, including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  4. Cardiac Dose Reduction with Deep-Inspiratory Breath Hold Technique of Radiotherapy for Left-Sided Breast Cancer.

    PubMed

    Sripathi, Lalitha Kameshwari; Ahlawat, Parveen; Simson, David K; Khadanga, Chira Ranjan; Kamarsu, Lakshmipathi; Surana, Shital Kumar; Arasu, Kavi; Singh, Harpreet

    2017-01-01

    Different techniques of radiation therapy have been studied to reduce the cardiac dose in left breast cancer. In this prospective dosimetric study, the doses to the heart and other organs at risk (OAR) were compared between free-breathing (FB) and deep inspiratory breath hold (DIBH) techniques in intensity modulated radiotherapy (IMRT) and opposed-tangent three-dimensional conformal radiotherapy (3DCRT) plans. Fifteen patients with left-sided breast cancer underwent computed tomography simulation, and images were obtained in both FB and DIBH. Radiotherapy plans were generated with 3DCRT and IMRT techniques on the FB and DIBH images of each patient. Target coverage, conformity index, homogeneity index, and mean doses to the heart (Heart Dmean), left lung, left anterior descending artery (LAD), and right breast were compared between the four plans using the Wilcoxon signed rank test. Target coverage was adequate with both 3DCRT and IMRT plans, but IMRT plans showed better conformity and homogeneity. A statistically significant dose reduction of all OARs was found with DIBH. 3DCRT-DIBH decreased the Heart Dmean by 53.5% (7.1 vs. 3.3 Gy) and the mean dose to the LAD by 28% compared to 3DCRT-FB. IMRT further lowered the mean LAD dose by 18%. Heart Dmean was lower with 3DCRT-DIBH than with IMRT-DIBH (3.3 vs. 10.2 Gy). The mean dose to the contralateral breast was also lower with 3DCRT than with IMRT (0.32 vs. 3.35 Gy). The mean dose and V20 of the ipsilateral lung were lower with 3DCRT-DIBH than with IMRT-DIBH (13.78 vs. 18.9 Gy and 25.16 vs. 32.95%, respectively). 3DCRT-DIBH provided excellent dosimetric results in patients with left-sided breast cancer without the need for IMRT.

  5. Cardiac Dose Reduction with Deep-Inspiratory Breath Hold Technique of Radiotherapy for Left-Sided Breast Cancer

    PubMed Central

    Sripathi, Lalitha Kameshwari; Ahlawat, Parveen; Simson, David K; Khadanga, Chira Ranjan; Kamarsu, Lakshmipathi; Surana, Shital Kumar; Arasu, Kavi; Singh, Harpreet

    2017-01-01

    Introduction: Different techniques of radiation therapy have been studied to reduce the cardiac dose in left breast cancer. Aim: In this prospective dosimetric study, the doses to the heart and other organs at risk (OAR) were compared between free-breathing (FB) and deep inspiratory breath hold (DIBH) techniques in intensity modulated radiotherapy (IMRT) and opposed-tangent three-dimensional conformal radiotherapy (3DCRT) plans. Materials and Methods: Fifteen patients with left-sided breast cancer underwent computed tomography simulation, and images were obtained in both FB and DIBH. Radiotherapy plans were generated with 3DCRT and IMRT techniques on the FB and DIBH images of each patient. Target coverage, conformity index, homogeneity index, and mean doses to the heart (Heart Dmean), left lung, left anterior descending artery (LAD), and right breast were compared between the four plans using the Wilcoxon signed rank test. Results: Target coverage was adequate with both 3DCRT and IMRT plans, but IMRT plans showed better conformity and homogeneity. A statistically significant dose reduction of all OARs was found with DIBH. 3DCRT-DIBH decreased the Heart Dmean by 53.5% (7.1 vs. 3.3 Gy) and the mean dose to the LAD by 28% compared to 3DCRT-FB. IMRT further lowered the mean LAD dose by 18%. Heart Dmean was lower with 3DCRT-DIBH than with IMRT-DIBH (3.3 vs. 10.2 Gy). The mean dose to the contralateral breast was also lower with 3DCRT than with IMRT (0.32 vs. 3.35 Gy). The mean dose and V20 of the ipsilateral lung were lower with 3DCRT-DIBH than with IMRT-DIBH (13.78 vs. 18.9 Gy and 25.16 vs. 32.95%, respectively). Conclusions: 3DCRT-DIBH provided excellent dosimetric results in patients with left-sided breast cancer without the need for IMRT. PMID:28974856

  6. Comparison of cardiac and lung doses for breast cancer patients with free breathing and deep inspiration breath hold technique in 3 dimensional conformal radiotherapy - a dosimetric study

    NASA Astrophysics Data System (ADS)

    Raj Mani, Karthick; Poudel, Suresh; Maria Das, K. J.

    2017-12-01

    Purpose: To investigate the cardio-pulmonary doses between the Deep Inspiration Breath Hold (DIBH) and Free Breathing (FB) techniques in left-sided breast irradiation. Materials & Methods: DIBH CT and FB CT were acquired for 10 left-sided breast patients who underwent whole breast irradiation with or without nodal irradiation. A three-field single-isocenter technique (two tangential conformal fields plus a third field) was used for node-positive patients, whereas only two tangential fields were used for node-negative patients. All the critical structures, such as lungs, heart, esophagus, and thyroid, were delineated in both the DIBH and FB scans. The DIBH and FB scans were fused at the DICOM origin, as they were acquired with the same DICOM coordinates. Plans were created on the DIBH scan for a dose of 50 Gy in 25 fractions. Doses to critical structures were recorded from the dose-volume histograms of both the DIBH and FB data sets for evaluation. Results: The average mean heart dose was significantly lower with DIBH than with FB (6.97 vs. 13.18 Gy, p = 0.0063), a relative reduction of 47.12%. Heart V5 was reduced by 14.70% (34.42% vs. 19.72%, p = 0.0080), V10 by 13.83% (27.79% vs. 13.96%, p = 0.0073), V20 by 13.19% (24.54% vs. 11.35%, p = 0.0069), and V30 by 12.38% (22.27% vs. 9.89%, p = 0.0073), all significantly with DIBH as compared to FB. The average mean left lung dose was reduced marginally, by 1.43 Gy (13.73 vs. 12.30 Gy, p = 0.4599), but not significantly with DIBH as compared to FB. Other left lung parameters (V5, V10, V20, and V30) showed marginal decreases in the DIBH plans compared to the FB plans. Conclusion: DIBH shows a substantial reduction of cardiac doses but a slight and insignificant reduction of pulmonary doses as compared with the FB technique. Using the simple DIBH technique, cardiac morbidity can be effectively reduced while radiation-induced lung pneumonitis is unlikely to increase.

  7. A Review on Dimension Reduction

    PubMed Central

    Ma, Yanyuan; Zhu, Liping

    2013-01-01

    Summarizing the effect of many covariates through a few linear combinations is an effective way of reducing covariate dimension and is the backbone of (sufficient) dimension reduction. Because the replacement of high-dimensional covariates by low-dimensional linear combinations is performed with minimal assumptions on the specific regression form, it enjoys attractive advantages, as well as encountering unique challenges, in comparison with the variable selection approach. We review the current literature on dimension reduction with an emphasis on the two most popular models, in which dimension reduction affects the conditional distribution and the conditional mean, respectively. We discuss various estimation and inference procedures at different levels of detail, with the intention of focusing on the underlying ideas rather than on technicalities. We also discuss some unsolved problems in this area for potential future research. PMID:23794782
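    As a concrete illustration of summarizing covariates through a few linear combinations, the sketch below implements sliced inverse regression (SIR), one classical sufficient dimension reduction estimator; the function name and toy model are illustrative choices of ours, not taken from the review.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Sliced Inverse Regression: estimate directions spanning the
    dimension-reduction subspace from slice means of the whitened X."""
    n, p = X.shape
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # Sigma^(-1/2)
    Z = (X - X.mean(axis=0)) @ inv_sqrt                # whitened predictors
    # Slice on the sorted response and average Z within each slice
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    w, v = np.linalg.eigh(M)
    dirs = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

# Toy model: y depends on X only through the single index b'X
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
b = np.array([1.0, 2.0, 0.0, 0.0, 0.0]) / np.sqrt(5.0)
y = (X @ b) ** 3 + 0.1 * rng.normal(size=2000)
est = sir_directions(X, y, n_slices=10)[:, 0]
```

    With a monotone link as in this toy model, the estimated direction aligns closely with the true index direction `b`.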

  8. A consensus embedding approach for segmentation of high resolution in vivo prostate magnetic resonance imagery

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Rosen, Mark; Madabhushi, Anant

    2008-03-01

    Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images contribute to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation method that employs manifold learning via consensus schemes for detection of cancerous regions from high resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness, our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes. 
    Quantitative evaluation of 18 1.5 T prostate MR datasets against corresponding histology obtained from the multi-site ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method successfully detects suspicious regions in the prostate.
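    A loose numpy sketch of the general consensus idea follows: several weak low-dimensional projections (here plain PCA on random feature subsets, standing in for the paper's LLE and graph embeddings) are combined by averaging pairwise object distances, and the consensus distances are re-embedded. This is our simplification for illustration, not the authors' algorithm.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed a pairwise-distance matrix via classical (Torgerson) MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def consensus_embedding(X, n_views=10, subset=0.5, k=2, seed=0):
    """Average object adjacencies (here: pairwise distances) over several
    weak 2-D projections, then re-embed the consensus distances."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    D = np.zeros((n, n))
    for _ in range(n_views):
        cols = rng.choice(p, size=max(2, int(subset * p)), replace=False)
        Xi = X[:, cols] - X[:, cols].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        Y = Xi @ Vt[:2].T                        # 2-D PCA of this feature subset
        D += np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    return classical_mds(D / n_views, k)

rng = np.random.default_rng(1)
# Two well-separated groups in a 20-dimensional feature space
A = rng.normal(0.0, 1.0, (30, 20))
B = rng.normal(4.0, 1.0, (30, 20))
E = consensus_embedding(np.vstack([A, B]))
```

    Averaging distances over views stabilizes the embedding against any single poor projection, which is the ensemble intuition the abstract appeals to.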

  9. Dynamic Data-Driven Reduced-Order Models of Macroscale Quantities for the Prediction of Equilibrium System State for Multiphase Porous Medium Systems

    NASA Astrophysics Data System (ADS)

    Talbot, C.; McClure, J. E.; Armstrong, R. T.; Mostaghimi, P.; Hu, Y.; Miller, C. T.

    2017-12-01

    Microscale simulation of multiphase flow in realistic, highly-resolved porous medium systems of a sufficient size to support macroscale evaluation is computationally demanding. Such approaches can, however, reveal the dynamic, steady, and equilibrium states of a system. We evaluate methods to utilize dynamic data to reduce the cost associated with modeling a steady or equilibrium state. We construct data-driven models using extensions to dynamic mode decomposition (DMD) and its connections to Koopman Operator Theory. DMD and its variants comprise a class of equation-free methods for dimensionality reduction of time-dependent nonlinear dynamical systems. DMD furnishes an explicit reduced representation of system states in terms of spatiotemporally varying modes with time-dependent oscillation frequencies and amplitudes. We use DMD to predict the steady and equilibrium macroscale state of a realistic two-fluid porous medium system imaged using micro-computed tomography (µCT) and simulated using the lattice Boltzmann method (LBM). We apply Koopman DMD to direct numerical simulation data resulting from simulations of multiphase fluid flow through a 1440x1440x4320 section of a full 1600x1600x5280 realization of imaged sandstone. We determine a representative set of system observables via dimensionality reduction techniques including linear and kernel principal component analysis. We demonstrate how this subset of macroscale quantities furnishes a representation of the time-evolution of the system in terms of dynamic modes, and discuss the selection of a subset of DMD modes yielding the optimal reduced model, as well as the time-dependence of the error in the predicted equilibrium value of each macroscale quantity. 
Finally, we describe how the above procedure, modified to incorporate methods from compressed sensing and random projection techniques, may be used in an online fashion to facilitate adaptive time-stepping and parsimonious storage of system states over time.
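    The core DMD step described above (a linear propagator fitted in a reduced SVD basis, yielding modes with fixed decay rates and oscillation frequencies) can be sketched in a few lines. This is a generic exact-DMD sketch on synthetic snapshots, not the authors' code, grid, or LBM dataset.

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition of a snapshot matrix whose
    columns are successive states x_0 ... x_m; r is the SVD truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    # Reduced linear propagator fitted to x_{k+1} ~= A x_k
    Atilde = U.conj().T @ X2 @ Vt.conj().T / s
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vt.conj().T / s @ W             # exact DMD modes
    return eigvals, modes

# Synthetic snapshots: a decaying rotating pattern plus a separately decaying mode
t = np.arange(50)
x = np.linspace(0.0, np.pi, 64)[:, None]
decay, rot = 0.95, 0.3
X = (decay ** t) * (np.cos(rot * t) * np.sin(x) + np.sin(rot * t) * np.sin(2 * x)) \
    + (0.9 ** t) * np.sin(3 * x)
eigvals, modes = dmd(X, r=3)
# The eigenvalue moduli recover the decay rates (0.9 and 0.95) and the
# complex pair's argument recovers the oscillation frequency (0.3)
```

    The predicted equilibrium state then follows from the modes whose eigenvalues have modulus closest to one.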

  10. Three New (2+1)-dimensional Integrable Systems and Some Related Darboux Transformations

    NASA Astrophysics Data System (ADS)

    Guo, Xiu-Rong

    2016-06-01

    We introduce two operator commutators by using different-degree loop algebras of the Lie algebra A1, then under the framework of zero curvature equations we generate two (2+1)-dimensional integrable hierarchies, including the (2+1)-dimensional shallow water wave (SWW) hierarchy and the (2+1)-dimensional Kaup-Newell (KN) hierarchy. Through reduction of the (2+1)-dimensional hierarchies, we get a (2+1)-dimensional SWW equation and a (2+1)-dimensional KN equation. Furthermore, we obtain two Darboux transformations of the (2+1)-dimensional SWW equation. Similarly, the Darboux transformations of the (2+1)-dimensional KN equation could be deduced. Finally, with the help of the spatial spectral matrix of SWW hierarchy, we generate a (2+1) heat equation and a (2+1) nonlinear generalized SWW system containing inverse operators with respect to the variables x and y by using a reduction spectral problem from the self-dual Yang-Mills equations. Supported by the National Natural Science Foundation of China under Grant No. 11371361, the Shandong Provincial Natural Science Foundation of China under Grant Nos. ZR2012AQ011, ZR2013AL016, ZR2015EM042, National Social Science Foundation of China under Grant No. 13BJY026, the Development of Science and Technology Project under Grant No. 2015NS1048 and A Project of Shandong Province Higher Educational Science and Technology Program under Grant No. J14LI58

  11. Use of dynamic 3-dimensional transvaginal and transrectal ultrasonography to assess posterior pelvic floor dysfunction related to obstructed defecation.

    PubMed

    Murad-Regadas, Sthela M; Regadas Filho, Francisco Sergio Pinheiro; Regadas, Francisco Sergio Pinheiro; Rodrigues, Lusmar Veras; de J R Pereira, Jacyara; da S Fernandes, Graziela Olivia; Dealcanfreitas, Iris Daiana; Mendonca Filho, Jose Jader

    2014-02-01

    New ultrasound techniques may complement current diagnostic tools, and combined techniques may help to overcome the limitations of individual techniques for the diagnosis of anorectal dysfunction. A high degree of agreement has been demonstrated between echodefecography (dynamic 3-dimensional anorectal ultrasonography) and conventional defecography. Our aim was to evaluate the ability of a combined approach consisting of dynamic 3-dimensional transvaginal and transrectal ultrasonography by using a 3-dimensional biplane endoprobe to assess posterior pelvic floor dysfunctions related to obstructed defecation syndrome in comparison with echodefecography. This was a prospective, observational cohort study conducted at a tertiary-care hospital. Consecutive female patients with symptoms of obstructed defecation were eligible. Each patient underwent assessment of posterior pelvic floor dysfunctions with a combination of dynamic 3-dimensional transvaginal and transrectal ultrasonography by using a biplane transducer and with echodefecography. Kappa (κ) was calculated as an index of agreement between the techniques. Diagnostic accuracy (sensitivity, specificity, and positive and negative predictive values) of the combined technique in detection of posterior dysfunctions was assessed with echodefecography as the standard for comparison. A total of 33 women were evaluated. Substantial agreement was observed regarding normal relaxation and anismus. In detecting the absence or presence of rectocele, the 2 methods agreed in all cases. Near-perfect agreement was found for rectocele grade I, grade II, and grade III. Perfect agreement was found for entero/sigmoidocele, with near-perfect agreement for rectal intussusception. Using echodefecography as the standard for comparison, we found high diagnostic accuracy of transvaginal and transrectal ultrasonography in the detection of posterior dysfunctions. 
This combined technique should be compared with other dynamic techniques and validated with conventional defecography. Dynamic 3-dimensional transvaginal and transrectal ultrasonography is a simple and fast ultrasound technique that shows strong agreement with echodefecography and may be used as an alternative method to assess patients with obstructed defecation syndrome.

  12. Hierarchical classification in high dimensional numerous class cases

    NASA Technical Reports Server (NTRS)

    Kim, Byungyong; Landgrebe, D. A.

    1990-01-01

    As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top down approach, a bottom up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent upon the mean difference between two classes. An autocorrelation technique is dependent upon the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived.
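    The canonical and extended canonical feature extraction mentioned here depends mainly on between-class mean differences; as a generic illustration of that dependence (not the paper's implementation), a two-class canonical (Fisher) direction can be computed as follows.

```python
import numpy as np

def canonical_direction(X1, X2):
    """Two-class canonical (Fisher) discriminant direction
    w = Sw^(-1) (m1 - m2), driven by the between-class mean difference."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0, 0.0], 1.0, (200, 3))
X2 = rng.normal([2.0, 0.0, 0.0], 1.0, (200, 3))
w = canonical_direction(X1, X2)
# The direction concentrates on the axis along which the class means differ
```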

  13. The development of laser speckle velocimetry for the study of vortical flows

    NASA Technical Reports Server (NTRS)

    Krothapalli, A.

    1991-01-01

    A new experimental technique, commonly known as PIDV (particle image displacement velocimetry), was developed to measure an instantaneous two-dimensional velocity field in a selected plane of the flow. This technique was successfully applied to the study of several problems: (1) unsteady flows with large-scale vortical structures; (2) the instantaneous two-dimensional flow in the transition region of a rectangular air jet; and (3) the instantaneous flow over a circular bump in a transonic flow. In several other experiments, PIDV is routinely used as a non-intrusive measurement technique to obtain instantaneous two-dimensional velocity fields.

  14. Reduction of Large Dynamical Systems by Minimization of Evolution Rate

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1999-01-01

    Reduction of a large system of equations to a lower-dimensional system of similar dynamics is investigated. For dynamical systems with disparate timescales, a criterion for determining redundant dimensions and a general reduction method based on the minimization of evolution rate are proposed.

  15. Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.

    PubMed

    Bloom, David J; Lee, Soo-Yeun

    2016-09-01

    Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
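    For context, under the standard equal-variance Thurstonian model (a textbook assumption, not a detail taken from this study), the proportion of correct responses for the 3-AFC method relates to d' through an integral that can be evaluated and inverted numerically:

```python
import numpy as np
from math import erf, sqrt, pi

def pc_3afc(d, n=4001):
    """Proportion correct for 3-AFC under an equal-variance Thurstonian
    model: P(correct) = integral of phi(z - d') * Phi(z)^2 dz."""
    z = np.linspace(-8.0, 8.0, n)
    phi = np.exp(-0.5 * (z - d) ** 2) / sqrt(2.0 * pi)             # N(d', 1) density
    Phi = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])  # N(0, 1) CDF
    f = phi * Phi ** 2
    return float(np.sum((f[1:] + f[:-1]) / 2.0) * (z[1] - z[0]))   # trapezoid rule

def dprime_3afc(pc, lo=0.0, hi=10.0):
    """Recover d' from an observed proportion correct by bisection
    (pc_3afc is monotone increasing in d')."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pc_3afc(mid) < pc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(pc_3afc(0.0), 3))   # chance performance for 3-AFC: 0.333
```

    The triangle method uses a different decision rule and hence a different pc-to-d' mapping, which is why identical d' values can yield different proportions correct across the two methods.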

  16. Overview of Three-Dimensional Atomic-Resolution Holography and Imaging Techniques: Recent Advances in Local-Structure Science

    NASA Astrophysics Data System (ADS)

    Daimon, Hiroshi

    2018-06-01

    Local three-dimensional (3D) atomic arrangements without periodicity could not be studied until recently. Several holographic and related techniques have since been developed to reveal the 3D atomic arrangement around specific atoms with no translational symmetry. This review gives an overview of these new local 3D atomic imaging techniques.

  17. Influence of the Numerical Scheme on the Solution Quality of the SWE for Tsunami Numerical Codes: The Tohoku-Oki 2011 Example.

    NASA Astrophysics Data System (ADS)

    Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.

    2015-12-01

    Numerical tools have become very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend strongly on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention in the effort to provide robust and accurate solutions. In this study we compare the efficiency of two finite volume numerical codes with second-order discretization, implementing different methods of solving the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) method and the MOOD (Multi-dimensional Optimal Order Detection) method, which optimizes the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria: the limiting procedure is performed before the solution is updated to the next time step, which can lead to unnecessary accuracy reduction. In contrast, the newer MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities: a candidate solution is computed, and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, which provides a better approximation with sharper shocks and less numerical diffusion. For code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may strongly affect the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
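    As a minimal, generic illustration of the a priori limiting that MUSCL-type schemes perform (a textbook sketch for 1-D linear advection, unrelated to the actual codes compared in this study), a minmod-limited upwind update looks like:

```python
import numpy as np

def tv(u):
    """Total variation of u on a periodic grid."""
    return np.sum(np.abs(np.roll(u, -1) - u))

def muscl_advect(u, c=0.5, steps=50):
    """Flux-limited upwind (MUSCL-type) update for u_t + u_x = 0 on a
    periodic grid; the minmod limiter is applied a priori, before the
    solution is advanced to the next time step. c is the CFL number."""
    u = u.astype(float)
    for _ in range(steps):
        du = np.roll(u, -1) - u                     # u_{i+1} - u_i
        den = np.where(np.abs(du) > 1e-12, du, 1.0)
        r = np.roll(du, 1) / den                    # upwind smoothness ratio
        phi = np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter
        F = u + 0.5 * (1.0 - c) * phi * du          # limited flux at i+1/2
        u = u - c * (F - np.roll(F, 1))             # conservative update
    return u

# Advect a square wave: the limiter keeps the profile oscillation-free
u0 = np.where((np.arange(200) > 50) & (np.arange(200) < 100), 1.0, 0.0)
u1 = muscl_advect(u0, c=0.5, steps=50)
```

    The scheme is conservative by construction, and with the minmod limiter the total variation of the solution does not increase, which is exactly the oscillation control the abstract contrasts with MOOD's a posteriori detection.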

  18. Milch versus Stimson technique for nonsedated reduction of anterior shoulder dislocation: a prospective randomized trial and analysis of factors affecting success.

    PubMed

    Amar, Eyal; Maman, Eran; Khashan, Morsi; Kauffman, Ehud; Rath, Ehud; Chechik, Ofir

    2012-11-01

    The shoulder is regarded as the most commonly dislocated major joint in the human body. Most dislocations can be reduced by simple methods in the emergency department, whereas others require more complicated approaches. We compared the efficacy, safety, pain, and duration of reduction between the Milch technique and the Stimson technique in treating dislocations, and identified factors that affected the success rate. All enrolled patients were randomized to either the Milch technique or the Stimson technique for dislocated shoulder reduction. The study cohort consisted of 60 patients (mean age, 43.9 years; age range, 18-88 years) who were randomly assigned to treatment by either the Stimson technique (n = 25) or the Milch technique (n = 35). Oral analgesics were available for both groups. The 2 groups were similar in demographics, patient characteristics, and pain levels. The first reduction attempt in the Milch and Stimson groups was successful in 82.8% and 28% of cases, respectively (P < .001), and the mean reduction time was 4.68 and 8.84 minutes, respectively (P = .007). The success rate was found to be affected by the reduction technique, the interval between dislocation occurrence and the first reduction attempt, and the pain level on admittance. The success rate and time to achieve reduction without sedation were superior for the Milch technique compared with the Stimson technique. Early implementation of reduction measures and low pain levels at presentation favor successful reduction, which, in combination with oral pain medication, constitutes an acceptable and reasonable management alternative to reduction with sedation. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  19. Recent Developments in Gun Operating Techniques at the NASA Ames Ballistic Ranges

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Miller, R. J.

    1996-01-01

    This paper describes recent developments in gun operating techniques at the Ames ballistic range complex. This range complex has been in operation since the early 1960s. Behavior of sabots during separation and projectile-target impact phenomena have long been observed by means of short-duration flash X-rays: new versions allow operation in the lower-energy ("soft") X-ray range and have been found to be more effective than the earlier designs. The dynamics of sabot separation is investigated in some depth from X-ray photographs of sabots launched in the Ames 1.0 in and 1.5 in guns; the sabot separation dynamics appears to be in reasonably good agreement with standard aerodynamic theory. Certain sabot packages appear to suffer no erosion or plastic deformation on traversing the gun barrel, contrary to what would be expected. Gun erosion data from the Ames 0.5 in, 1.0 in, and 1.5 in guns is examined in detail and can be correlated with a particular non-dimensionalized powder mass parameter. The gun erosion increases very rapidly as this parameter is increased. Representative shapes of eroded gun barrels are given. Guided by a computational fluid dynamics (CFD) code, the operating conditions of the Ames 0.5 in and 1.5 in guns were modified. These changes involved: (1) reduction in the piston mass, powder mass and hydrogen fill pressure and (2) reduction in pump tube volume, while maintaining hydrogen mass. These changes resulted in muzzle velocity increases of 0.5-0.8 km/sec, achieved simultaneously with 30-50 percent reductions in gun erosion.

  20. Nonlinear dimensionality reduction of data lying on the multicluster manifold.

    PubMed

    Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben

    2008-08-01

    A new method, which is called decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on the synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically analyzed and experimentally demonstrated. Related strategies for automatic parameter selection are also examined.
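    A loose numpy sketch of the decompose-then-compose idea follows; it substitutes per-cluster PCA for the paper's intracluster embeddings and a classical MDS of cluster centroids for its composition procedure, so it illustrates the structure of the approach rather than the authors' algorithm.

```python
import numpy as np

def decompose_compose(X, labels, k=2):
    """Sketch of a decompose-then-compose embedding: each cluster is
    embedded independently with PCA (decomposition), and the cluster
    embeddings are placed at positions given by a classical MDS of the
    high-dimensional cluster centroids (composition)."""
    clusters = np.unique(labels)
    # Composition: classical MDS of the cluster centroids
    C = np.array([X[labels == c].mean(axis=0) for c in clusters])
    D = np.linalg.norm(C[:, None] - C[None, :], axis=-1)
    m = len(clusters)
    J = np.eye(m) - np.ones((m, m)) / m
    B = -0.5 * J @ (D ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    centers = v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
    # Decomposition: an independent local PCA embedding per cluster
    Y = np.zeros((X.shape[0], k))
    for c, ctr in zip(clusters, centers):
        Xi = X[labels == c]
        Xi = Xi - Xi.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xi, full_matrices=False)
        Y[labels == c] = Xi @ Vt[:k].T + ctr
    return Y

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (40, 10)), rng.normal(5.0, 1.0, (40, 10))])
labels = np.array([0] * 40 + [1] * 40)
Y = decompose_compose(X, labels)
```

    As in the D-C method, the local step preserves within-cluster shape while the global step keeps clusters in plausible relative positions.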

  1. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
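    A minimal two-dimensional numpy sketch of the Gaussian-copula flavour of this idea: Kendall's tau is privatized with Laplace noise (sensitivity roughly 4/n for changing one record), mapped to a Pearson correlation, and synthetic points are drawn from the copula and pushed through the empirical marginals. A full DPCopula would also privatize the marginals; names and noise calibration here are illustrative only.

```python
import math
import numpy as np

def kendall_tau(x, y):
    # O(n^2) concordant-minus-discordant count; fine for a sketch.
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

def dp_copula_sample(data, eps, n_out, rng):
    # Privatize tau, convert to a Gaussian-copula correlation via
    # rho = sin(pi * tau / 2), sample, then invert empirical marginals.
    n, d = data.shape
    assert d == 2, "sketch is two-dimensional only"
    tau = kendall_tau(data[:, 0], data[:, 1])
    tau = np.clip(tau + rng.laplace(scale=(4.0 / n) / eps), -0.999, 0.999)
    rho = np.clip(np.sin(np.pi * tau / 2.0), -0.99, 0.99)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_out)
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))  # Phi(z)
    return np.column_stack(
        [np.quantile(data[:, j], u[:, j]) for j in range(2)])
```

    The copula separates dependence (the privatized tau) from the one-dimensional marginals, which is what lets the method scale past histogram-based approaches.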

  2. Clustering and Dimensionality Reduction to Discover Interesting Patterns in Binary Data

    NASA Astrophysics Data System (ADS)

    Palumbo, Francesco; D'Enza, Alfonso Iodice

    Attention to binary data coding has increased considerably over the last decade, for several reasons. Binary data arise in many fields of application, such as market basket analysis, DNA microarray data, image mining, text mining and web-clickstream mining. The paper illustrates two different approaches exploiting a profitable combination of clustering and dimensionality reduction for the identification of non-trivial association structures in binary data. An application in the Association Rules framework supports the theory with empirical evidence.
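    The combination described, reduce first, then cluster, can be sketched in a few lines of numpy: a truncated SVD of the centered 0/1 matrix (an MCA-flavoured reduction) followed by plain Lloyd k-means. This is a generic stand-in, not the paper's specific pair of approaches.

```python
import numpy as np

def kmeans(Z, k, iters=50, seed=0):
    # Plain Lloyd iterations; enough for a sketch.
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        d = ((Z[:, None] - centers[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        centers = np.array([Z[lab == j].mean(0) if np.any(lab == j)
                            else centers[j] for j in range(k)])
    return lab

def cluster_binary(B, k=2, dim=2):
    # Dimensionality reduction of the centered binary matrix, then
    # clustering in the reduced space.
    Bc = B - B.mean(axis=0)
    U, s, _ = np.linalg.svd(Bc, full_matrices=False)
    Z = U[:, :dim] * s[:dim]
    return kmeans(Z, k)
```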

  3. Argyres–Douglas theories, S 1 reductions, and topological symmetries

    DOE PAGES

    Buican, Matthew; Nishinaka, Takahiro

    2015-12-21

    In a recent paper, we proposed closed-form expressions for the superconformal indices of the (A(1), A(2n-3)) and (A(1), D-2n) Argyres-Douglas (AD) superconformal field theories (SCFTs) in the Schur limit. Following up on our results, we turn our attention to the small S-1 regime of these indices. As expected on general grounds, our study reproduces the S-3 partition functions of the resulting dimensionally reduced theories. However, we show that in all cases, with the exception of the reduction of the (A(1), D-4) SCFT, certain imaginary partners of real mass terms are turned on in the corresponding mirror theories. We interpret these deformations as R symmetry mixing with the topological symmetries of the direct S-1 reductions. Moreover, we argue that these shifts occur in any of our theories whose four-dimensional N = 2 superconformal U(1)(R) symmetry does not obey an SU(2) quantization condition. We then use our R symmetry map to find the four-dimensional ancestors of certain three-dimensional operators. Somewhat surprisingly, this picture turns out to imply that the scaling dimensions of many of the chiral operators of the four-dimensional theory are encoded in accidental symmetries of the three-dimensional theory. We also comment on the implications of our work on the space of general N = 2 SCFTs.

  5. Features in chemical kinetics. I. Signatures of self-emerging dimensional reduction from a general format of the evolution law

    NASA Astrophysics Data System (ADS)

    Nicolini, Paolo; Frezzato, Diego

    2013-06-01

    Simplification of chemical kinetics description through dimensional reduction is particularly important to achieve an accurate numerical treatment of complex reacting systems, especially when stiff kinetics are considered and a comprehensive picture of the evolving system is required. To this aim, several tools have been proposed in the past decades, such as sensitivity analysis, lumping approaches, and exploitation of time-scale separation. In addition, there are methods based on the existence of so-called slow manifolds, which are hyper-surfaces of lower dimension than that of the whole phase-space, in whose neighborhood the slow evolution occurs after an initial fast transient. On the other hand, all such tools contain to some extent a degree of subjectivity which seems to be irremovable. With reference to macroscopic and spatially homogeneous reacting systems under isothermal conditions, in this work we adopt a phenomenological approach to let the dimensional reduction emerge by itself from the mathematical structure of the evolution law. By transforming the original system of polynomial differential equations, which describes the chemical evolution, into a universal quadratic format, and making a direct inspection of the high-order time-derivatives of the new dynamic variables, we formulate a conjecture which leads to the concept of an "attractiveness" region in the phase-space where a well-defined state-dependent rate function ω has the simple evolution ω̇ = -ω² along any trajectory up to the stationary state. This constitutes, by itself, a drastic dimensional reduction from a system of N-dimensional equations (N being the number of chemical species) to a one-dimensional and universal evolution law for such a characteristic rate. Step-by-step numerical inspections on model kinetic schemes are presented. In the companion paper [P. Nicolini and D. Frezzato, J. Chem. Phys. 138, 234102 (2013), 10.1063/1.4809593] this outcome is naturally related to the appearance (and hence, to the definition) of the slow manifolds.
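    As a quick check on the quoted one-dimensional law, the ODE integrates in closed form by a standard separation of variables (this step is elementary calculus, not taken from the paper):

```latex
\dot{\omega} = -\omega^{2}
\quad\Longrightarrow\quad
\int_{\omega_{0}}^{\omega(t)} -\frac{d\omega'}{\omega'^{2}} = \int_{0}^{t} dt'
\quad\Longrightarrow\quad
\omega(t) = \frac{\omega_{0}}{1 + \omega_{0}\, t}, \qquad \omega_{0} = \omega(0) > 0,
```

    so the characteristic rate decays hyperbolically toward the stationary state, independently of the N-dimensional kinetics that generated it.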

  6. Automating X-ray Fluorescence Analysis for Rapid Astrobiology Surveys.

    PubMed

    Thompson, David R; Flannery, David T; Lanka, Ravi; Allwood, Abigail C; Bue, Brian D; Clark, Benton C; Elam, W Timothy; Estlin, Tara A; Hodyss, Robert P; Hurowitz, Joel A; Liu, Yang; Wade, Lawrence A

    2015-11-01

    A new generation of planetary rover instruments, such as PIXL (Planetary Instrument for X-ray Lithochemistry) and SHERLOC (Scanning Habitable Environments with Raman & Luminescence for Organics and Chemicals) selected for the Mars 2020 mission rover payload, aim to map mineralogical and elemental composition in situ at microscopic scales. These instruments will produce large spectral cubes with thousands of channels acquired over thousands of spatial locations, a large potential science yield limited mainly by the time required to acquire a measurement after placement. A secondary bottleneck also faces mission planners after downlink; analysts must interpret the complex data products quickly to inform tactical planning for the next command cycle. This study demonstrates operational approaches to overcome these bottlenecks by specialized early-stage science data processing. Onboard, simple real-time systems can perform a basic compositional assessment, recognizing specific features of interest and optimizing sensor integration time to characterize anomalies. On the ground, statistically motivated visualization can make raw uncalibrated data products more interpretable for tactical decision making. Techniques such as manifold dimensionality reduction can help operators comprehend large databases at a glance, identifying trends and anomalies in the data. These onboard and ground-side analyses can complement a quantitative interpretation. We evaluate system performance for the case study of PIXL, an X-ray fluorescence spectrometer. Experiments on three representative samples demonstrate improved methods for onboard and ground-side automation and illustrate new astrobiological science capabilities unavailable in previous planetary instruments. Key Words: Dimensionality reduction; Planetary science; Visualization.

  7. Computed tomography image-guided surgery in complex acetabular fractures.

    PubMed

    Brown, G A; Willis, M C; Firoozbakhsh, K; Barmada, A; Tessman, C L; Montgomery, A

    2000-01-01

    Eleven complex acetabular fractures in 10 patients were treated by open reduction with internal fixation incorporating computed tomography image-guided software intraoperatively. Each of the implants placed under image guidance was found to be accurate and without penetration of the pelvis or joint space. The setup time for the system was minimal. Accuracy in the range of 1 mm was found when registration was precise (eight cases) and was in the range of 3.5 mm when registration was only approximate (three cases). Added benefits included reduced intraoperative fluoroscopic time, less need for extensive dissection, and obviation of additional surgical approaches in some cases. Compared with a series of similar fractures treated before this image-guided series, the reduction in operative time was significant. For patients with complex combined anterior and posterior fractures, the average operation times with and without application of the three-dimensional imaging technique were, respectively, 5 hours 15 minutes and 6 hours 14 minutes, a 16% reduction in operative time for those who had surgery using image guidance. In the single-column fracture group, the operation time with three-dimensional imaging was 2 hours 58 minutes versus 3 hours 42 minutes for traditional surgery, a 20% reduction in operative time. Intraoperative computed tomography-guided imagery was found to be an accurate and suitable method for use in the operative treatment of complex acetabular fractures with substantial displacement.

  8. Making ultrafine and highly-dispersive multimetallic nanoparticles in three-dimensional graphene with supercritical fluid as excellent electrocatalyst for oxygen reduction reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yazhou; Yen, Clive H.; Hu, Yun Hang

    2016-01-01

    Three-dimensional (3D) graphene has emerged as an advanced support for designing porous electrode materials due to its high specific surface area, large pore volume, and excellent electronic properties. However, the electrochemical properties of reported porous electrode materials still need further improvement. The current challenge is how to deposit desirable nanoparticles (NPs) with controllable structure, loading and composition in 3D graphene while maintaining high dispersion. Herein, we demonstrate a modified supercritical fluid (SCF) technique to address this issue by controlling the SCF system. Using this method, a series of Pt-based/3D graphene materials with ultrafine, highly dispersive multimetallic NPs of controllable composition were successfully synthesized. Specifically, the resultant Pt40Fe60/3D graphene showed a significant enhancement in electrocatalytic performance for the oxygen reduction reaction (ORR), including a factor of 14.2 enhancement in mass activity (1.70 A mgPt^-1), a factor of 11.9 enhancement in specific activity (1.55 mA cm^-2), and higher durability compared with the Pt/C catalyst. After careful comparison, the Pt40Fe60/3D graphene catalyst shows higher ORR activity than most of the reported similar 3D graphene-based catalysts. The successful synthesis of such attractive materials by this method also paves the way to developing 3D graphene for widespread applications.

  9. A novel phase assignment protocol and driving system for a high-density focused ultrasound array.

    PubMed

    Caulfield, R Erich; Yin, Xiangtao; Juste, Jose; Hynynen, Kullervo

    2007-04-01

    Currently, most phased-array systems intended for therapy are one-dimensional (1-D) and use between 5 and 200 elements, with a few two-dimensional (2-D) systems using several hundred elements. The move toward lambda/2 interelement spacing, which provides complete 3-D beam steering, would require a large number of closely spaced elements (0.15 mm to 3 mm). A solution to the resulting problem of cost and cable assembly size, which this study examines, is to quantize the phases available at the array input. By connecting elements with similar phases to a single wire, a significant reduction in the number of incoming lines can be achieved while maintaining focusing and beam steering capability. This study has explored the feasibility of such an approach using computer simulations and experiments with a test circuit driving a 100-element linear array. Simulation results demonstrated that adequate focusing can be obtained with only four phase signals without large increases in the grating lobes or the dimensions of the focus. Experiments showed that the method can be implemented in practice, and adequate focusing can be achieved with four phase signals with a reduction of 20% in the peak pressure amplitude squared when compared with the infinite-phase resolution case. Results indicate that the use of this technique would make it possible to drive more than 10,000 elements with 33 input lines. The implementation of this method could have a large impact on ultrasound therapy and diagnostic devices.
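    The effect of phase quantization on focal gain is easy to sanity-check numerically. The sketch below uses a hypothetical geometry (a 100-element linear array at half-wavelength pitch focused on axis at 40 wavelengths, not the paper's hardware) and compares the coherent focal intensity for ideal versus four-level phases.

```python
import numpy as np

# Hypothetical geometry: 100-element linear array, lambda/2 pitch,
# focus on the array axis at 40 wavelengths depth.
wavelength = 1.0
k = 2 * np.pi / wavelength
x = (np.arange(100) - 49.5) * wavelength / 2
r = np.hypot(x, 40.0 * wavelength)   # element-to-focus distances
ideal = -k * r                        # conjugate-phase focusing

def peak_intensity(phases):
    # Coherent pressure sum at the focus with 1/r spreading.
    return np.abs(np.sum(np.exp(1j * (k * r + phases)) / r)) ** 2

levels = 4                            # four phase signals, as in the study
step = 2 * np.pi / levels
quantized = np.round(ideal / step) * step
ratio = peak_intensity(quantized) / peak_intensity(ideal)
```

    With quasi-uniform phase errors in [-pi/4, pi/4], the expected efficiency is about sinc(1/4)^2, i.e. roughly 0.8, consistent with the ~20% reduction in peak pressure squared reported in the abstract.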

  10. Simultaneous maxillary distraction osteogenesis using a twin-track distraction device combined with alveolar bone grafting in cleft patients: preliminary report of a technique.

    PubMed

    Suzuki, Eduardo Yugo; Watanabe, Masayo; Buranastidporn, Boonsiva; Baba, Yoshiyuki; Ohyama, Kimie; Ishii, Masatoshi

    2006-01-01

    The simultaneous use of cleft reduction and maxillary advancement by distraction osteogenesis has not been applied routinely because of the difficulty in three-dimensional control and stabilization of the transported segments. This report describes a new approach of simultaneous bilateral alveolar cleft reduction and maxillary advancement by distraction osteogenesis combined with autogenous bone grafting. A custom-made Twin-Track device was used to allow bilateral alveolar cleft closure combined with simultaneous maxillary advancement, using distraction osteogenesis and a rigid external distraction system in a bilateral cleft lip and palate patient. After a maxillary Le Fort I osteotomy, autogenous iliac bone graft was placed in the cleft spaces before suturing. A latency period of six days was observed before activation. The rate of activation was 1 mm/d for the maxillary advancement and 0.5 mm/d for the segmental transport. Accordingly, the concave facial appearance was improved with acceptable occlusion, and complete bilateral cleft closure was attained. No adjustments were necessary to the vector of the transported segments during the activation and no complications were observed. The proposed Twin-Track device, based on the concept of track-guided bone transport, permitted three-dimensional control over the distraction processes allowing simultaneous cleft closure, maxillary distraction, and autogenous bone grafting. The combined simultaneous approach is extremely advantageous in correcting severe deformities, reducing the number of surgical interventions and, consequently, the total treatment time.

  11. Self-organizing neural networks--an alternative way of cluster analysis in clinical chemistry.

    PubMed

    Reibnegger, G; Wachter, H

    1996-04-15

    Supervised learning schemes have been employed by several workers for training neural networks designed to solve clinical problems. We demonstrate that unsupervised techniques can also produce interesting and meaningful results. Using a data set on the chemical composition of milk from 22 different mammals, we demonstrate that self-organizing feature maps (Kohonen networks) as well as a modified version of the error backpropagation technique yield results mimicking conventional cluster analysis. Both techniques are able to project a potentially multi-dimensional input vector onto a two-dimensional space while conserving neighborhood relationships. Thus, these techniques can be used for reducing the dimensionality of complicated data sets and for enhancing the comprehensibility of features hidden in the data matrix.
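    A minimal numpy sketch of a Kohonen network of the kind described; the grid size, learning-rate and neighbourhood schedules are illustrative choices, not the paper's.

```python
import numpy as np

def train_som(X, grid=(6, 6), iters=2000, seed=0):
    # Kohonen self-organizing feature map: each grid node carries a
    # weight vector; the best-matching unit (BMU) and its grid
    # neighbours are pulled toward each presented sample, so grid
    # neighbourhood relations come to mirror input-space ones.
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, X.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        lr = 0.5 * (1 - t / iters)            # decaying learning rate
        sigma = 2.0 * (1 - t / iters) + 0.3   # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        W += (lr * np.exp(-d2 / (2 * sigma ** 2)))[:, None] * (x - W)
    return W, coords
```

    After training, plotting each sample at the grid coordinates of its BMU gives the two-dimensional map the abstract describes; nearby samples land on nearby nodes.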

  12. Low-resistance gateless high electron mobility transistors using three-dimensional inverted pyramidal AlGaN/GaN surfaces

    NASA Astrophysics Data System (ADS)

    So, Hongyun; Senesky, Debbie G.

    2016-01-01

    In this letter, three-dimensional gateless AlGaN/GaN high electron mobility transistors (HEMTs) were demonstrated with 54% reduction in electrical resistance and 73% increase in surface area compared with conventional gateless HEMTs on planar substrates. Inverted pyramidal AlGaN/GaN surfaces were microfabricated using potassium hydroxide etched silicon with exposed (111) surfaces and metal-organic chemical vapor deposition of coherent AlGaN/GaN thin films. In addition, electrical characterization of the devices showed that a combination of series and parallel connections of the highly conductive two-dimensional electron gas along the pyramidal geometry resulted in a significant reduction in electrical resistance at both room and high temperatures (up to 300 °C). This three-dimensional HEMT architecture can be leveraged to realize low-power and reliable power electronics, as well as harsh environment sensors with increased surface area.

  13. Aerostructural analysis and design optimization of composite aircraft

    NASA Astrophysics Data System (ADS)

    Kennedy, Graeme James

    High-performance composite materials exhibit both anisotropic strength and stiffness properties. These anisotropic properties can be used to produce highly tailored aircraft structures that meet stringent performance requirements, but these properties also present unique challenges for analysis and design. New tools and techniques are developed to address some of these important challenges. A homogenization-based theory for beams is developed to accurately predict the through-thickness stress and strain distribution in thick composite beams. Numerical comparisons demonstrate that the proposed beam theory can be used to obtain highly accurate results in up to three orders of magnitude less computational time than three-dimensional calculations. Due to the large finite-element model requirements for thin composite structures used in aerospace applications, parallel solution methods are explored. A parallel direct Schur factorization method is developed. The parallel scalability of the direct Schur approach is demonstrated for a large finite-element problem with over 5 million unknowns. In order to address manufacturing design requirements, a novel laminate parametrization technique is presented that takes into account the discrete nature of the ply-angle variables, and ply-contiguity constraints. This parametrization technique is demonstrated on a series of structural optimization problems including compliance minimization of a plate, buckling design of a stiffened panel and layup design of a full aircraft wing. The design and analysis of composite structures for aircraft is not a stand-alone problem and cannot be performed without multidisciplinary considerations. A gradient-based aerostructural design optimization framework is presented that partitions the disciplines into distinct process groups. An approximate Newton-Krylov method is shown to be an efficient aerostructural solution algorithm and excellent parallel scalability of the algorithm is demonstrated. 
An induced drag optimization study is performed to compare the trade-off between wing weight and induced drag for wing tip extensions, raked wing tips and winglets. The results demonstrate that it is possible to achieve a 43% induced drag reduction with no weight penalty, a 28% induced drag reduction with a 10% wing weight reduction, or a 20% wing weight reduction with a 5% induced drag penalty from a baseline wing obtained from a structural mass-minimization problem with fixed aerodynamic loads.

  14. Bifurcations of large networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-08-01

    Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.

  15. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    PubMed

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system, 3D GAIT, followed by a discussion of how studies in the field of gait biomechanics fit the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods for gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules, such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration approach called "topological data analysis" and directions for future research are outlined and discussed.

  16. SLLE for predicting membrane protein types.

    PubMed

    Wang, Meng; Yang, Jie; Xu, Zhi-Jie; Chou, Kuo-Chen

    2005-01-07

    Introduction of the concept of pseudo amino acid composition (PROTEINS: Structure, Function, and Genetics 43 (2001) 246; Erratum: ibid. 44 (2001) 60) has made it possible to incorporate a considerable amount of sequence-order effects by representing a protein sample in terms of a set of discrete numbers, and hence can significantly enhance the prediction quality of membrane protein types. As a continuing effort along this line, the Supervised Locally Linear Embedding (SLLE) technique for nonlinear dimensionality reduction is introduced (Science 22 (2000) 2323). The advantage of using SLLE is that it can reduce the operational space by extracting the essential features from the high-dimensional pseudo amino acid composition space, and that the cluster-tolerant capacity can be increased accordingly. As a consequence of combining these two approaches, high success rates were observed in tests of self-consistency, jackknife and an independent data set, respectively, using the simplest nearest-neighbour classifier. The current approach represents a new strategy for protein attribute prediction, and hence may become a useful vehicle in the areas of bioinformatics and proteomics.
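    For orientation, here is a barebones numpy implementation of the unsupervised LLE core (neighbourhood weights, then the bottom eigenvectors of (I-W)ᵀ(I-W)). The supervised variant used in the paper additionally inflates distances between points with different labels before the neighbour search; parameter values below are illustrative.

```python
import numpy as np

def lle(X, n_neighbors=8, d=2, reg=1e-3):
    # Locally linear embedding: reconstruct each point from its
    # neighbours, then find the low-dimensional coordinates that
    # preserve those reconstruction weights.
    n = len(X)
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)            # exclude self-neighbours
    nbrs = np.argsort(D, axis=1)[:, :n_neighbors]
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularize Gram matrix
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]   # skip the constant bottom eigenvector
```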

  17. A spectral-finite difference solution of the Navier-Stokes equations in three dimensions

    NASA Astrophysics Data System (ADS)

    Alfonsi, Giancarlo; Passoni, Giuseppe; Pancaldo, Lea; Zampaglione, Domenico

    1998-07-01

    A new computational code for the numerical integration of the three-dimensional Navier-Stokes equations in their non-dimensional velocity-pressure formulation is presented. The system of non-linear partial differential equations governing the time-dependent flow of a viscous incompressible fluid in a channel is managed by means of a mixed spectral-finite difference method, in which different numerical techniques are applied: Fourier decomposition is used along the homogeneous directions, second-order Crank-Nicolson algorithms are employed for the spatial derivatives in the direction orthogonal to the solid walls and a fourth-order Runge-Kutta procedure is implemented for both the calculation of the convective term and the time advancement. The pressure problem, cast in the Helmholtz form, is solved with the use of a cyclic reduction procedure. No-slip boundary conditions are used at the walls of the channel and cyclic conditions are imposed at the other boundaries of the computing domain. Results are provided for different values of the Reynolds number at several time steps of integration and are compared with results obtained by other authors.
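    The "Fourier along the homogeneous direction, banded solve across the walls" pattern used for the pressure problem can be sketched on a model Poisson equation; second-order differences and a dense per-mode solve stand in here for the paper's Crank-Nicolson and cyclic-reduction machinery, and the grid is illustrative.

```python
import numpy as np

def poisson_fft_fd(f, Lx=1.0, Ly=1.0):
    # Solve d2p/dx2 + d2p/dy2 = f, periodic in x, homogeneous
    # Dirichlet walls in y: FFT in x diagonalizes the x-operator,
    # leaving one small tridiagonal-type system per wavenumber.
    ny, nx = f.shape                 # interior y points x periodic x points
    dx, dy = Lx / nx, Ly / (ny + 1)
    fh = np.fft.fft(f, axis=1)
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    lam = (2 * np.cos(kx * dx) - 2) / dx**2   # discrete -k^2 multipliers
    T = (np.diag(np.full(ny, -2.0 / dy**2))
         + np.diag(np.full(ny - 1, 1.0 / dy**2), 1)
         + np.diag(np.full(ny - 1, 1.0 / dy**2), -1))
    ph = np.zeros_like(fh)
    for j in range(nx):
        ph[:, j] = np.linalg.solve(T + lam[j] * np.eye(ny), fh[:, j])
    return np.real(np.fft.ifft(ph, axis=1))
```

    The per-mode systems are tridiagonal, which is exactly what makes cyclic reduction (or any banded solver) attractive in the real code.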

  18. Three-dimensional confocal microscopy of the living cornea and ocular lens

    NASA Astrophysics Data System (ADS)

    Masters, Barry R.

    1991-07-01

    The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume-rendering computer techniques. A laser scanning confocal microscope was used in the reflected light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with a 488 nm wavelength. The microscope objective was a Leitz X25, NA 0.6 water immersion lens. The 400 micron thick cornea was optically sectioned into 133 three-micron sections. The semi-transparent cornea and the in-situ ocular lens were visualized as high-resolution, high-contrast two-dimensional images. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers. The stacks of two-dimensional images of the cornea and the ocular lens were reconstructed into three-dimensional objects in the computer using volume-rendering techniques, and stereo pairs were also created from the two-dimensional ocular images for visualization. This demonstration of the three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.

  19. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced-order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
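    The balanced-truncation building block can be sketched for a small stable dense system with the textbook square-root algorithm; the paper's unstable-system handling, congruence transformation, and GA-guided state selection are out of scope here, and all names are illustrative.

```python
import numpy as np

def balanced_truncation(A, B, C, r):
    # Square-root balanced truncation of a small stable LTI system:
    # solve the two Lyapunov equations via Kronecker products, balance
    # the Gramians, keep the r largest Hankel singular values.
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A, I) + np.kron(I, A)   # row-major vec convention
    P = np.linalg.solve(K, -(B @ B.T).ravel()).reshape(n, n)    # A P + P A^T = -B B^T
    Q = np.linalg.solve(K.T, -(C.T @ C).ravel()).reshape(n, n)  # A^T Q + Q A = -C^T C
    L = np.linalg.cholesky(P)
    vals, U = np.linalg.eigh(L.T @ Q @ L)
    order = np.argsort(vals)[::-1]
    hsv = np.sqrt(vals[order])          # Hankel singular values
    U = U[:, order]
    T = (L @ U) / np.sqrt(hsv)          # balancing transformation
    Tinv = (np.sqrt(hsv)[:, None] * U.T) @ np.linalg.inv(L)
    return (Tinv @ A @ T)[:r, :r], (Tinv @ B)[:r], (C @ T)[:, :r], hsv
```

    Kronecker-product Lyapunov solves cost O(n^6) and are only sensible for toy sizes; real codes use dedicated solvers, but the balancing and truncation steps are the same.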

  20. Spectral Regression Discriminant Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Wu, J.; Huang, H.; Liu, J.

    2012-08-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmaps. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
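    The regression trick can be sketched in numpy: for the LDA-style graph the embedding responses are spanned by (orthogonalized) class indicators, so projection directions come from ridge regression instead of a dense eigen-decomposition. This is a minimal sketch of the idea; function and parameter names are hypothetical.

```python
import numpy as np

def srda_fit(X, labels, alpha=0.1):
    # Spectral regression: build the graph's known eigenvectors
    # directly from class indicators, then regress them onto the
    # (centered) features with a ridge penalty.
    Xc = X - X.mean(axis=0)
    classes = np.unique(labels)
    Y = np.column_stack([(labels == c).astype(float) for c in classes])
    Y -= Y.mean(axis=0)
    Y, _ = np.linalg.qr(Y)             # orthogonalize the responses
    Y = Y[:, :len(classes) - 1]        # c classes give c-1 useful responses
    A = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ Y)
    return A   # columns are projection directions

# usage: Z = (X - X.mean(0)) @ A gives the reduced representation
```

    The solve involves only a d-by-d regularized normal matrix (d = number of features), which is the memory and time saving the abstract refers to.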

  1. New techniques for experimental generation of two-dimensional blade-vortex interaction at low Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Booth, E., Jr.; Yu, J. C.

    1986-01-01

    An experimental investigation of two-dimensional blade-vortex interaction was conducted at NASA Langley Research Center. The first phase was a flow visualization study to document the approach process of a two-dimensional vortex as it encountered a loaded blade model. To accomplish the flow visualization study, a method for generating two-dimensional vortex filaments was required. The numerical study used to define a new vortex generation process, and the use of this process in the flow visualization study, are documented. Additionally, the photographic techniques and data analysis methods used in the flow visualization study are examined.

  2. [Research progress of three-dimensional digital model for repair and reconstruction of knee joint].

    PubMed

    Tong, Lu; Li, Yanlin; Hu, Meng

    2013-01-01

    To review recent advances in the application and research of three-dimensional digital knee models, recent original articles about three-dimensional digital knee models were extensively reviewed and analyzed. A digital three-dimensional knee model can simulate the complex anatomy of the knee very well. On this basis, new software and techniques have been developed, and good clinical results have been achieved. With the development of computer techniques and software, the knee repair and reconstruction procedure has been improved; operations will become simpler and their accuracy will be further improved.

  3. Integrative Role Of Cinematography In Biomechanics Research

    NASA Astrophysics Data System (ADS)

    Zernicke, Ronald F.; Gregor, Robert J.

    1982-02-01

    Cinematography is an integral element in the interdisciplinary biomechanics research conducted in the Department of Kinesiology at the University of California, Los Angeles. For either an isolated recording of a movement phenomenon or as a recording component which is synchronized with additional transducers and recording equipment, high speed motion picture film has been effectively incorporated into research projects ranging from two and three dimensional analyses of human movements, locomotor mechanics of cursorial mammals and primates, to the structural responses and dynamic geometries of skeletal muscles, tendons, and ligaments. The basic equipment used in these studies includes three, 16 mm high speed, pin-registered cameras which have the capacity for electronic phase-locking. Crystal oscillators provide the generator pulses to synchronize the timing lights of the cameras and the analog-to-digital recording equipment. A rear-projection system with a sonic digitizer permits quantification of film coordinates which are stored on computer disks. The capacity for synchronizing the high speed films with additional recording equipment provides an effective means of obtaining not only position-time data from film, but also electromyographic, force platform, tendon force transducer, and strain gauge recordings from tissues or moving organisms. During the past few years, biomechanics research which comprised human studies has used both planar and three-dimensional cinematographic techniques. The studies included planar analyses which range from the gait characteristics of lower extremity child amputees to the running kinematics and kinetics of highly skilled sprinters and long-distance runners. The dynamics of race cycling and kinetics of gymnastic maneuvers were studied with cinematography and either a multi-dimensional force platform or a bicycle pedal with strain gauges to determine the time histories of the applied forces.
The three-dimensional technique implemented at UCLA is the Direct Linear Transformation (DLT) method. DLT was developed from a close-range stereo-photogrammetry method to a technique flexible and accurate for 16 mm film applications in biomechanics. The DLT method has been used to document the three-dimensional kinematics of the ball, hand, forearm, and upper arm segments of pitchers during high velocity baseball throwing. The animal research which has incorporated cinematography has focused on both normal locomotor kinematics and kinetics, as well as spinalized locomotion, to assess neural control mechanisms which regulate gait. In addition, a new technique has been developed which allows the recording of in vivo tendon forces in an animal during unrestrained locomotion; via cinematography, movements of the limbs can be correlated with both myoelectric activity and tendon forces to analyze dynamics of muscle contractions during walking, running, and jumping. An additional area in which cinematography has proven useful is in the measurement of the architectural and structural deformations and strains which occur in skeletal muscles, tendons, and ligaments. These experiments have been done both in situ and in vitro, and have included both normal functional ranges of the tissues and incidences of mechanical failure or ruptures. The use of photographic techniques in these experiments is advantageous because the tissue changes can be documented without attaching mechanical apparatus to the tissue which can introduce artifacts. Although high speed cinematography does not solve all the data collection and recording needs in an integrated approach to biomechanics, it nevertheless forms an important constituent in a comprehensive research program. The positive attributes of high speed film records outweigh the laborious and tedious data reduction techniques which are frequently necessary to achieve high quality data.

  4. Utilising three-dimensional printing techniques when providing unique assistive devices: A case report.

    PubMed

    Day, Sarah Jane; Riley, Shaun Patrick

    2018-02-01

    The evolution of three-dimensional printing into prosthetics has opened conversations about the availability and cost of prostheses. This report will discuss how a prosthetic team incorporated additive manufacture techniques into the treatment of a patient with a partial hand amputation to create and test a unique assistive device which he could use to hold his French horn. Case description and methods: Using a process of shape capture, photogrammetry, computer-aided design and finite element analysis, a suitable assistive device was designed and tested. The design was fabricated using three-dimensional printing. Patient satisfaction was measured using a Pugh's Matrix™, and a cost comparison was made between the process used and traditional manufacturing. Findings and outcomes: Patient satisfaction was high. The three-dimensional printed devices were 56% cheaper to fabricate than a similar laminated device. Computer-aided design and three-dimensional printing proved to be an effective method for designing, testing and fabricating a unique assistive device. Clinical relevance CAD and 3D printing techniques can enable devices to be designed, tested and fabricated cheaper than when using traditional techniques. This may lead to improvements in quality and accessibility.

  5. CellTree: an R/bioconductor package to infer the hierarchical structure of cell populations from single-cell RNA-seq data.

    PubMed

    duVerle, David A; Yotsukura, Sohiya; Nomura, Seitaro; Aburatani, Hiroyuki; Tsuda, Koji

    2016-09-13

    Single-cell RNA sequencing is fast becoming one of the standard methods for gene expression measurement, providing unique insights into cellular processes. A number of methods, based on general dimensionality reduction techniques, have been suggested to help infer and visualise the underlying structure of cell populations from single-cell expression levels, yet their models generally lack proper biological grounding and struggle at identifying complex differentiation paths. Here we introduce cellTree: an R/Bioconductor package that uses a novel statistical approach, based on document analysis techniques, to produce tree structures outlining the hierarchical relationship between single-cell samples, while identifying latent groups of genes that can provide biological insights. With cellTree, we provide experimentalists with an easy-to-use tool, based on statistically and biologically sound algorithms, to efficiently explore and visualise single-cell RNA data. The cellTree package is publicly available in the online Bioconductor repository at http://bioconductor.org/packages/cellTree/.

  6. Heavily Boron-Doped Silicon Layer for the Fabrication of Nanoscale Thermoelectric Devices

    PubMed Central

    Liu, Yang; Deng, Lingxiao; Zhang, Mingliang; Zhang, Shuyuan; Ma, Jing; Song, Peishuai; Liu, Qing; Ji, An; Yang, Fuhua; Wang, Xiaodong

    2018-01-01

    Heavily boron-doped silicon layers and boron etch-stop techniques have been widely used in the fabrication of microelectromechanical systems (MEMS). This paper provides an introduction to the fabrication process of nanoscale silicon thermoelectric devices. Low-dimensional structures such as silicon nanowire (SiNW) have been considered as a promising alternative for thermoelectric applications in order to achieve a higher thermoelectric figure of merit (ZT) than bulk silicon. Here, heavily boron-doped silicon layers and boron etch-stop processes for the fabrication of suspended SiNWs will be discussed in detail, including boron diffusion, electron beam lithography, inductively coupled plasma (ICP) etching and tetramethylammonium hydroxide (TMAH) etch-stop processes. A 7 μm long nanowire structure with a height of 280 nm and a width of 55 nm was achieved, indicating that the proposed technique is useful for nanoscale fabrication. Furthermore, a SiNW thermoelectric device has also been demonstrated, and its performance shows an obvious reduction in thermal conductivity. PMID:29385759

  7. Recent Advances in Characterization of Lignin Polymer by Solution-State Nuclear Magnetic Resonance (NMR) Methodology

    PubMed Central

    Wen, Jia-Long; Sun, Shao-Long; Xue, Bai-Liang; Sun, Run-Cang

    2013-01-01

    The demand for efficient utilization of biomass induces a detailed analysis of the fundamental chemical structures of biomass, especially the complex structures of lignin polymers, which have long been recognized for their negative impact on biorefinery. Traditionally, it has been attempted to reveal the complicated and heterogeneous structure of lignin by a series of chemical analyses, such as thioacidolysis (TA), nitrobenzene oxidation (NBO), and derivatization followed by reductive cleavage (DFRC). Recent advances in nuclear magnetic resonance (NMR) technology undoubtedly have made solution-state NMR become the most widely used technique in structural characterization of lignin due to its versatility in illustrating structural features and structural transformations of lignin polymers. As one of the most promising diagnostic tools, NMR provides unambiguous evidence for specific structures as well as quantitative structural information. The recent advances in two-dimensional solution-state NMR techniques for structural analysis of lignin in isolated and whole cell wall states (in situ), as well as their applications are reviewed. PMID:28809313

  8. Text Mining in Organizational Research

    PubMed Central

    Kobayashi, Vladimer B.; Berkers, Hannah A.; Kismihók, Gábor; Den Hartog, Deanne N.

    2017-01-01

    Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies. PMID:29881248
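
    Steps (a) through (c) of the review can be sketched in a few lines: a TF-IDF term-document matrix, truncated-SVD dimensionality reduction (latent semantic analysis), and cosine-similarity computation in the reduced space. The toy vacancy snippets below are invented for illustration:

```python
import numpy as np

# Toy vacancy snippets (invented data for illustration).
docs = [
    "python developer machine learning",
    "machine learning engineer python",
    "registered nurse patient care",
    "nurse clinical patient ward",
]

# (a) TF-IDF term-document matrix.
vocab = sorted({w for doc in docs for w in doc.split()})
tf = np.array([[doc.split().count(w) for w in vocab] for doc in docs], dtype=float)
idf = np.log(len(docs) / (tf > 0).sum(axis=0))
tfidf = tf * idf

# (b) Dimensionality reduction by truncated SVD (latent semantic analysis).
U, s, Vt = np.linalg.svd(tfidf, full_matrices=False)
Z = U[:, :2] * s[:2]              # 2-D document embeddings

# (c) Distance/similarity computing: cosine similarity in the reduced space.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

    The two technology vacancies end up far closer to each other in the reduced space than to either nursing vacancy.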

  9. Text Mining in Organizational Research.

    PubMed

    Kobayashi, Vladimer B; Mol, Stefan T; Berkers, Hannah A; Kismihók, Gábor; Den Hartog, Deanne N

    2018-07-01

    Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies.

  10. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  11. Proteomic data analysis of glioma cancer stem-cell lines based on novel nonlinear dimensional data reduction techniques

    NASA Astrophysics Data System (ADS)

    Lespinats, Sylvain; Pinker-Domenig, Katja; Wengert, Georg; Houben, Ivo; Lobbes, Marc; Stadlbauer, Andreas; Meyer-Bäse, Anke

    2016-05-01

    Glioma-derived cancer stem cells (GSCs) are tumor-initiating cells and may be refractory to radiation and chemotherapy and thus have important implications for tumor biology and therapeutics. The analysis and interpretation of large proteomic data sets requires the development of new data mining and visualization approaches. Traditional techniques are insufficient to interpret and visualize the resulting experimental data. The emphasis of this paper lies in the application of novel approaches for visualization, clustering and projection representation to unveil hidden data structures relevant for the accurate interpretation of biological experiments. These qualitative and quantitative methods are applied to the proteomic analysis of data sets derived from the GSCs. The achieved clustering and visualization results provide a more detailed insight into the protein-level fold changes and putative upstream regulators for the GSCs. However, the extracted molecular information remains insufficient for classifying GSCs and for paving the way to improved therapeutics for heterogeneous gliomas.
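
    A generic example of the kind of nonlinear (manifold-learning) projection used in such studies is Isomap: build a k-nearest-neighbour graph, approximate geodesic distances by shortest paths, then apply classical MDS. The sketch below uses synthetic data and is not the authors' pipeline:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=5, n_components=2):
    """Minimal Isomap: k-NN graph -> shortest-path geodesic distances ->
    classical MDS on the doubly centered squared-distance matrix."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.full((n, n), np.inf)               # inf entries = no edge
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
    G = np.minimum(G, G.T)                    # symmetrize the graph
    geo = shortest_path(G, method="D", directed=False)
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (geo ** 2) @ J
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Synthetic 1-D curve embedded in 3-D, standing in for high-dimensional data.
t = np.linspace(0.0, 3.0, 30)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap(X)
```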

  12. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computer tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied for normal and fat accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
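
    The dimensionality-reduction step described above, multiclass linear discriminant analysis followed by class-conditional probabilities, can be sketched on synthetic multi-channel data (illustrative only; the paper's MR-specific processing is not reproduced):

```python
import numpy as np
from scipy.linalg import eigh

def lda_probability_maps(X, y, reg=1e-6):
    """Multiclass LDA as a fast dimensionality reduction, followed by
    per-class Gaussian likelihoods normalized into a probability map
    (one probability per class for every sample)."""
    classes = np.unique(y)
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)         # within-class scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)  # between-class scatter
    # generalized eigenproblem Sb v = lambda (Sw + reg I) v; keep C-1 directions
    w, V = eigh(Sb, Sw + reg * np.eye(d))
    W = V[:, np.argsort(w)[::-1][:len(classes) - 1]]
    Z = X @ W
    likes = []
    for c in classes:
        Zc = Z[y == c]
        m, v = Zc.mean(axis=0), Zc.var(axis=0) + 1e-9
        likes.append(np.exp(-0.5 * ((Z - m) ** 2 / v).sum(axis=1))
                     / np.sqrt(np.prod(2 * np.pi * v)))
    P = np.array(likes).T
    return W, P / P.sum(axis=1, keepdims=True)

# Synthetic two-"tissue" data with four channels (stand-in for MR weightings).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 4)) + [4, 0, 0, 0],
               rng.normal(0, 1, (60, 4)) - np.array([4.0, 0, 0, 0])])
y = np.array([0] * 60 + [1] * 60)
W, P = lda_probability_maps(X, y)
```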

  13. Three Dimensional Illustrating--Three-Dimensional Vision and Deception of Sensibility

    ERIC Educational Resources Information Center

    Szállassy, Noémi; Gánóczy, Anita; Kriska, György

    2009-01-01

    Widespread digital photography and computer use have given everyone the opportunity to make three-dimensional pictures and to make them public. The new opportunities offered by three-dimensional techniques create the chance for new artistic photographs to be born. We present in detail the biological roots of three-dimensional visualization, the phenomena…

  14. Stimulus Equalization: Temporary Reduction of Stimulus Complexity to Facilitate Discrimination Learning.

    ERIC Educational Resources Information Center

    Hoko, J. Aaron; LeBlanc, Judith M.

    1988-01-01

    Because disabled learners may profit from procedures using gradual stimulus change, this study utilized a microcomputer to investigate the effectiveness of stimulus equalization, an error reduction procedure involving an abrupt but temporary reduction of dimensional complexity. The procedure was found to be generally effective and implications for…

  15. A two-dimensional lattice equation as an extension of the Heideman-Hogan recurrence

    NASA Astrophysics Data System (ADS)

    Kamiya, Ryo; Kanki, Masataka; Mase, Takafumi; Tokihiro, Tetsuji

    2018-03-01

    We consider a two dimensional extension of the so-called linearizable mappings. In particular, we start from the Heideman-Hogan recurrence, which is known as one of the linearizable Somos-like recurrences, and introduce one of its two dimensional extensions. The two dimensional lattice equation we present is linearizable in both directions, and has the Laurent and the coprimeness properties. Moreover, its reduction produces a generalized family of the Heideman-Hogan recurrence. Higher order examples of two dimensional linearizable lattice equations related to the Dana Scott recurrence are also discussed.

  16. Analysis of aircraft tires via semianalytic finite elements

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.

    1990-01-01

    A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.

  17. Reduced-order model based feedback control of the modified Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.

    2013-04-15

    In this work, the development of model-based feedback control that stabilizes an unstable equilibrium is obtained for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, a balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low dimensional model of the linearized MHW equation. Then, a model-based feedback controller is designed for the reduced order model using linear quadratic regulators. Finally, a linear quadratic Gaussian controller which is more resistant to disturbances is deduced. The controller is applied on the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
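
    The first two stages, balanced truncation of a linear model followed by LQR design on the reduced model, can be sketched for a generic stable system. The matrices below are random stand-ins, not the MHW model, and the paper's unstable-equilibrium case needs an additional stable/unstable decomposition not shown here:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are, cholesky, svd

rng = np.random.default_rng(1)
n, r = 6, 2

# Random stable stand-in for a linearized model.
A = rng.normal(size=(n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to stability
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# Gramians: A Wc + Wc A' + B B' = 0 and A' Wo + Wo A + C' C = 0.
jit = 1e-10 * np.eye(n)                      # numerical safeguard for Cholesky
Wc = solve_continuous_lyapunov(A, -B @ B.T) + jit
Wo = solve_continuous_lyapunov(A.T, -C.T @ C) + jit

# Square-root balanced truncation: keep the r largest Hankel singular values.
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, hsv, Vt = svd(Lo.T @ Lc)
S = np.diag(hsv[:r] ** -0.5)
T = Lc @ Vt[:r].T @ S                        # reduction basis
Ti = S @ U[:, :r].T @ Lo.T                   # left inverse of T
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T

# LQR on the reduced model via the continuous algebraic Riccati equation.
P = solve_continuous_are(Ar, Br, np.eye(r), np.eye(1))
K = Br.T @ P                                 # R = I, so K = R^-1 B' P = B' P
```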

  18. Reduced-Order Model Based Feedback Control For Modified Hasegawa-Wakatani Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.

    2013-01-28

    In this work, the development of model-based feedback control that stabilizes an unstable equilibrium is obtained for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, a balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low dimensional model of the linearized MHW equation. Then a model-based feedback controller is designed for the reduced order model using linear quadratic regulators (LQR). Finally, a linear quadratic Gaussian (LQG) controller, which is more resistant to disturbances, is deduced. The controller is applied on the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.

  19. Spatial Dynamics Methods for Solitary Waves on a Ferrofluid Jet

    NASA Astrophysics Data System (ADS)

    Groves, M. D.; Nilsson, D. V.

    2018-04-01

    This paper presents existence theories for several families of axisymmetric solitary waves on the surface of an otherwise cylindrical ferrofluid jet surrounding a stationary metal rod. The ferrofluid, which is governed by a general (nonlinear) magnetisation law, is subject to an azimuthal magnetic field generated by an electric current flowing along the rod. The ferrohydrodynamic problem for axisymmetric travelling waves is formulated as an infinite-dimensional Hamiltonian system in which the axial direction is the time-like variable. A centre-manifold reduction technique is employed to reduce the system to a locally equivalent Hamiltonian system with a finite number of degrees of freedom, and homoclinic solutions to the reduced system, which correspond to solitary waves, are detected by dynamical-systems methods.

  20. Analysis of electrohydrodynamic jetting using multifunctional and three-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Ko, Han Seo; Nguyen, Xuan Hung; Lee, Soo-Hong; Kim, Young Hyun

    2013-11-01

    A three-dimensional optical tomography technique was developed to reconstruct three-dimensional flow fields using a set of two-dimensional shadowgraphic images and normal gray images. From three high-speed cameras positioned at an offset angle of 45° relative to one another, the number, size and location of electrohydrodynamic jets with respect to the nozzle position were analyzed using shadowgraphic tomography employing a multiplicative algebraic reconstruction technique (MART). Additionally, the flow field inside the cone-shaped liquid (Taylor cone) induced under an electric field was also observed using a simultaneous multiplicative algebraic reconstruction technique (SMART) to reconstruct particle-light intensities, combined with a three-dimensional cross correlation. Velocity fields of the circulating flow inside the cone-shaped liquid arising from different physico-chemical properties of the liquid and different applied voltages were also investigated. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. S-2011-0023457).

  1. Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Hathaway, A. W.

    1978-01-01

    Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
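
    The "easily inverted matrices" arising from each one-dimensional sweep of an approximately factored implicit scheme are tridiagonal, so each sweep reduces to the O(n) Thomas algorithm, sketched here as a generic illustration (not the paper's code):

```python
import numpy as np

def thomas(a, b, c, d):
    """O(n) solver for a tridiagonal system: sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```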

  2. Performance and analysis of a three-dimensional nonorthogonal laser Doppler anemometer

    NASA Technical Reports Server (NTRS)

    Snyder, P. K.; Orloff, K. L.; Aoyagi, K.

    1981-01-01

    A three dimensional laser Doppler anemometer with a nonorthogonal third axis coupled by 14 deg was designed and tested. A highly three dimensional flow field of a jet in a crossflow was surveyed to test the three dimensional capability of the instrument. Sample data are presented demonstrating the ability of the 3D LDA to resolve three orthogonal velocity components. Modifications to the optics, signal processing electronics, and data reduction methods are suggested.

  3. Generation Algorithm of Discrete Line in Multi-Dimensional Grids

    NASA Astrophysics Data System (ADS)

    Du, L.; Ben, J.; Li, Y.; Wang, R.

    2017-09-01

    Discrete Global Grid Systems (DGGS) are a kind of digital multi-resolution earth reference model whose structure is conducive to the integration and mining of geospatial big data. Vectors are one of the important types of spatial data; only after discretization can they be processed and analyzed in a grid system. Based on a set of constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of discrete lines using a base-vector combination method. Using a hyperplane, the problem of mesh discrete lines in n-dimensional grids is transformed into the problem of finding an optimal deviated path in n-1 dimensions, thereby achieving dimension reduction in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for the dimension reduction and generation of discrete lines. The experimental results show that our algorithm can be applied not only in the two-dimensional rectangular grid, but also in the two-dimensional hexagonal grid and the three-dimensional cubic grid. Meanwhile, when applied in a two-dimensional rectangular grid, the algorithm produces a discrete line that is more similar to the corresponding line in Euclidean space.
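
    In the two-dimensional rectangular-grid special case, a discrete line satisfying the usual connectivity constraints can be generated with the classical Bresenham construction. This sketch illustrates only that base case, not the hexagonal or cubic-grid generalizations of the paper:

```python
def discrete_line(p0, p1):
    """Bresenham-style discrete line on a 2-D rectangular grid: returns the
    8-connected chain of cells joining p0 and p1."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                  # accumulated deviation from the ideal line
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:               # step in x keeps the deviation smallest
            err += dy
            x0 += sx
        if e2 <= dx:               # step in y keeps the deviation smallest
            err += dx
            y0 += sy
    return cells
```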

  4. Improved model reduction and tuning of fractional-order PI(λ)D(μ) controllers for analytical rule extraction with genetic programming.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava

    2012-03-01

    Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Two-boundary grid generation for the solution of the three dimensional compressible Navier-Stokes equations. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1981-01-01

    A grid generation technique called the two boundary technique is developed and applied for the solution of the three dimensional Navier-Stokes equations. The Navier-Stokes equations are transformed from a cartesian coordinate system to a computational coordinate system, and the grid generation technique provides the Jacobian matrix describing the transformation. The two boundary technique is based on algebraically defining two distinct boundaries of a flow domain and the distribution of the grid is achieved by applying functions to the uniform computational grid which redistribute the computational independent variables and consequently concentrate or disperse the grid points in the physical domain. The Navier-Stokes equations are solved using a MacCormack time-split technique. Grids and supersonic laminar flow solutions are obtained for a family of three dimensional corners and two spike-nosed bodies.
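
    The essential idea, blending two algebraically defined boundaries while a stretching function redistributes the uniform computational coordinate, can be sketched as follows (the tanh stretching and its strength parameter are illustrative choices, not the thesis's functions):

```python
import numpy as np

def two_boundary_grid(lower, upper, n_eta, beta=2.0):
    """Blend two algebraically defined boundary curves; a tanh stretching
    of the uniform computational coordinate eta clusters grid lines near
    the lower boundary (e.g. a wall). beta controls clustering strength."""
    eta = np.linspace(0.0, 1.0, n_eta)            # uniform computational coord
    s = 1.0 - np.tanh(beta * (1.0 - eta)) / np.tanh(beta)  # redistributed coord
    # linear blending of the two boundaries at each redistributed level
    return np.array([(1.0 - si) * lower + si * upper for si in s])

# Two straight boundaries of a unit-square physical domain.
lower = np.c_[np.linspace(0.0, 1.0, 5), np.zeros(5)]
upper = np.c_[np.linspace(0.0, 1.0, 5), np.ones(5)]
grid = two_boundary_grid(lower, upper, 11)
```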

  6. Three-dimensional head anthropometric analysis

    NASA Astrophysics Data System (ADS)

    Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James

    2003-05-01

    Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment, and treatment of facial abnormalities in craniofacial care, but they are subject to errors because of perspective and projection and lack metric, three-dimensional information. The literature describes a variety of methods for generating three-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging, and even CT; however, each of these methods has inherent limitations, and as a result no system is in common clinical use. In this paper we will focus on the development of indirect three-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including linear distances currently used in anthropometry. The measurements will be tested against a validated three-dimensional digitizer (MicroScribe 3DX).

  7. Three-Dimensional Aeroelastic and Aerothermoelastic Behavior in Hypersonic Flow

    NASA Technical Reports Server (NTRS)

    McNamara, Jack J.; Friedmann, Peretz P.; Powell, Kenneth G.; Thuruthimattam, Biju J.; Bartels, Robert E.

    2005-01-01

    The aeroelastic and aerothermoelastic behavior of three-dimensional configurations in the hypersonic flow regime is studied. The aeroelastic behavior of a low aspect ratio wing, representative of a fin or control surface on a generic hypersonic vehicle, is examined using third-order piston theory and Euler and Navier-Stokes aerodynamics. The sensitivity of the aeroelastic behavior generated using Euler and Navier-Stokes aerodynamics to parameters governing temporal accuracy is also examined. In addition, a refined aerothermoelastic model, which incorporates the heat transfer between the fluid and structure using CFD-generated aerodynamic heating, is used to examine the aerothermoelastic behavior of the low aspect ratio wing in the hypersonic regime. Finally, the hypersonic aeroelastic behavior of a generic hypersonic vehicle with a lifting-body type fuselage and canted fins is studied using piston theory and Euler aerodynamics for the range 2.5 ≤ M ≤ 28, at altitudes ranging from 10,000 to 80,000 feet. This analysis includes a study on optimal mesh selection for use with Euler aerodynamics. In addition to the aeroelastic and aerothermoelastic results presented, three time-domain flutter identification techniques are compared, namely the moving block approach, the least squares curve fitting method, and a system identification technique using an Auto-Regressive model of the aeroelastic system. In general, the three methods agree well. The system identification technique, however, provided quick damping and frequency estimations with minimal response record length, and therefore offers significant reductions in computational cost. In the present case, the computational cost was reduced by 75%. The aeroelastic and aerothermoelastic results presented illustrate the applicability of the CFL3D code to the hypersonic flight regime.

  8. Efficacy of patient-specific bolus created using three-dimensional printing technique in photon radiotherapy.

    PubMed

    Fujimoto, Koya; Shiinoki, Takehiro; Yuasa, Yuki; Hanazawa, Hideki; Shibuya, Keiko

    2017-06-01

    A commercially available bolus ("commercial-bolus") does not make complete contact with the irregularly shaped skin of the patient. This study aims to customise a patient-specific three-dimensional (3D) bolus using a 3D printing technique ("3D-bolus") and to evaluate its clinical feasibility for photon radiotherapy. The 3D-bolus was designed using a treatment planning system (TPS) in Digital Imaging and Communications in Medicine-Radiotherapy (DICOM-RT) format, and converted to stereolithographic format for printing. To evaluate its physical characteristics, treatment plans were created for water-equivalent phantoms that were bolus-free, or had a flat-form printed 3D-bolus, a TPS-designed bolus ("virtual-bolus"), or a commercial-bolus. These plans were compared based on the percentage depth dose (PDD) and target-volume dose volume histogram (DVH) measurements. To evaluate the clinical feasibility, treatment plans were created for head phantoms that were bolus-free or had a 3D-bolus, a virtual-bolus, or a commercial-bolus. These plans were compared based on the target-volume DVH. In the physical evaluation, the 3D-bolus provided effective dose coverage in the build-up region, equivalent to that of the commercial-bolus. With regard to clinical feasibility, the air gaps were smaller with the 3D-bolus than with the commercial-bolus. Furthermore, the prescription dose could be delivered appropriately to the target volume. Compared with the commercial-bolus, the 3D-bolus reduces air gaps and improves target-volume dose coverage and homogeneity. A 3D-bolus produced using a 3D printing technique is comparable to a commercial-bolus when applied to an irregularly shaped skin surface. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. Gauged supergravities from M-theory reductions

    NASA Astrophysics Data System (ADS)

    Katmadas, Stefanos; Tomasiello, Alessandro

    2018-04-01

    In supergravity compactifications, there is in general no clear prescription on how to select a finite-dimensional family of metrics on the internal space, and a family of forms on which to expand the various potentials, such that the lower-dimensional effective theory is supersymmetric. We propose a finite-dimensional family of deformations for regular Sasaki-Einstein seven-manifolds M7, relevant for M-theory compactifications down to four dimensions. It consists of integrable Cauchy-Riemann structures, corresponding to complex deformations of the Calabi-Yau cone M8 over M7. The non-harmonic forms we propose are the ones contained in one of the Kohn-Rossi cohomology groups, which is finite-dimensional and naturally controls the deformations of Cauchy-Riemann structures. The same family of deformations can also be described in terms of twisted cohomology of the base M6, or in terms of Milnor cycles arising in deformations of M8. Using existing results on SU(3) structure compactifications, we briefly discuss the reduction of M-theory on our class of deformed Sasaki-Einstein manifolds to four-dimensional gauged supergravity.

  10. Supervised Classification Techniques for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Jimenez, Luis O.

    1997-01-01

    The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of this technology is the AVIRIS system, which collects image data in 220 bands. The increased dimensionality of such hyperspectral data provides a challenge to current techniques for analyzing such data. Human experience in three-dimensional space tends to mislead one's intuition of geometrical and statistical properties in high-dimensional space, properties which must guide our choices in the data analysis process. In this paper, high-dimensional space properties are discussed along with their implications for high-dimensional data analysis, in order to illuminate the next steps that need to be taken for the next generation of hyperspectral data classifiers.
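
    One such counterintuitive high-dimensional property is distance concentration: as dimensionality grows, pairwise distances become nearly equal, which undermines nearest-neighbour-style reasoning. A minimal numerical illustration (uniform data in a hypercube; the setup is ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_distance_spread(dim, n=500):
    """(max - min) / min of the distances from n uniform points in the
    unit hypercube to its centre; the ratio collapses as dim grows."""
    x = rng.random((n, dim))
    d = np.linalg.norm(x - 0.5, axis=1)
    return (d.max() - d.min()) / d.min()
```

    In 2 dimensions the spread is large; in 200 dimensions almost all points sit at nearly the same distance from the centre.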

  11. Acoustic and Auditory Perception Effects of the Voice Therapy Technique Finger Kazoo in Adult Women.

    PubMed

    Christmann, Mara Keli; Cielo, Carla Aparecida

    2017-05-01

    This study aimed to verify and correlate acoustic and auditory-perceptual measures of the glottic source after performance of the finger kazoo (FK) technique. This is an experimental, cross-sectional, and qualitative study. We analyzed the vowel [a:] in 46 adult women with neither vocal complaints nor laryngeal alterations, using the Multi-Dimensional Voice Program Advanced and the RASATI scale, before and immediately after performing three series of FK and after a 5-minute period of silence. Kappa, Friedman, Wilcoxon, and Spearman tests were used. We found a significant increase in fundamental frequency and reductions in amplitude variation and degree of sub-harmonics immediately after performing FK. Positive correlations were found between measures of frequency and its perturbation, measures of amplitude, the soft phonation index, and the degree and number of unvoiced segments, on the one hand, and aspects of RASATI on the other. Negative correlations were found between the voice turbulence index, measures of frequency and its perturbation, and measures of the soft phonation index, and aspects of RASATI. There was an increase in fundamental frequency, within normal limits, and a reduction of acoustic measures related to the presence of noise and instability. In general, acoustic measures suggestive of noise and instability decreased together with the auditory-perceptual signs of vocal alteration. This shows that the two instruments are complementary and that the acoustic effect on the voice was positive. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  12. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation

    NASA Technical Reports Server (NTRS)

    Davis, Steven B.

    1990-01-01

    Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.

  13. Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.

    PubMed

    Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K

    1995-07-01

    The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
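
    Toy sketches of two of the three techniques, assuming rays parallel to one volume axis and a uniform per-voxel opacity for the volume renderer (both simplifications of clinical implementations):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum-intensity projection: keep only the brightest voxel
    along each viewing ray (here, rays parallel to `axis`)."""
    return volume.max(axis=axis)

def volume_render(volume, opacity=0.1, axis=0):
    """Toy emission-absorption volume rendering: front-to-back
    compositing that sums weighted contributions of *all* voxels
    along each ray, so overlapping structures remain visible."""
    vol = np.moveaxis(volume, axis, 0)
    acc = np.zeros(vol.shape[1:])           # accumulated intensity
    transmittance = np.ones(vol.shape[1:])  # remaining transparency
    for slab in vol:                        # march front to back
        acc += transmittance * opacity * slab
        transmittance *= (1.0 - opacity)
    return acc
```

    The sketches make the abstract's contrast concrete: MIP discards all but one voxel per ray, while volume rendering weights every voxel by the transparency of the material in front of it.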

  14. Extra-dimensional models on the lattice

    DOE PAGES

    Knechtli, Francesco; Rinaldi, Enrico

    2016-08-05

    In this paper we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher-dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include nonperturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models.

  15. Comparison of intraoral scanning and conventional impression techniques using 3-dimensional superimposition.

    PubMed

    Rhee, Ye-Kyu; Huh, Yoon-Hyuk; Cho, Lee-Ra; Park, Chan-Jin

    2015-12-01

    The aim of this study was to identify the appropriate impression technique by analyzing the superimposition of 3D digital models, thereby evaluating the accuracy of conventional and digital impression techniques. Twenty-four patients with neither periodontitis nor temporomandibular joint disease were selected for analysis. As the reference model, digital impressions were made with a digital impression system. As test models, dual-arch and full-arch conventional impression techniques using addition-type polyvinylsiloxane were applied for cast fabrication. A 3D laser scanner was used to scan the casts. Three pairs for each of the 25 STL datasets were imported into the inspection software. The three-dimensional differences were illustrated in a color-coded map. For three-dimensional quantitative analysis, four specified contact locations (buccal and lingual cusps of the second premolar and second molar) were established. For two-dimensional quantitative analysis, sections from the buccal cusp to the lingual cusp of the second premolar and second molar were acquired along the tooth axis. In the color-coded map, the biggest difference was seen between intraoral scanning and dual-arch impression (P<.05). In three-dimensional analysis, the biggest difference was seen between intraoral scanning and dual-arch impression, and the smallest difference between dual-arch and full-arch impression. The two- and three-dimensional deviations between the intraoral scanner and dual-arch impression were bigger than those between full-arch and dual-arch impression (P<.05). The second premolar showed significantly bigger three-dimensional deviations than the second molar (P>.05).

  16. Comparison of intraoral scanning and conventional impression techniques using 3-dimensional superimposition

    PubMed Central

    Rhee, Ye-Kyu

    2015-01-01

    PURPOSE The aim of this study was to identify the appropriate impression technique by analyzing the superimposition of 3D digital models, thereby evaluating the accuracy of conventional and digital impression techniques. MATERIALS AND METHODS Twenty-four patients with neither periodontitis nor temporomandibular joint disease were selected for analysis. As the reference model, digital impressions were made with a digital impression system. As test models, dual-arch and full-arch conventional impression techniques using addition-type polyvinylsiloxane were applied for cast fabrication. A 3D laser scanner was used to scan the casts. Three pairs for each of the 25 STL datasets were imported into the inspection software. The three-dimensional differences were illustrated in a color-coded map. For three-dimensional quantitative analysis, four specified contact locations (buccal and lingual cusps of the second premolar and second molar) were established. For two-dimensional quantitative analysis, sections from the buccal cusp to the lingual cusp of the second premolar and second molar were acquired along the tooth axis. RESULTS In the color-coded map, the biggest difference was seen between intraoral scanning and dual-arch impression (P<.05). In three-dimensional analysis, the biggest difference was seen between intraoral scanning and dual-arch impression, and the smallest difference between dual-arch and full-arch impression. CONCLUSION The two- and three-dimensional deviations between the intraoral scanner and dual-arch impression were bigger than those between full-arch and dual-arch impression (P<.05). The second premolar showed significantly bigger three-dimensional deviations than the second molar (P>.05). PMID:26816576

  17. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
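
    The equivalence rests on the Poisson log-likelihood of an LNP model; a minimal sketch follows (the exponential nonlinearity and the omission of the constant spike-factorial term are our simplifications):

```python
import numpy as np

def lnp_loglik(w, X, spikes, dt, f=np.exp):
    """Poisson log-likelihood (dropping the spike-factorial constant) of
    a linear-nonlinear-Poisson model: counts ~ Poisson(f(X @ w) * dt),
    where X holds one stimulus per row and w is the filter."""
    rate = f(X @ w)
    return np.sum(spikes * np.log(rate * dt) - rate * dt)
```

    Per the paper's result, maximising such a log-likelihood over the filter `w` is what MID's single-spike-information objective amounts to under a Poisson spiking model.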

  18. Diffusion maps for high-dimensional single-cell analysis of differentiation data.

    PubMed

    Haghverdi, Laleh; Buettner, Florian; Theis, Fabian J

    2015-09-15

    Single-cell technologies have recently gained popularity in cellular differentiation studies regarding their ability to resolve potential heterogeneities in cell populations. Analyzing such high-dimensional single-cell data has its own statistical and computational challenges. Popular multivariate approaches are based on data normalization, followed by dimension reduction and clustering to identify subgroups. However, in the case of cellular differentiation, we would not expect clear clusters to be present but instead expect the cells to follow continuous branching lineages. Here, we propose the use of diffusion maps to deal with the problem of defining differentiation trajectories. We adapt this method to single-cell data by adequate choice of kernel width and inclusion of uncertainties or missing measurement values, which enables the establishment of a pseudotemporal ordering of single cells in a high-dimensional gene expression space. We expect this output to reflect cell differentiation trajectories, where the data originates from intrinsic diffusion-like dynamics. Starting from a pluripotent stage, cells move smoothly within the transcriptional landscape towards more differentiated states with some stochasticity along their path. We demonstrate the robustness of our method with respect to extrinsic noise (e.g. measurement noise) and sampling density heterogeneities on simulated toy data as well as two single-cell quantitative polymerase chain reaction datasets (i.e. mouse haematopoietic stem cells and mouse embryonic stem cells) and an RNA-Seq dataset of human pre-implantation embryos. We show that diffusion maps perform considerably better than Principal Component Analysis and are advantageous over other techniques for non-linear dimension reduction such as t-distributed Stochastic Neighbour Embedding for preserving the global structures and pseudotemporal ordering of cells.
The Matlab implementation of diffusion maps for single-cell data is available at https://www.helmholtz-muenchen.de/icb/single-cell-diffusion-map. fbuettner.phys@gmail.com, fabian.theis@helmholtz-muenchen.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
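
    A minimal diffusion-map sketch in the spirit of the method, without the authors' adaptations (no density normalisation, no uncertainty handling, and a fixed rather than adaptively chosen kernel width):

```python
import numpy as np

def diffusion_map(X, sigma=1.0, n_components=2, t=1):
    """Embed the rows of X (cells x genes) with a basic diffusion map:
    Gaussian kernel, row-normalised Markov matrix, then the leading
    non-trivial eigenvectors scaled by their eigenvalues."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))                  # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                  # transition matrix
    evals, evecs = np.linalg.eig(P)                       # real for this P
    order = np.argsort(-evals.real)
    idx = order[1:n_components + 1]    # skip the constant eigenvector
    return (evals.real[idx] ** t) * evecs.real[:, idx]
```

    For cells lying along a single lineage, the first diffusion component recovers a pseudotemporal ordering, which is the property the paper exploits.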

  19. Techniques for interpretation of geoid anomalies

    NASA Technical Reports Server (NTRS)

    Chapman, M. E.

    1979-01-01

    For purposes of geological interpretation, techniques are developed to compute directly the geoid anomaly over models of density within the earth. Ideal bodies such as line segments, vertical sheets, and rectangles are first used to calculate the geoid anomaly. Realistic bodies are modeled with formulas for two-dimensional polygons and three-dimensional polyhedra. By using Fourier transform methods the two-dimensional geoid is seen to be a filtered version of the gravity field, in which the long-wavelength components are magnified and the short-wavelength components diminished.
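
    The filtering relation can be sketched for a flat-earth profile: in the wavenumber domain the geoid anomaly is the gravity anomaly divided by γ|k|, which magnifies long wavelengths and diminishes short ones. The SI units, the value of γ, and leaving the zero-wavenumber (mean) term at zero are our assumptions:

```python
import numpy as np

def gravity_to_geoid(delta_g, dx, gamma=9.81):
    """Spectral filter from a gravity-anomaly profile (m/s^2, sampled
    every dx metres) to a geoid anomaly (m): N(k) = dg(k) / (gamma*|k|)."""
    g_hat = np.fft.fft(delta_g)
    k = 2.0 * np.pi * np.fft.fftfreq(delta_g.size, d=dx)  # rad/m
    n_hat = np.zeros_like(g_hat)
    nonzero = k != 0
    n_hat[nonzero] = g_hat[nonzero] / (gamma * np.abs(k[nonzero]))
    return np.fft.ifft(n_hat).real
```

    A single-cosine input makes the 1/|k| magnification of long wavelengths directly visible.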

  20. 3D reconstruction techniques made easy: know-how and pictures.

    PubMed

    Luccichenti, Giacomo; Cademartiri, Filippo; Pezzella, Francesca Romana; Runza, Giuseppe; Belgrano, Manuel; Midiri, Massimo; Sabatini, Umberto; Bastianello, Stefano; Krestin, Gabriel P

    2005-10-01

    Three-dimensional reconstructions are a visual tool for illustrating the basis of three-dimensional post-processing, such as interpolation, ray casting, segmentation, percentage classification, gradient calculation, shading, and illumination. Knowledge of the optimal scanning and reconstruction parameters facilitates the use of three-dimensional reconstruction techniques in clinical practice. The aim of this article is to explain the principles of multidimensional image processing in a pictorial way, together with the advantages and limitations of the different possibilities of 3D visualisation.

  1. Real-time Three-dimensional Echocardiography: From Diagnosis to Intervention.

    PubMed

    Orvalho, João S

    2017-09-01

    Echocardiography is one of the most important diagnostic tools in veterinary cardiology, and one of the greatest recent developments is real-time three-dimensional imaging. Real-time three-dimensional echocardiography is a new ultrasonography modality that provides comprehensive views of the cardiac valves and congenital heart defects. The main advantages of this technique, particularly real-time three-dimensional transesophageal echocardiography, are the ability to visualize the catheters, and balloons or other devices, and the ability to image the structure that is undergoing intervention with unprecedented quality. This technique may become one of the main choices for the guidance of interventional cardiology procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A finite element: Boundary integral method for electromagnetic scattering. Ph.D. Thesis Technical Report, Feb. - Sep. 1992

    NASA Technical Reports Server (NTRS)

    Collins, J. D.; Volakis, John L.

    1992-01-01

    A method that combines the finite element and boundary integral techniques for the numerical solution of electromagnetic scattering problems is presented. The finite element method is well known for requiring a low order storage and for its capability to model inhomogeneous structures. Of particular emphasis in this work is the reduction of the storage requirement by terminating the finite element mesh on a boundary in a fashion which renders the boundary integrals in convolutional form. The fast Fourier transform is then used to evaluate these integrals in a conjugate gradient solver, without a need to generate the actual matrix. This method has a marked advantage over traditional integral equation approaches with respect to the storage requirement of highly inhomogeneous structures. Rectangular, circular, and ogival mesh termination boundaries are examined for two-dimensional scattering. In the case of axially symmetric structures, the boundary integral matrix storage is reduced by exploiting matrix symmetries and solving the resulting system via the conjugate gradient method. In each case several results are presented for various scatterers aimed at validating the method and providing an assessment of its capabilities. Important in methods incorporating boundary integral equations is the issue of internal resonance. A method is implemented for their removal, and is shown to be effective in the two-dimensional and three-dimensional applications.
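
    The storage saving comes from evaluating the convolutional boundary-integral product with the FFT instead of forming the matrix; a one-dimensional circulant sketch (the electromagnetic discretisation itself is not reproduced here):

```python
import numpy as np

def circulant_matvec(first_col, x):
    """Matrix-free product C @ x for a circulant matrix C defined by its
    first column: C is diagonalised by the FFT, so the product costs
    O(n log n) work and O(n) storage, as exploited inside the
    conjugate gradient iteration."""
    return np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)).real
```

    Inside a conjugate gradient solver, this routine replaces every dense matrix-vector multiply with the boundary-integral block, so that block never has to be stored.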

  3. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
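
    The generative step can be sketched with a linear (PCA) decoder standing in for the paper's variational autoencoder; the scaling that makes standard-normal draws reproduce the training variance is our assumption:

```python
import numpy as np

def fit_linear_decoder(training_models, k):
    """Learn a k-dimensional linear code for flattened model realizations
    (a PCA stand-in for the paper's VAE decoder)."""
    mean = training_models.mean(axis=0)
    U, s, Vt = np.linalg.svd(training_models - mean, full_matrices=False)
    scale = s[:k] / np.sqrt(len(training_models))  # unit-variance codes
    return mean, scale[:, None] * Vt[:k]

def sample_model(mean, decoder, rng):
    """Decode an uncorrelated standard-normal draw into a realization."""
    z = rng.standard_normal(decoder.shape[0])
    return mean + z @ decoder
```

    The compression ratio in this sketch is the number of model cells divided by `k`, mirroring the 200-500 ratios reported; the VAE differs in that its decoder is nonlinear, which is what lets it reproduce binary channelized structure.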

  4. Novel three-dimensionally ordered macroporous Fe3+-doped TiO2 photocatalysts for H2 production and degradation applications

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoqing; Xue, Chao; Yang, Bolun; Yang, Guidong

    2017-02-01

    Novel three-dimensionally ordered macroporous (3DOM) Fe3+-doped TiO2 photocatalysts were prepared using a colloidal crystal template method with low-cost raw materials including ferric trichloride, isopropanol, tetrabutyl titanate and polymethyl methacrylate. The as-prepared 3DOM Fe3+-doped TiO2 photocatalysts were characterized by various analytical techniques. TEM and SEM results showed that the obtained photocatalysts possess a well-ordered macroporous structure in three dimensional orientations. XPS and EDX analyses proved that Fe3+ ions were introduced into the TiO2 lattice; the doped Fe3+ ions can act as electron acceptor/donor centers that significantly enhance electron transfer from the bulk to the surface of TiO2, so that more electrons can take part in the oxygen reduction process, thereby decreasing the recombination rate of photogenerated charges. Meanwhile, the 3DOM architecture, with its interfacial chemical-reaction active sites and optical-absorption active sites, is remarkably favorable for reactant transfer and light trapping in the photoreaction process. As a result, the 3DOM Fe3+-doped TiO2 photocatalysts show considerably higher photocatalytic activity for the decomposition of Rhodamine B (RhB) and the generation of hydrogen under visible light irradiation, due to the synergistic effects of the open, interconnected macroporous network and metal ion doping.

  5. Rocket launcher: A novel reduction technique for posterior hip dislocations and review of current literature.

    PubMed

    Dan, Michael; Phillips, Alfred; Simonian, Marcus; Flannagan, Scott

    2015-06-01

    We provide a review of the literature on reduction techniques for posterior hip dislocations and present our experience with a novel technique for the reduction of acute posterior hip dislocations in the ED, the 'rocket launcher' technique. We present our results for six patients with prosthetic posterior hip dislocation treated in our rural ED, for whom patient demographics were recorded. The technique involves placing the patient's knee over the shoulder and holding the lower leg like a 'rocket launcher', allowing the physician's shoulder to work as a fulcrum in an ergonomically friendly manner for the reducer. We used Fisher's t-test for cohort analysis between reduction techniques. The mean patient age was 74 years (range 66 to 85 years). We had an 83% success rate. The one patient in whom the 'rocket launcher' failed was a hemi-arthroplasty patient in whom all other closed techniques also failed and who needed open reduction. When compared with the Allis (62% success rate), Whistler (60% success rate) and Captain Morgan (92% success rate) techniques, there was no statistically significant difference in the success of the reduction techniques. There were no neurovascular or periprosthetic complications. We have described a reduction technique for posterior hip dislocations in which placing the patient's knee over the shoulder and holding the lower leg like a 'rocket launcher' allows the physician's shoulder to work as a fulcrum, making it mechanically and ergonomically superior to standard techniques. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  6. Multimodal, high-dimensional, model-based, Bayesian inverse problems with applications in biomechanics

    NASA Astrophysics Data System (ADS)

    Franck, I. M.; Koutsourelakis, P. S.

    2017-01-01

    This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.

  7. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full, highly parameterized and CPU-intensive groundwater model and to explore the uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
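    The core null-space Monte Carlo idea can be illustrated on a linearized toy problem: parameter perturbations confined to the null space of the model Jacobian leave the calibrated fit unchanged to first order, so they can be sampled cheaply to explore uncertainty. All dimensions and values below are synthetic; this is a sketch of the concept, not the NSMC implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearized toy model: 5 observations depend on 12 parameters, so the
# Jacobian has a 7-dimensional null space (illustrative numbers only).
n_obs, n_par = 5, 12
J = rng.standard_normal((n_obs, n_par))          # sensitivity (Jacobian) matrix
p_cal = rng.standard_normal(n_par)               # "calibrated" parameter set

# SVD splits parameter space into a solution space and a null space.
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
V_null = Vt[rank:].T                             # orthonormal basis of the null space

# NSMC idea: random perturbations confined to the null space leave the
# (linearized) model fit unchanged while exploring parameter uncertainty.
perturbations = V_null @ rng.standard_normal((n_par - rank, 200))
p_samples = p_cal[:, None] + perturbations

fit_change = np.abs(J @ p_samples - (J @ p_cal)[:, None]).max()
print(f"max change in simulated observations: {fit_change:.2e}")
```

For a nonlinear model the null space only holds locally, which is why the method pairs these samples with a cheap gradient-based re-calibration step.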

  8. The Design-To-Cost Manifold

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1990-01-01

    Design-to-cost is a popular technique for controlling costs. Although qualitative techniques exist for implementing design to cost, quantitative methods are sparse. In the launch vehicle and spacecraft engineering process, the question of whether to minimize mass is usually an issue. The lack of quantification in this issue leads to arguments on both sides. This paper presents a mathematical technique which quantifies both the design-to-cost process and the mass/complexity issue. Parametric cost analysis generates and applies mathematical formulas called cost estimating relationships. In their most common forms, they are continuous and differentiable. This property permits the application of the mathematics of differentiable manifolds. Although the terminology sounds formidable, applying the techniques requires only a knowledge of linear algebra and ordinary differential equations, common subjects in undergraduate scientific and engineering curricula. When the cost c is expressed as a differentiable function of n system metrics, setting the cost c to a constant generates an (n-1)-dimensional subspace of the space of system metrics such that any set of metric values in that space satisfies the constant design-to-cost criterion. This space is a differentiable manifold to which all mathematical properties of differentiable manifolds apply. One important property is that an easily implemented system of ordinary differential equations exists which permits optimization of any function of the system metrics, mass for example, over the design-to-cost manifold. A dual set of equations defines the directions of maximum and minimum cost change. A simplified approximation of the PRICE H(TM) production cost model is used to generate this set of differential equations over [mass, complexity] space. The equations are solved in closed form to obtain the one-dimensional design-to-cost trade and design-for-cost spaces. Preliminary results indicate that cost is relatively insensitive to changes in mass and that the reduction of complexity, both of the manufacturing process and of the spacecraft, is dominant in reducing cost.
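    The construction described above can be summarized by a standard projected-gradient formulation; the equations below reconstruct that general idea and are not the paper's own closed-form results for the PRICE model.

```latex
% The constant design-to-cost criterion defines an (n-1)-dimensional
% manifold in the space of system metrics x = (x_1, \dots, x_n):
c(x_1, \dots, x_n) = c_0 .
% To minimize another metric m(x) (mass, for example) while remaining on
% the manifold, follow the gradient of m projected onto the tangent space:
\frac{dx}{dt} = -\Bigl( I - \frac{\nabla c \, \nabla c^{\mathsf{T}}}{\lVert \nabla c \rVert^{2}} \Bigr) \nabla m(x) .
% The dual directions of maximum and minimum cost change are
% \pm \nabla c / \lVert \nabla c \rVert , normal to the manifold.
```

Because the projector annihilates the component of the step along \(\nabla c\), the cost stays constant along the flow while \(m\) decreases, which is exactly the constant design-to-cost trade the paper describes.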

  9. The Law of Cosines for an "n"-Dimensional Simplex

    ERIC Educational Resources Information Center

    Ding, Yiren

    2008-01-01

    Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.
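    The result referenced in this abstract can be stated explicitly. The form below is the standard simplex law of cosines, reconstructed here from the vector-area (closure) identity rather than quoted from Ding's paper: for an n-simplex with facets \(F_0, \dots, F_n\) of \((n-1)\)-dimensional volumes \(A_0, \dots, A_n\) and dihedral angle \(\theta_{ij}\) between \(F_i\) and \(F_j\),

```latex
A_0^{2} = \sum_{i=1}^{n} A_i^{2} \;-\; 2 \sum_{1 \le i < j \le n} A_i A_j \cos\theta_{ij} .
% Sketch: the outward unit normals satisfy \sum_{i=0}^{n} A_i \mathbf{n}_i = 0,
% so A_0^2 = \bigl\lVert \sum_{i \ge 1} A_i \mathbf{n}_i \bigr\rVert^2,
% and \mathbf{n}_i \cdot \mathbf{n}_j = -\cos\theta_{ij} for i \ne j.
% When every \theta_{ij} = \pi/2 this reduces to the n-dimensional
% Pythagorean theorem A_0^2 = \sum_{i=1}^{n} A_i^2.
```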

  10. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    technique to present multiple 2D views was devised by D. Asimov. He assembled multiple two-dimensional scatter plot views of the hyper dimensional...Viewing Multidimensional Data”, D. Asimov, SIAM Journal on Scientific and Statistical Computing, vol. 6, no. 1, pp. 128-143, 1985. [2] “High-Dimensional

  11. Carcinoma of the anal canal: Intensity modulated radiation therapy (IMRT) versus three-dimensional conformal radiation therapy (3DCRT).

    PubMed

    Sale, Charlotte; Moloney, Phillip; Mathlum, Maitham

    2013-12-01

    Patients with anal canal carcinoma treated with standard conformal radiotherapy frequently experience severe acute and late toxicity reactions to the treatment area. Roohipour et al. (Dis Colon Rectum 2008; 51: 147-53) stated a patient's tolerance of chemoradiation to be an important predictor of treatment success. A new intensity modulated radiation therapy (IMRT) technique for anal carcinoma cases has been developed at the Andrew Love Cancer Centre aimed at reducing radiation to surrounding healthy tissue. A same-subject repeated measures design was used for this study, where five anal carcinoma cases at the Andrew Love Cancer Centre were selected. Conformal and IMRT plans were generated and dosimetric evaluations were performed. Each plan was prescribed a total of 54 Gray (Gy) over a course of 30 fractions to the primary site. The IMRT plans resulted in improved dosimetry to the planning target volume (PTV) and reduction in radiation to the critical structures (bladder, external genitalia and femoral heads). Statistically there was no difference between the IMRT and conformal plans in the dose to the small and large bowel; however, the bowel IMRT dose-volume histogram (DVH) doses were consistently lower. The IMRT plans were superior to the conformal plans with improved dose conformity and reduced radiation to the surrounding healthy tissue. Anecdotally it was found that patients tolerated the IMRT treatment better than the three-dimensional (3D) conformal radiation therapy. This study describes and compares the planning techniques.

  12. Carcinoma of the anal canal: Intensity modulated radiation therapy (IMRT) versus three-dimensional conformal radiation therapy (3DCRT)

    PubMed Central

    Sale, Charlotte; Moloney, Phillip; Mathlum, Maitham

    2013-01-01

    Introduction Patients with anal canal carcinoma treated with standard conformal radiotherapy frequently experience severe acute and late toxicity reactions to the treatment area. Roohipour et al. (Dis Colon Rectum 2008; 51: 147–53) stated a patient's tolerance of chemoradiation to be an important predictor of treatment success. A new intensity modulated radiation therapy (IMRT) technique for anal carcinoma cases has been developed at the Andrew Love Cancer Centre aimed at reducing radiation to surrounding healthy tissue. Methods A same-subject repeated measures design was used for this study, where five anal carcinoma cases at the Andrew Love Cancer Centre were selected. Conformal and IMRT plans were generated and dosimetric evaluations were performed. Each plan was prescribed a total of 54 Gray (Gy) over a course of 30 fractions to the primary site. Results The IMRT plans resulted in improved dosimetry to the planning target volume (PTV) and reduction in radiation to the critical structures (bladder, external genitalia and femoral heads). Statistically there was no difference between the IMRT and conformal plans in the dose to the small and large bowel; however, the bowel IMRT dose–volume histogram (DVH) doses were consistently lower. Conclusion The IMRT plans were superior to the conformal plans with improved dose conformity and reduced radiation to the surrounding healthy tissue. Anecdotally it was found that patients tolerated the IMRT treatment better than the three-dimensional (3D) conformal radiation therapy. This study describes and compares the planning techniques. PMID:26229623

  13. Carcinoma of the anal canal: Intensity modulated radiation therapy (IMRT) versus three-dimensional conformal radiation therapy (3DCRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, Charlotte; Moloney, Phillip; Mathlum, Maitham

    Patients with anal canal carcinoma treated with standard conformal radiotherapy frequently experience severe acute and late toxicity reactions to the treatment area. Roohipour et al. (Dis Colon Rectum 2008; 51: 147–53) stated a patient's tolerance of chemoradiation to be an important predictor of treatment success. A new intensity modulated radiation therapy (IMRT) technique for anal carcinoma cases has been developed at the Andrew Love Cancer Centre aimed at reducing radiation to surrounding healthy tissue. A same-subject repeated measures design was used for this study, where five anal carcinoma cases at the Andrew Love Cancer Centre were selected. Conformal and IMRT plans were generated and dosimetric evaluations were performed. Each plan was prescribed a total of 54 Gray (Gy) over a course of 30 fractions to the primary site. The IMRT plans resulted in improved dosimetry to the planning target volume (PTV) and reduction in radiation to the critical structures (bladder, external genitalia and femoral heads). Statistically there was no difference between the IMRT and conformal plans in the dose to the small and large bowel; however, the bowel IMRT dose–volume histogram (DVH) doses were consistently lower. The IMRT plans were superior to the conformal plans with improved dose conformity and reduced radiation to the surrounding healthy tissue. Anecdotally it was found that patients tolerated the IMRT treatment better than the three-dimensional (3D) conformal radiation therapy. This study describes and compares the planning techniques.

  14. Simultaneous schlieren photography and soot foil in the study of detonation phenomena

    NASA Astrophysics Data System (ADS)

    Kellenberger, Mark; Ciccarelli, Gaby

    2017-10-01

    The use of schlieren photography has been essential in unravelling the complex nature of high-speed combustion phenomena, but its line-of-sight integration makes it difficult to decisively determine the nature of multi-dimensional combustion wave propagation. Conventional schlieren alone makes it impossible to determine in what plane across the channel an observed structure may exist. To overcome this, a technique of simultaneous high-speed schlieren photography and soot foils was demonstrated that can be applied to the study of detonation phenomena. Using a kerosene lamp, soot was deposited on a glass substrate, resulting in a semi-transparent sheet through which schlieren source light could pass. In order to demonstrate the technique, experiments were carried out in mixtures of stoichiometric hydrogen-oxygen at initial pressures between 10 and 15 kPa. Compared to schlieren imaging obtained without a sooted foil, the high-speed video results show schlieren images with only a small reduction in contrast, with density gradients remaining clear. Areas of high temperature cause soot lofted from the foil to incandesce strongly, making it possible to track hot spots and flame location. Post-processing adjustments were demonstrated to compensate for camera sensitivity limitations and enable viewing of schlieren density gradients. High-resolution glass soot foils were produced that enable direct coupling of the schlieren video to triple-point trajectories seen on the soot foils, allowing for the study of three-dimensional propagation mechanisms of detonation waves.

  15. Jet Mixing Enhancement by Feedback Control

    NASA Technical Reports Server (NTRS)

    Glauser, Mark; Taylor, Jeffrey

    1999-01-01

    The objective of this work has been to produce methodologies for high speed jet noise reduction based on natural mechanisms and enhanced feedback control to affect frequencies and structures in a prescribed manner. In this effort the two-point hot wire measurements obtained in the Langley jet facility by Ukeiley were used in conjunction with linear stochastic estimation (LSE) to implement the LSE component of the complementary technique. This method combines the Proper Orthogonal Decomposition (POD) and LSE to provide an experimental low dimensional time dependent description of the flow field. From such a description it should be possible to identify short time high strain rate events in the jet which contribute to the noise. The main task completed for this effort is summarized: LSE experiments were performed at the downstream locations where the two point hot wire measurements had been obtained by Ukeiley. These experiments involved simultaneously sampling hot wire signals from a relatively coarse spatial grid in gamma and theta. From this simultaneous data, coupled with the two-point measurements of Ukeiley via the LSE component of the complementary technique, an experimental low dimensional description of the jet at 4, 5, 6, 7 and 8 diameters downstream was obtained for Mach numbers of 0.3 and 0.6. We first present an overview of the theory involved. We finish with a statement of the work performed and finally provide charts from a 1999 APS talk which summarize the results.
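    Linear stochastic estimation, the LSE step named above, amounts to estimating a field quantity from reference probe signals with coefficients built from two-point correlations. The NumPy sketch below demonstrates the idea on synthetic probe data; the probe count, coefficients, and noise level are all illustrative assumptions, not values from the Langley experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic surrogate for simultaneous measurements: 4 "hot-wire" reference
# probes and 1 field point, linearly correlated plus noise (all illustrative).
n_t, n_probe = 5000, 4
u_ref = rng.standard_normal((n_t, n_probe))          # probe time series
true_coeffs = np.array([0.8, -0.3, 0.5, 0.1])        # hypothetical coupling
u_field = u_ref @ true_coeffs + 0.1 * rng.standard_normal(n_t)

# LSE: choose coefficients A minimizing <(u_field - A u_ref)^2>, i.e. solve
# the normal equations built from two-point correlations: R_uu A = R_uf.
R_uu = u_ref.T @ u_ref / n_t                          # probe-probe correlations
R_uf = u_ref.T @ u_field / n_t                        # probe-field correlations
A = np.linalg.solve(R_uu, R_uf)

u_est = u_ref @ A                                     # low-order estimate of the field
corr = np.corrcoef(u_est, u_field)[0, 1]
print(f"estimated coefficients: {np.round(A, 2)}")
print(f"correlation of estimate with field signal: {corr:.3f}")
```

In the complementary technique, estimates of this kind are projected onto POD modes to yield a low-dimensional, time-dependent description of the jet.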

  16. Model-based Clustering of High-Dimensional Data in Astrophysics

    NASA Astrophysics Data System (ADS)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
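    One of the parsimonious models mentioned above constrains each mixture component to a diagonal covariance, cutting the parameter count from O(Kd²) to O(Kd). The sketch below implements that model with a minimal EM loop on synthetic two-cluster data; it is a NumPy illustration of the idea, not the R packages the paper reviews, and the cluster geometry is an assumption chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated Gaussian clusters in 20 dimensions (synthetic data).
d, n = 20, 400
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(4.0, 1.0, (n // 2, d))])
labels = np.repeat([0, 1], n // 2)

# Per-sample log density of a diagonal-covariance Gaussian.
def log_gauss_diag(X, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)

mu = X[[0, -1]].copy()          # one seed point from each cluster (deterministic init)
var = np.ones((2, d))
weights = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities from current parameters (log-space for stability)
    log_r = np.stack([np.log(weights[k]) + log_gauss_diag(X, mu[k], var[k])
                      for k in range(2)], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and diagonal variances
    nk = r.sum(axis=0)
    weights = nk / n
    mu = (r.T @ X) / nk[:, None]
    var = np.stack([(r[:, k, None] * (X - mu[k]) ** 2).sum(axis=0) / nk[k] + 1e-6
                    for k in range(2)])

pred = r.argmax(axis=1)
acc = max(np.mean(pred == labels), np.mean(pred != labels))  # account for label switching
print(f"clustering accuracy: {acc:.3f}")
```

The diagonal constraint is what keeps this tractable in high dimension: a full-covariance mixture in d = 20 would need 210 covariance parameters per component instead of 20.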

  17. Three-dimensional collimation of in-plane-propagating light using silicon micromachined mirror

    NASA Astrophysics Data System (ADS)

    Sabry, Yasser M.; Khalil, Diaa; Saadany, Bassam; Bourouina, Tarik

    2014-03-01

    We demonstrate light collimation of single-mode optical fibers using a deeply-etched three-dimensional curved micromirror on a silicon chip. The three-dimensional curvature of the mirror is controlled by a process combining deep reactive ion etching and isotropic etching of silicon. The produced surface is astigmatic, with an out-of-plane radius of curvature that is about one half the in-plane radius of curvature. For a 300-μm in-plane radius and an incident beam inclined in-plane at an angle of 45 degrees with respect to the principal axis, the reflected beam remains stigmatic, with about a 4.25 times reduction in the beam expansion angle in free space and about a 12-dB reduction in propagation losses when received by a limited-aperture detector.

  18. A systematic comparison of the closed shoulder reduction techniques.

    PubMed

    Alkaduhimi, H; van der Linde, J A; Willigenburg, N W; van Deurzen, D F P; van den Bekerom, M P J

    2017-05-01

    To identify the optimal technique for closed reduction for shoulder instability, based on success rates, reduction time, complication risks, and pain level. A PubMed and EMBASE query was performed, screening all relevant literature of closed reduction techniques mentioning the success rate written in English, Dutch, German, and Arabic. Studies with a fracture dislocation or lacking information on success rates for closed reduction techniques were excluded. We used the modified Coleman Methodology Score (CMS) to assess the quality of included studies and excluded studies with a poor methodological quality (CMS < 50). Finally, a meta-analysis was performed on the data from all studies combined. 2099 studies were screened for their title and abstract, of which 217 studies were screened full-text and finally 13 studies were included. These studies included 9 randomized controlled trials, 2 retrospective comparative studies, and 2 prospective non-randomized comparative studies. A combined analysis revealed that scapular manipulation is the most successful (97%), fastest (1.75 min), and least painful reduction technique (VAS 1.47); the "Fast, Reliable, and Safe" (FARES) method also scores high in terms of successful reduction (92%), reduction time (2.24 min), and intra-reduction pain (VAS 1.59); the traction-countertraction technique is highly successful (95%), but slower (6.05 min) and more painful (VAS 4.75). For closed reduction of anterior shoulder dislocations, the combined data from the selected studies indicate that scapular manipulation is the most successful and fastest technique, with the shortest mean hospital stay and least pain during reduction. The FARES method seems the best alternative.

  19. Large Field Photogrammetry Techniques in Aircraft and Spacecraft Impact Testing

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.

    2010-01-01

    The Landing and Impact Research Facility (LandIR) at NASA Langley Research Center is a 240 ft. high A-frame structure which is used for full-scale crash testing of aircraft and rotorcraft vehicles. Because the LandIR provides a unique capability to introduce impact velocities in the forward and vertical directions, it is also serving as the facility for landing tests on full-scale and sub-scale Orion spacecraft mass simulators. Recently, a three-dimensional photogrammetry system was acquired to assist with the gathering of vehicle flight data before, throughout and after the impact. This data provides the basis for the post-test analysis and data reduction. Experimental setups for pendulum swing tests on vehicles having both forward and vertical velocities can extend to 50 x 50 x 50 foot cubes, while weather, vehicle geometry, and other constraints make each experimental setup unique to each test. This paper will discuss the specific calibration techniques for large fields of views, camera and lens selection, data processing, as well as best practice techniques learned from using the large field of view photogrammetry on a multitude of crash and landing test scenarios unique to the LandIR.

  20. Detection and tracking of gas plumes in LWIR hyperspectral video sequence data

    NASA Astrophysics Data System (ADS)

    Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.

    2013-05-01

    Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few Principal Components, resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume. These include K-means, spectral clustering, and the Ginzburg-Landau functional.
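    The PCA "spectral filter" described above treats each pixel's spectrum as a row of a (pixels x bands) matrix and projects it onto the leading principal components. The NumPy sketch below does this on a synthetic frame; the frame size, band count, and three-signature mixing model are assumptions for illustration, not the LWIR data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for one hyperspectral frame: 64x64 pixels, 100 bands,
# built from 3 spectral signatures plus noise (shapes are illustrative).
h, w, bands, k = 64, 64, 100, 3
signatures = rng.standard_normal((k, bands))       # hypothetical material spectra
abundances = rng.random((h * w, k))                # per-pixel mixing weights
frame = abundances @ signatures + 0.1 * rng.standard_normal((h * w, bands))

# PCA as a spectral filter: project each pixel's spectrum onto the first
# few principal components of the mean-centered (pixels x bands) matrix.
mean = frame.mean(axis=0)
U, s, Vt = np.linalg.svd(frame - mean, full_matrices=False)
scores = (frame - mean) @ Vt[:k].T                 # k-band "image" for segmentation
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"reduced shape: {scores.reshape(h, w, k).shape}")
print(f"variance retained by {k} components: {explained:.3f}")
```

The resulting k-band scores image is what the clustering stage (K-means, spectral clustering, or the Ginzburg-Landau functional) would operate on, after the Midway equalization step reduces frame-to-frame flicker.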
