Sample records for high dimensional problems

  1. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
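
    As a concrete reference point for the scheme family studied above, the following is a minimal sketch of the standard fifth-order WENO reconstruction with the Jiang-Shu smoothness indicators, applied to a periodic 1D field. The grid, test function, and function name are ours; this illustrates the generic scheme, not the authors' compressible-flow solver.

```python
# Minimal sketch: fifth-order WENO reconstruction of left-biased states at
# cell interfaces i+1/2, using the standard Jiang-Shu smoothness indicators.
# Illustrative only -- not the Navier-Stokes solver of the paper.
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Reconstruct v at interface i+1/2 from cell averages v[i-2..i+2] (periodic)."""
    vm2, vm1, v0, vp1, vp2 = (np.roll(v, k) for k in (2, 1, 0, -1, -2))
    # Candidate third-order stencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Smoothness indicators (Jiang & Shu, 1996)
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights built from the linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

x = np.linspace(0, 2*np.pi, 200, endpoint=False)
print(weno5_reconstruct(np.sin(x))[:3])  # reconstructed interface values
```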

  2. High-Fidelity Real-Time Simulation on Deployed Platforms

    DTIC Science & Technology

    2010-08-26

    three-dimensional transient heat conduction "Swiss Cheese" problem; and a three-dimensional unsteady incompressible Navier-Stokes low-Reynolds-number ... our approach with three examples: a two-dimensional Helmholtz acoustics "horn" problem; a three-dimensional transient heat conduction "Swiss Cheese" ... solutions; a transient linear heat conduction problem in a three-dimensional "Swiss Cheese" configuration Ω, to illustrate treatment of many

  3. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality; that is, their computational costs become intractable for problems involving a large number of uncertain parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate $O(n^{-1/2})$, where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via a polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate $O(n^{-1})$, hence much faster than MC. The second algorithm, which does not require an exact SDR subspace, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
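
    The inverse regression step at the heart of IRUQ can be made concrete with a few lines of sliced inverse regression (SIR). The sketch below, with our own toy model and variable names, estimates an SDR direction from input-output samples; it illustrates the generic SIR estimator, not the authors' implementation.

```python
# Minimal sketch of sliced inverse regression (SIR), the kind of inverse
# regression IRUQ builds on to estimate the SDR subspace. The toy model and
# names are ours, not the authors' code.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    n, p = X.shape
    mu, sd = X.mean(0), X.std(0)
    Z = (X - mu) / sd                      # standardize inputs
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))                   # covariance of slice means of Z
    for idx in slices:
        m = Z[idx].mean(0)
        M += len(idx) / n * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    W = vecs[:, ::-1][:, :n_dirs]          # top eigenvectors
    return W / sd[:, None]                 # map back to the original scale

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))        # 50 "uncertain parameters"
y = np.tanh(X[:, 0] + 0.5 * X[:, 1]) + 0.01 * rng.standard_normal(2000)
W = sir_directions(X, y, n_dirs=1)
print(W[:4, 0])  # loads mainly on the first two coordinates
```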

  4. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ at a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the inverse regression method it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.

  5. Mining High-Dimensional Data

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common, and mining high-dimensional data is an urgent problem of great practical importance. However, there are unique challenges in mining data of high dimensionality, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification, and discuss how these methods deal with the challenges of high dimensionality.

  6. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
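
    For a linear-Gaussian toy problem, the kind of likelihood-informed parameter subspace described above can be read off from the dominant eigenvectors of the prior-preconditioned data-misfit Hessian. The sketch below is our illustration under those simplifying assumptions (identity prior, random low-rank forward model), not the paper's algorithm or examples.

```python
# Hedged sketch: for a linear forward model G with Gaussian noise and prior,
# take a likelihood-informed parameter subspace from the dominant eigenvectors
# of H = L_pr^T G^T Gamma_obs^{-1} G L_pr. Toy sizes and names are ours.
import numpy as np

rng = np.random.default_rng(1)
p, d, rank = 200, 1000, 20                # parameter dim, data dim, true rank
G = (rng.standard_normal((d, rank)) / np.sqrt(d)) @ \
    (rng.standard_normal((rank, p)) / np.sqrt(p))
L_pr = np.eye(p)                          # prior covariance factor (identity here)
sigma = 0.1                               # observation noise std

H = L_pr.T @ G.T @ G @ L_pr / sigma**2    # prior-preconditioned misfit Hessian
vals, vecs = np.linalg.eigh(H)
vals, vecs = vals[::-1], vecs[:, ::-1]

r = int(np.sum(vals > 1.0))               # keep directions where data beats prior
U_r = L_pr @ vecs[:, :r]                  # likelihood-informed parameter basis
print(f"retained {r} of {p} parameter directions")
```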

  7. Uniform high order spectral methods for one and two dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Shu, Chi-Wang

    1991-01-01

    Uniform high order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of the Essentially Non-Oscillatory (ENO) polynomial interpolations into the spectral methods. The authors present numerical results for the inviscid Burgers' equation, and for the one dimensional Euler equations including the interactions between a shock wave and density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two dimensional shock wave and a rotating vortex is simulated.

  8. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where traditional polynomial chaos methods, and most existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
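
    The building block applied inside each subdomain is an ordinary low-dimensional PCE response surface. As a hedged illustration (a toy one-dimensional random input and our own function names), the sketch below fits a probabilists' Hermite PCE by least squares and reads the mean and variance off the coefficients.

```python
# Minimal sketch of a 1D Hermite polynomial chaos response surface fit by
# least squares, the kind of low-dimensional local surrogate described above.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def fit_pce(xi, q, degree):
    """Fit q(xi) with a 1D probabilists' Hermite PCE; xi ~ N(0,1) samples."""
    V = hermevander(xi, degree)           # He_0..He_degree at the sample points
    coef, *_ = np.linalg.lstsq(V, q, rcond=None)
    return coef

rng = np.random.default_rng(2)
xi = rng.standard_normal(500)             # one local random dimension
q = np.exp(0.3 * xi)                      # local QoI samples (toy model)
c = fit_pce(xi, q, degree=4)
# For He_n with standard normal input: mean = c[0], variance = sum_n n! c_n^2
var = sum(factorial(n) * c[n]**2 for n in range(1, 5))
print(c[0], var)  # compare with the exact lognormal mean exp(0.045)
```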

  9. An algorithm for generating modular hierarchical neural network classifiers: a step toward larger scale applications

    NASA Astrophysics Data System (ADS)

    Roverso, Davide

    2003-08-01

    Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e., a high-dimensional input space), the many-class problem (i.e., a high-dimensional output space) is a major obstacle when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is proposed here as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.

  10. Three-dimensional finite element analysis for high velocity impact. [of projectiles from space debris]

    NASA Technical Reports Server (NTRS)

    Chan, S. T. K.; Lee, C. H.; Brashears, M. R.

    1975-01-01

    A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.

  11. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.

  12. Two-and three-dimensional unsteady lift problems in high-speed flight

    NASA Technical Reports Server (NTRS)

    Lomax, Harvard; Heaslet, Max A; Fuller, Franklyn B; Sluder, Loma

    1952-01-01

    The problem of transient lift on two- and three-dimensional wings flying at high speeds is discussed as a boundary-value problem for the classical wave equation. Kirchhoff's formula is applied so that the analysis is reduced, just as in the steady state, to an investigation of sources and doublets. The applications include the evaluation of indicial lift and pitching-moment curves for two-dimensional sinking and pitching wings flying at Mach numbers equal to 0, 0.8, 1.0, 1.2 and 2.0. Results for the sinking case are also given for a Mach number of 0.5. In addition, the indicial functions for supersonic-edged triangular wings in both forward and reverse flow are presented and compared with the two-dimensional values.

  13. Computational unsteady aerodynamics for lifting surfaces

    NASA Technical Reports Server (NTRS)

    Edwards, John W.

    1988-01-01

    Two-dimensional problems are solved using numerical techniques. The Navier-Stokes equations are studied both in the vorticity-stream function formulation, which appears to be the optimal choice for two-dimensional problems using a storage approach, and in the velocity-pressure formulation, which minimizes the number of unknowns in three-dimensional problems. Analysis shows that compact, centered, conservative second-order schemes for the vorticity equation are the most robust for high Reynolds number flows. Serious difficulties remain in the choice of turbulence models that keep reasonable CPU efficiency.

  14. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
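
    The decomposition idea can be illustrated on a toy sparse system: treat the sparsity pattern of the normal matrix as a graph, find its connected components, and solve each block as an independent least squares problem. The sketch below is our schematic of that strategy, not the authors' tomography code.

```python
# Sketch: view the (parameter x parameter) normal-matrix pattern as a graph,
# split it into connected components, and solve each block independently with
# sparse least squares. The toy block matrix is our own.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components
from scipy.sparse.linalg import lsqr

# Block-diagonal sparse design matrix A (rays x model cells) and data b
A = sp.block_diag([sp.random(80, 40, density=0.1, random_state=k)
                   for k in range(3)]).tocsr()
b = np.ones(A.shape[0])

pattern = (A.T @ A != 0)                   # parameter interaction graph
n_comp, labels = connected_components(pattern, directed=False)
x = np.zeros(A.shape[1])
for c in range(n_comp):
    cols = np.where(labels == c)[0]
    rows = np.unique(A[:, cols].nonzero()[0])
    if rows.size == 0:                     # skip parameters hit by no ray
        continue
    x[cols] = lsqr(A[rows][:, cols], b[rows])[0]
print(n_comp, "independent subproblems solved")
```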

  15. High-dimensional vector semantics

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    In this paper we explore the “vector semantics” problem from the perspective of “almost orthogonal” property of high-dimensional random vectors. We show that this intriguing property can be used to “memorize” random vectors by simply adding them, and we provide an efficient probabilistic solution to the set membership problem. Also, we discuss several applications to word context vector embeddings, document sentences similarity, and spam filtering.
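
    The "memorize by adding" construction is easy to reproduce numerically: for random ±1 vectors in high dimension, a stored vector's dot product with the sum concentrates near 1 while a non-member's concentrates near 0. A small demonstration, with dimensions and the threshold chosen by us:

```python
# Quick numerical illustration of the "almost orthogonal" property exploited
# above: bundle a few random +/-1 hypervectors by summation, then test set
# membership with a dot-product threshold. Parameters are ours.
import numpy as np

rng = np.random.default_rng(3)
d, n_stored = 10_000, 20
vocab = rng.choice([-1.0, 1.0], size=(1000, d))   # random "word" vectors
memory = vocab[:n_stored].sum(axis=0)             # "memorize" by adding

scores = vocab @ memory / d                       # ~1 for members, ~0 otherwise
members = scores > 0.5
print(members[:n_stored].all(), members[n_stored:].any())  # True False (w.h.p.)
```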

  16. CELFE/NASTRAN Code for the Analysis of Structures Subjected to High Velocity Impact

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1978-01-01

    The CELFE (Coupled Eulerian Lagrangian Finite Element)/NASTRAN code is a three-dimensional finite element code with the capability to analyze structures subjected to high velocity impact. The local response is predicted by CELFE and, for large problems, the far-field impact response is predicted by NASTRAN. The coupling of the CELFE code with NASTRAN (the CELFE/NASTRAN code) and the application of the code to selected three-dimensional high velocity impact problems are described.

  17. Arbitrarily high-order time-stepping schemes based on the operator spectrum theory for high-dimensional nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Wu, Xinyuan

    2017-07-01

    In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on the operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under suitable smoothness assumptions. For two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with discrete fast sine/cosine transforms can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one- and two-dimensional cases, and the schemes analysed here lend themselves equally to the higher-dimensional case. The numerical simulation is implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
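
    For the abstract second-order problem the paper works with, the operator-variation-of-constants formula takes the following standard form (our paraphrase of the usual statement for u'' + Au = f(u) with positive semi-definite A; see the paper for the precise setting):

```latex
\[
u(t) = \cos\!\bigl(t A^{1/2}\bigr)\,u_0
     + A^{-1/2}\sin\!\bigl(t A^{1/2}\bigr)\,u_0'
     + \int_0^t A^{-1/2}\sin\!\bigl((t-s)A^{1/2}\bigr)\,f\bigl(u(s)\bigr)\,\mathrm{d}s .
\]
```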

  18. A spiking neural network model of model-free reinforcement learning with high-dimensional sensory input and perceptual ambiguity.

    PubMed

    Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji

    2015-01-01

    A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations that are noisy or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problems is formally known as partially observable reinforcement learning (PORL) problems, which generalize reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.
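
    The quantity the spiking network approximates is the free energy of a restricted Boltzmann machine (RBM) with binary hidden units, which has the closed form F(v) = -a·v - sum_j log(1 + exp(b_j + W_j·v)). A minimal numerical sketch with random placeholder weights (not the trained model of the paper):

```python
# Hedged sketch of the RBM free energy the spiking network approximates.
# Weights here are random placeholders, not the paper's trained model.
import numpy as np

def rbm_free_energy(v, W, a, b):
    # F(v) = -a.v - sum_j log(1 + exp(b_j + W_j . v))
    return -a @ v - np.sum(np.logaddexp(0.0, b + W @ v))

rng = np.random.default_rng(4)
n_vis, n_hid = 64, 16                      # e.g. a coarse visual observation
W = 0.1 * rng.standard_normal((n_hid, n_vis))
a, b = np.zeros(n_vis), np.zeros(n_hid)
v = rng.integers(0, 2, n_vis).astype(float)
print(rbm_free_energy(v, W, a, b))         # negative free energy ~ value estimate
```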

  19. A Spiking Neural Network Model of Model-Free Reinforcement Learning with High-Dimensional Sensory Input and Perceptual Ambiguity

    PubMed Central

    Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji

    2015-01-01

    A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations that are noisy or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problems is formally known as partially observable reinforcement learning (PORL) problems, which generalize reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. PMID:25734662

  20. Quantum states and optical responses of low-dimensional electron hole systems

    NASA Astrophysics Data System (ADS)

    Ogawa, Tetsuo

    2004-09-01

    Quantum states and their optical responses of low-dimensional electron-hole systems in photoexcited semiconductors and/or metals are reviewed from a theoretical viewpoint, stressing the electron-hole Coulomb interaction, the excitonic effects, the Fermi-surface effects and the dimensionality. Recent progress of theoretical studies is stressed and important problems to be solved are introduced. We cover not only single-exciton problems but also few-exciton and many-exciton problems, including electron-hole plasma situations. Dimensionality of the Wannier exciton is clarified in terms of its linear and nonlinear responses. We also discuss a biexciton system, exciton bosonization technique, high-density degenerate electron-hole systems, gas-liquid phase separation in an excited state and the Fermi-edge singularity due to a Mahan exciton in a low-dimensional metal.

  21. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals to sensors and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) is large, each sensor has a large number of possible actions at a given time; therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data; for example, DRL methods have been applied to image processing for autonomous driving, where a 256x256 RGB image carries 196,608 input values (256 x 256 x 3) and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.

  22. Modified Cheeger and Ratio Cut Methods Using the Ginzburg-Landau Functional for Classification of High-Dimensional Data

    DTIC Science & Technology

    2016-02-01

    Recent advances in clustering have included continuous relaxations of the Cheeger cut ... fully nonlinear Cheeger cut problem, as well as the ratio cut optimization task. Both problems are connected to total variation minimization, and the ...

  23. Nonlinear Conservation Laws and Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Leveque, Randall J.

    Contents include: Introduction; Software; Notation; Classification of Differential Equations; Derivation of Conservation Laws; The Euler Equations of Gas Dynamics; Dissipative Fluxes; Source Terms; Radiative Transfer and Isothermal Equations; Multi-dimensional Conservation Laws; The Shock Tube Problem; Mathematical Theory of Hyperbolic Systems; Scalar Equations; Linear Hyperbolic Systems; Nonlinear Systems; The Riemann Problem for the Euler Equations; Numerical Methods in One Dimension; Finite Difference Theory; Finite Volume Methods; Importance of Conservation Form - Incorrect Shock Speeds; Numerical Flux Functions; Godunov's Method; Approximate Riemann Solvers; High-Resolution Methods; Other Approaches; Boundary Conditions; Source Terms and Fractional Steps; Unsplit Methods; Fractional Step Methods; General Formulation of Fractional Step Methods; Stiff Source Terms; Quasi-stationary Flow and Gravity; Multi-dimensional Problems; Dimensional Splitting; Multi-dimensional Finite Volume Methods; Grids and Adaptive Refinement; Computational Difficulties; Low-Density Flows; Discrete Shocks and Viscous Profiles; Start-Up Errors; Wall Heating; Slow-Moving Shocks; Grid Orientation Effects; Grid-Aligned Shocks; Magnetohydrodynamics; The MHD Equations; One-Dimensional MHD; Solving the Riemann Problem; Nonstrict Hyperbolicity; Stiffness; The Divergence of B; Riemann Problems in Multi-dimensional MHD; Staggered Grids; The 8-Wave Riemann Solver; Relativistic Hydrodynamics; Conservation Laws in Spacetime; The Continuity Equation; The 4-Momentum of a Particle; The Stress-Energy Tensor; Finite Volume Methods; Multi-dimensional Relativistic Flow; Gravitation and General Relativity; References.
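
    As a pointer to the book's starting point, the following is a minimal first-order Godunov finite volume method for Burgers' equation, using the compact exact Riemann flux for a convex flux function; the grid and initial data are our own illustration.

```python
# Minimal first-order Godunov method for Burgers' equation u_t + (u^2/2)_x = 0,
# the basic finite volume building block developed before the high-resolution
# and multi-dimensional extensions. Setup is ours.
import numpy as np

def godunov_flux(ul, ur):
    # Compact exact Riemann flux for the convex flux f(u) = u^2 / 2
    return np.maximum(np.maximum(ul, 0.0)**2, np.minimum(ur, 0.0)**2) / 2.0

nx, cfl, t_end = 400, 0.9, 0.5
dx = 2.0 / nx
x = np.linspace(-1, 1, nx, endpoint=False)
u = np.where(x < 0, 1.0, 0.0)             # shock-tube-like Riemann data
t = 0.0
while t < t_end:
    dt = cfl * dx / max(np.abs(u).max(), 1e-12)
    F = godunov_flux(u, np.roll(u, -1))   # flux at each cell's right interface
    u -= dt / dx * (F - np.roll(F, 1))    # conservative update (periodic)
    t += dt
print(u[::50])  # the shock has moved right at speed 1/2
```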

  24. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope with this problem effectively, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  25. Hierarchical Discriminant Analysis.

    PubMed

    Lu, Di; Ding, Chuntao; Xu, Jinliang; Wang, Shangguang

    2018-01-18

    The Internet of Things (IoT) generates large amounts of high-dimensional sensor data. The processing of high-dimensional data (e.g., data visualization and data classification) is very difficult, so it requires excellent subspace learning algorithms that learn a latent subspace preserving the intrinsic structure of the high-dimensional data while abandoning the least useful information for subsequent processing. In this context, many subspace learning algorithms have been presented. However, in the process of transforming high-dimensional data into a low-dimensional space, the huge difference between the sum of inter-class distances and the sum of intra-class distances for distinct data may cause a bias problem: the impact of the intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It minimizes the sum of intra-class distances first, and then maximizes the sum of inter-class distances. This proposed method balances the bias from the inter-class and that from the intra-class to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.
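
    One way to read the two-stage idea is through standard LDA-style scatter matrices: first keep directions with small intra-class scatter, then maximize inter-class scatter inside that subspace. The sketch below is our interpretation for illustration, not the authors' exact HDA algorithm.

```python
# Hedged sketch of a two-stage discriminant projection using standard
# within-class (Sw) and between-class (Sb) scatter matrices. Our reading
# of the idea, not the authors' HDA code.
import numpy as np

def scatter_matrices(X, y):
    mu = X.mean(0)
    Sw = np.zeros((X.shape[1],) * 2)
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    return Sw, Sb

rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((100, 20)) + 3 * k for k in range(3)])
y = np.repeat(np.arange(3), 100)
Sw, Sb = scatter_matrices(X, y)

w1, V1 = np.linalg.eigh(Sw)
P = V1[:, :10]                             # stage 1: low intra-class scatter
w2, V2 = np.linalg.eigh(P.T @ Sb @ P)
W = P @ V2[:, ::-1][:, :2]                 # stage 2: high inter-class scatter
print((X @ W).shape)                       # final 2D embedding
```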

  26. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used for both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second-order statistics in analyzing high dimensional data is recognized; by investigating the characteristics of high dimensional data, the reason why the second-order statistics must be taken into account is suggested. Recognizing this importance, a method to visualize statistics using a color code is proposed, so that one can easily extract and compare the first- and second-order statistics.

  27. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    PubMed

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and it requires no tuning, bandwidth, or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  28. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

    Building recognition is a nontrivial task in computer vision research that can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: (1) extracted low-level features cannot reveal the true semantic concepts; and (2) they usually involve high-dimensional data, which incurs heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme that integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances recognition accuracy.

  29. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.

  30. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems is usually their high dimensionality, and standard statistical methods prove computationally intractable. This paper develops, and addresses the theoretical convergence of, a new high-dimensional Monte Carlo approach called the localized ensemble Kalman smoother.
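
    The role of localization can be made concrete with a stochastic ensemble Kalman analysis step in which the sample covariance is tapered elementwise by a distance-based mask, suppressing the spurious long-range correlations a small ensemble produces. The sketch below is a generic illustration of that idea (our toy sizes and Gaussian taper), not the paper's smoother.

```python
# Hedged sketch of a localized (stochastic) ensemble Kalman analysis step.
# Generic illustration with our own toy sizes, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(6)
n, m, N = 100, 10, 20                       # state dim, obs dim, ensemble size
X = rng.standard_normal((n, N))             # forecast ensemble
H = np.zeros((m, n))
H[np.arange(m), np.arange(0, n, n // m)] = 1.0   # observe every 10th state
R = 0.1 * np.eye(m)
y = rng.standard_normal(m)

A = X - X.mean(1, keepdims=True)
P = A @ A.T / (N - 1)                       # raw sample covariance
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P_loc = P * np.exp(-(dist / 10.0) ** 2)     # Schur-product Gaussian taper

K = P_loc @ H.T @ np.linalg.solve(H @ P_loc @ H.T + R, np.eye(m))
for k in range(N):                          # perturbed-observation update
    X[:, k] += K @ (y + rng.multivariate_normal(np.zeros(m), R) - H @ X[:, k])
print(X.mean(1)[:5])                        # analysis ensemble mean (first entries)
```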

  31. High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems

    NASA Technical Reports Server (NTRS)

    Makivic, Miloje S.

    1996-01-01

    This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.

  32. Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions

    NASA Astrophysics Data System (ADS)

    Li, Xiaofei; Lee, Hyundae; Wang, Yuliang

    2017-08-01

    This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, and is computationally challenging. In this paper we seek to transfer the original problem to a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and rigorously derive the asymptotic expansion of the mean first passage time with high order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.

  33. High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.

  34. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin

    PubMed Central

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-01-01

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868

  35. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    PubMed

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high dimensional input features onto a low dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights into these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems.
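
    For reference, the core SOM update that HRSOMs scale up is only a few lines: find the best-matching unit, then pull its grid neighborhood toward the input with a decaying learning rate and radius. A toy sketch with our own parameters (no attempt to reproduce the GPU batch algorithms behind high-resolution maps):

```python
# Minimal self-organizing map training loop, to make the SOM update concrete.
import numpy as np

rng = np.random.default_rng(7)
grid = 20                                   # a 20x20 map (HRSOMs use far larger grids)
W = rng.random((grid, grid, 3))             # weights for 3D inputs (e.g. colors)
gy, gx = np.mgrid[0:grid, 0:grid]

X = rng.random((5000, 3))
for t, x in enumerate(X):
    lr = 0.5 * (1 - t / len(X))             # decaying learning rate
    radius = 1 + (grid / 2) * (1 - t / len(X))
    d = ((W - x) ** 2).sum(axis=2)          # distance of every unit to the input
    by, bx = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
    # Neighborhood-weighted update pulls nearby units toward the input
    h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius ** 2))
    W += lr * h[..., None] * (x - W)
print(W.reshape(-1, 3)[:3])
```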

  36. High-resolution two dimensional advective transport

    USGS Publications Warehouse

    Smith, P.E.; Larock, B.E.

    1989-01-01

    The paper describes a two-dimensional high-resolution scheme for advective transport that is based on an Eulerian-Lagrangian method with a flux limiter. The scheme is applied to the problem of pure advection of a rotated Gaussian hill and shown to preserve the monotonicity property of the governing conservation law.

  37. [Application Progress of Three-dimensional Laser Scanning Technology in Medical Surface Mapping].

    PubMed

    Zhang, Yonghong; Hou, He; Han, Yuchuan; Wang, Ning; Zhang, Ying; Zhu, Xianfeng; Wang, Mingshi

    2016-04-01

    The booming three-dimensional laser scanning technology can efficiently and effectively acquire the spatial three-dimensional coordinates of a detected object's surface and reconstruct the image at high speed, with high precision and a large capacity of information. Being radiation-free and non-contact, and offering visualization capability, it is increasingly popular in three-dimensional medical surface mapping. This paper reviews the applications and developments of three-dimensional laser scanning technology in the medical field, especially in stomatology, plastic surgery, and orthopedics. Furthermore, the paper discusses future application prospects as well as the biomedical engineering problems the technology will encounter.

  38. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
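
    The functional ANOVA decomposition underlying the adaptive selection has the standard form below (our rendering); adaptivity amounts to truncating the sum to the important low-dimensional terms and building PCE surrogates only for those:

```latex
\[
f(x_1,\dots,x_d) \;=\; f_0
  \;+\; \sum_{i=1}^{d} f_i(x_i)
  \;+\; \sum_{1 \le i < j \le d} f_{ij}(x_i, x_j)
  \;+\; \cdots
  \;+\; f_{1,2,\dots,d}(x_1,\dots,x_d).
\]
```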

  39. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Basis selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrates great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.

  40. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance for solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, while TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore unknown regions, while TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224

  41. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance for solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, while TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore unknown regions, while TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications.

  42. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of recent developments in theory, methods, and implementations for high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
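
    The penalized likelihood framework surveyed here can be summarized in one display (generic form, our notation): the estimator minimizes a negative log-likelihood plus a coordinate-separable penalty, with the lasso penalty p_lambda(t) = lambda*t and the non-concave SCAD penalty as special cases:

```latex
\[
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^p}
  \Bigl\{ -\ell(\beta) \;+\; \sum_{j=1}^{p} p_{\lambda}\bigl(|\beta_j|\bigr) \Bigr\}.
\]
```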

  43. Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations]

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Hathaway, A. W.

    1978-01-01

    Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
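
    Schematically (in our notation, not the report's), the approximate factorization replaces one coupled implicit 3D update with a product of three one-dimensional factors, each requiring only easily inverted banded matrices:

```latex
\[
\bigl(I + \Delta t\,A_x\bigr)\bigl(I + \Delta t\,A_y\bigr)\bigl(I + \Delta t\,A_z\bigr)\,
\Delta u \;=\; \Delta t\,R(u^{n}),
\qquad u^{n+1} = u^{n} + \Delta u .
\]
```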

  4. Modal Ring Method for the Scattering of Electromagnetic Waves

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal ring method for electromagnetic scattering from perfectly electric conducting (PEC) symmetrical bodies is presented. The scattering body is represented by a line of finite elements (triangular) on its outer surface. The infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The modal ring method effectively reduces the two dimensional scattering problem to a one-dimensional problem similar to the method of moments. The modal element method is capable of handling very high frequency scattering because it has a highly banded solution matrix.

  5. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem; therefore, we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" of the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
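
    A minimal sketch of the POD step described above, building a reduced basis from snapshot vectors via the SVD (the DEIM treatment of the nonlinearity is not shown):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Build a POD basis from a snapshot matrix (n_dof x n_snapshots).
    Keeps the leading left singular vectors that capture the requested
    fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]                 # reduced basis, n_dof x r

# Reduced approximation of a new state u:
#   coeffs = basis.T @ u; u_approx = basis @ coeffs
```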

  6. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    PubMed

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  7. Asymptotics of empirical eigenstructure for high dimensional spiked covariance

    PubMed Central

    Wang, Weichen

    2017-01-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies. PMID:28835726

  8. High-dimensional cluster analysis with the Masked EM Algorithm

    PubMed Central

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
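
    The following is a loose sketch of the masking idea only, not the published Masked EM algorithm (whose virtual-ensemble treatment of masked features is more careful): responsibilities are computed from each point's unmasked features, so uninformative dimensions contribute nothing to the likelihood.

```python
import numpy as np
from scipy.stats import norm

def masked_estep(X, M, means, stds, weights):
    """Responsibilities for a diagonal-covariance Gaussian mixture in
    which each point only 'sees' its unmasked features. X is n x d,
    M is a boolean mask of the same shape; masked entries contribute
    zero to the per-point log-likelihood."""
    n, d = X.shape
    k = len(weights)
    logr = np.zeros((n, k))
    for j in range(k):
        ll = norm.logpdf(X, means[j], stds[j])   # n x d per-feature log-lik
        logr[:, j] = np.log(weights[j]) + np.where(M, ll, 0.0).sum(axis=1)
    logr -= logr.max(axis=1, keepdims=True)      # stabilize the softmax
    r = np.exp(logr)
    return r / r.sum(axis=1, keepdims=True)      # n x k responsibilities
```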

  9. Multimodal, high-dimensional, model-based, Bayesian inverse problems with applications in biomechanics

    NASA Astrophysics Data System (ADS)

    Franck, I. M.; Koutsourelakis, P. S.

    2017-01-01

    This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.

  10. The resistance of an n-dimensional tetrahedron

    NASA Astrophysics Data System (ADS)

    Griffiths, Martin

    2013-01-01

    We consider here a problem that is suitable for introducing high-school students to the notion of generalizing shapes and solids to n dimensions. In particular, we calculate the effective resistance between any two vertices of an n-dimensional tetrahedron whose edges are each 1-Ω resistors. This leads, in a natural way, to more demanding problems, and indeed ideas for more advanced work in this area are also suggested.
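
    For reference, the result alluded to can be obtained by a standard symmetry-and-superposition argument (editor's sketch, not quoted from the paper). The n-dimensional tetrahedron is the complete graph on N = n + 1 vertices with unit resistors. Inject a current I at vertex A and extract I/N at every vertex; by symmetry each of the n edges at A carries I/N. Superposing the mirrored configuration (inject I/N everywhere, extract I at B) doubles the current in edge AB, so

\[
R_{AB} \;=\; \frac{V_{AB}}{I} \;=\; \frac{2}{N} \;=\; \frac{2}{n+1}\ \Omega .
\]

    The familiar checks agree: n = 1 gives 1 Ω (a single resistor), n = 2 gives 2/3 Ω (triangle), n = 3 gives 1/2 Ω (ordinary tetrahedron).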

  11. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    PubMed

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  12. Free boundary problems in shock reflection/diffraction and related transonic flow problems

    PubMed Central

    Chen, Gui-Qiang; Feldman, Mikhail

    2015-01-01

    Shock waves are steep wavefronts that are fundamental in nature, especially in high-speed fluid flows. When a shock hits an obstacle, or a flying body meets a shock, shock reflection/diffraction phenomena occur. In this paper, we show how several long-standing shock reflection/diffraction problems can be formulated as free boundary problems, discuss some recent progress in developing mathematical ideas, approaches and techniques for solving these problems, and present some further open problems in this direction. In particular, these shock problems include von Neumann's problem for shock reflection–diffraction by two-dimensional wedges with concave corner, Lighthill's problem for shock diffraction by two-dimensional wedges with convex corner, and Prandtl-Meyer's problem for supersonic flow impinging onto solid wedges, which are also fundamental in the mathematical theory of multidimensional conservation laws. PMID:26261363

  13. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.

  14. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

    Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables, some or all of which may have many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.

  15. Engineering two-photon high-dimensional states through quantum interference

    PubMed Central

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a smaller number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  16. User's guide for NASCRIN: A vectorized code for calculating two-dimensional supersonic internal flow fields

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1984-01-01

    A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.

  17. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
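
    On a single machine the graph itself is a few lines with standard tooling; the paper's contribution is distributing this computation across processors with message passing, which the single-node baseline sketched below does not attempt:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Single-node knn-graph baseline; the distributed, message-passing
# construction is the paper's contribution and is not reproduced here.
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))           # high-dimensional point set
A = kneighbors_graph(X, n_neighbors=10, mode="distance")
# A is a sparse matrix: A[i, j] = distance from point i to neighbor j
print(A.shape, A.nnz)                            # (10000, 10000) 100000
```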

  18. Using Betweenness Centrality to Identify Manifold Shortcuts

    PubMed Central

    Cukierski, William J.; Foran, David J.

    2010-01-01

    High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well-known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges which connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge remains an open and difficult problem. This work introduces a divisive, edge-removal method based on graph betweenness centrality which can robustly identify manifold-shorting edges. The problem of graph construction in high dimension is discussed and the proposed algorithm is fit into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
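
    A minimal sketch of the core edge-removal step, assuming a networkx graph; the paper's ROC-based tuning of the removal threshold is replaced here by a fixed, hypothetical fraction:

```python
import networkx as nx

def remove_shortcut_edges(G, frac=0.01):
    """Betweenness-based edge removal: shortcut edges that connect
    disparate manifold regions tend to carry many shortest paths, so
    they score high on edge betweenness. The cutoff (a fixed fraction
    here) is a placeholder for the paper's more careful selection."""
    eb = nx.edge_betweenness_centrality(G)
    ranked = sorted(eb, key=eb.get, reverse=True)
    G = G.copy()
    G.remove_edges_from(ranked[: max(1, int(frac * len(ranked)))])
    return G
```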

  19. Aerodynamics of an airfoil with a jet issuing from its surface

    NASA Technical Reports Server (NTRS)

    Tavella, D. A.; Karamcheti, K.

    1982-01-01

    A simple, two dimensional, incompressible and inviscid model for the problem posed by a two dimensional wing with a jet issuing from its lower surface is considered and a parametric analysis is carried out to observe how the aerodynamic characteristics depend on the different parameters. The mathematical problem constitutes a boundary value problem where the position of part of the boundary is not known a priori. A nonlinear optimization approach was used to solve the problem, and the analysis reveals interesting characteristics that may help to better understand the physics involved in more complex situations in connection with high lift systems.

  20. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, such emulators have mostly been applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
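
    A minimal sketch of the overall pattern, with plain PCA standing in for the paper's simultaneous input/output reduction method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_reduced_emulator(X, Y, n_in=10, n_out=5):
    """Compress high-dimensional inputs X (n_runs x d_in) and output
    fields Y (n_runs x d_out), then emulate the map in the reduced
    coordinates with one GP per retained output component."""
    pin, pout = PCA(n_in).fit(X), PCA(n_out).fit(Y)
    Z, W = pin.transform(X), pout.transform(Y)
    gps = [GaussianProcessRegressor(normalize_y=True).fit(Z, W[:, j])
           for j in range(n_out)]

    def predict(Xnew):
        Znew = pin.transform(Xnew)
        Wnew = np.column_stack([gp.predict(Znew) for gp in gps])
        return pout.inverse_transform(Wnew)   # back to the spatial field
    return predict
```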

  1. A new Lagrangian method for three-dimensional steady supersonic flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching-Yuen; Liou, Meng-Sing

    1993-01-01

    In this report, the new Lagrangian method introduced by Loh and Hui is extended to three-dimensional, steady supersonic flow computation. The derivation of the conservation form and the solution of the local Riemann solver using the Godunov and the high-resolution TVD (total variation diminishing) schemes is presented. This new approach is accurate and robust, capable of handling complicated geometry and interactions between discontinuous waves. Test problems show that the extended Lagrangian method retains all the advantages of the two-dimensional method (e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation). In this report, we also suggest a novel three-dimensional Riemann problem in which interesting and intricate flow features are present.

  2. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
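
    A loose sketch of a single surrogate-assisted step in the spirit of COBRA, omitting the distance requirements, constraint margins, and two-phase logic of the actual algorithm; the function names are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propose_next(X, f_vals, g_vals, lb, ub, n_cand=5000, seed=0):
    """Fit RBF surrogates to the expensive objective f (f_vals, shape
    (n,)) and constraints g (g_vals, shape (n, m), feasible iff g <= 0)
    at previously evaluated points X, then pick the random candidate
    with the smallest surrogate objective among those the surrogate
    deems feasible."""
    rng = np.random.default_rng(seed)
    sf = RBFInterpolator(X, f_vals)          # objective surrogate
    sg = RBFInterpolator(X, g_vals)          # constraint surrogates
    C = rng.uniform(lb, ub, size=(n_cand, len(lb)))
    feas = np.all(sg(C) <= 0.0, axis=1)
    pool = C[feas] if feas.any() else C      # fall back if none feasible
    return pool[np.argmin(sf(pool))]
```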

  3. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g., neutronics with thermal-hydraulics feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models, achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared with the performance of the KL approach and a brute-force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high-dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this dissertation, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly level (CASL Progression Problem 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).

  4. Simulation and Analysis of Converging Shock Wave Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Shashkov, Mikhail J.

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.

  5. A Comparative Study on Multifactor Dimensionality Reduction Methods for Detecting Gene-Gene Interactions with the Survival Phenotype

    PubMed Central

    Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung

    2015-01-01

    Genome-wide association studies (GWAS) have extensively analyzed single SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with diseases. However, there is still a large portion of the genetic variants left unexplained. This missing heritability problem might be due to the analytical strategy that limits analyses to only single SNPs. One possible approach to the missing heritability problem is to identify multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction (MDR) method has been widely used to detect gene-gene interactions based on constructive induction, classifying high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for case-control studies. Many modifications of MDR have been proposed, and it has also been extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with earlier MDR through comprehensive simulation studies. PMID:26339630

  6. High-frequency modes in a two-dimensional rectangular room with windows

    NASA Astrophysics Data System (ADS)

    Shabalina, E. D.; Shirgina, N. V.; Shanin, A. V.

    2010-07-01

    We examine a two-dimensional model problem of architectural acoustics on sound propagation in a rectangular room with windows. It is supposed that the walls are ideally flat and hard; the windows absorb all energy that falls upon them. We search for the modes of such a room having minimal attenuation indices, which exhibit the pronounced structure of billiard trajectories. The main attenuation mechanism for such modes is diffraction at the edges of the windows. We construct estimates for the attenuation indices of the given modes based on the solution to the Weinstein problem. We formulate diffraction problems similar to the statement of the Weinstein problem that describe the attenuation of billiard modes in complex situations.

  7. Phase-space finite elements in a least-squares solution of the transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drumm, C.; Fan, W.; Pautz, S.

    2013-07-01

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

  8. The Goertler vortex instability mechanism in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Hall, P.

    1984-01-01

    The two dimensional boundary layer on a concave wall is centrifugally unstable with respect to vortices aligned with the basic flow for sufficiently high values of the Goertler number. However, in most situations of practical interest the basic flow is three dimensional and previous theoretical investigations do not apply. The linear stability of the flow over an infinitely long swept wall of variable curvature is considered. If there is no pressure gradient in the boundary layer the instability problem can always be related to an equivalent two dimensional calculation. However, in general, this is not the case and even for small values of the crossflow velocity field dramatic differences between the two and three dimensional problems emerge. When the size of the crossflow is further increased, the vortices in the neutral location have their axes locally perpendicular to the vortex lines of the basic flow.

  9. The relationship between two-dimensional self-esteem and problem solving style in an anorexic inpatient sample.

    PubMed

    Paterson, Gillian; Power, Kevin; Yellowlees, Alex; Park, Katy; Taylor, Louise

    2007-01-01

    Research examining cognitive and behavioural determinants of anorexia is currently lacking. This has implications for the success of treatment programmes for anorexics, particularly given the high reported dropout rates. This study examines two-dimensional self-esteem (comprising self-competence and self-liking) and social problem-solving in an anorexic population and predicts that self-esteem will mediate the relationship between problem-solving and eating pathology by facilitating/inhibiting use of faulty/effective strategies. Twenty-seven anorexic inpatients and 62 controls completed measures of social problem solving and two-dimensional self-esteem. Anorexics scored significantly higher than the non-clinical group on measures of eating pathology, negative problem orientation, impulsivity/carelessness and avoidance, and significantly lower on positive problem orientation and both self-esteem components. In the clinical sample, disordered eating correlated significantly with self-competence, negative problem orientation and avoidance. Associations between disordered eating and problem solving lost significance when self-esteem was controlled in the clinical group only. Self-competence was found to be the main predictor of eating pathology in the clinical sample, while self-liking, impulsivity and negative and positive problem orientation were the main predictors in the non-clinical sample. Findings support the two-dimensional self-esteem theory, with self-competence only being relevant to the anorexic population, and support the hypothesis that self-esteem mediates the relationship between disordered eating and problem solving ability in an anorexic sample. Treatment implications include support for programmes emphasising increasing self-appraisal and self-efficacy.

  10. Eight-dimensional methodology for innovative thinking about the case and ethics of the Mount Graham, Large Binocular Telescope project.

    PubMed

    Berne, Rosalyn W; Raviv, Daniel

    2004-04-01

    This paper introduces the Eight Dimensional Methodology for Innovative Thinking (the Eight Dimensional Methodology) for innovative problem solving, as a unified approach to case analysis that builds on comprehensive problem-solving knowledge from industry, business, marketing, math, science, engineering, technology, arts, and daily life. It is designed to stimulate innovation by quickly generating unique, unexpected, and high-quality "out of the box" solutions. It gives new insights and thinking strategies to solve everyday problems faced in the workplace, by helping decision makers to see otherwise obscure alternatives and solutions. Daniel Raviv, the engineer who developed the Eight Dimensional Methodology, and his co-author, technology ethicist Rosalyn Berne, suggest that this tool can be especially useful in identifying solutions and alternatives for particular problems of engineering, and for the ethical challenges which arise with them. First, the Eight Dimensional Methodology helps to elucidate how what may appear to be a basic engineering problem also has ethical dimensions. In addition, it offers to the engineer a methodology for penetrating and seeing new dimensions of those problems. To demonstrate the effectiveness of the Eight Dimensional Methodology as an analytical tool for thinking about ethical challenges to engineering, the paper presents the case of the construction of the Large Binocular Telescope (LBT) on Mount Graham in Arizona. Analysis of the case offers to decision makers the use of the Eight Dimensional Methodology in considering alternative solutions for how they can proceed in their goals of exploring space. It then follows that same process through the second stage of exploring the ethics of each of those different solutions. The LBT project pools resources from an international partnership of universities and research institutes for the construction and maintenance of a highly sophisticated, powerful new telescope. It will soon mark the erection of the world's largest and most powerful optical telescope, designed to see fine detail otherwise visible only from space. It also represents a controversial engineering project that is being undertaken on land considered to be sacred by the local, native Apache people. As presented, the case features the University of Virginia, and its challenges in consideration of whether and how to join the LBT project consortium.

  11. Elitist Binary Wolf Search Algorithm for Heuristic Feature Selection in High-Dimensional Bioinformatics Datasets.

    PubMed

    Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L

    2017-06-28

    Due to the high-dimensional characteristics of such datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy established by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable'. This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method to avoid repeat searches for the worst position in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy gathers these strengthened wolves with the classifier of extreme learning machine to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.

  12. Pairing phase diagram of three holes in the generalized Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Navarro, O.; Espinosa, J.E.

    Investigations of high-Tc superconductors suggest that the electronic correlation may play a significant role in the formation of pairs. Although the main interest is in the physics of two-dimensional highly correlated electron systems, one-dimensional models related to high-temperature superconductivity are very popular due to the conjecture that properties of the 1D and 2D variants of certain models have common aspects. Within the models for correlated electron systems that attempt to capture the essential physics of high-temperature superconductors and parent compounds, the Hubbard model is one of the simplest. Here, the pairing problem of a three-electron system has been studied by using a real-space method and the generalized Hubbard Hamiltonian. This method includes the correlated hopping interactions as an extension of the previously proposed mapping method, and is based on mapping the correlated many-body problem onto an equivalent site- and bond-impurity tight-binding one in a higher-dimensional space, where the problem was solved in a non-perturbative way. In a linear chain, the authors analyzed the pairing phase diagram of three correlated holes for different values of the Hamiltonian parameters. For some values of the hopping parameters they obtain an analytical solution for all kinds of interactions.

  13. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175

  14. Reconstructing high-dimensional two-photon entangled states via compressive sensing

    PubMed Central

    Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan

    2014-01-01

    Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
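
    The core singular-value-thresholding operation that such reconstructions iterate is compact; the sketch below shows only this proximal step, not the data-consistency updates or the physical constraints (Hermiticity, positivity, unit trace) a density-matrix reconstruction also enforces:

```python
import numpy as np

def svt_step(M, tau):
    """Soft-threshold the singular values of M: the proximal operator
    of the nuclear norm, which promotes low-rank solutions. Works for
    real or complex matrices."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh
```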

  15. A reduced-order model from high-dimensional frictional hysteresis

    PubMed Central

    Biswas, Saurabh; Chatterjee, Anindya

    2014-01-01

    Hysteresis in material behaviour includes both signum nonlinearities as well as high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522

  16. Existence and Stability of Compressible Current-Vortex Sheets in Three-Dimensional Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Wang, Ya-Guang

    2008-03-01

    Compressible vortex sheets are fundamental waves, along with shocks and rarefaction waves, in entropy solutions to multidimensional hyperbolic systems of conservation laws. Understanding the behavior of compressible vortex sheets is an important step towards our full understanding of fluid motions and the behavior of entropy solutions. For the Euler equations in two-dimensional gas dynamics, the classical linearized stability analysis on compressible vortex sheets predicts stability when the Mach number M > √2 and instability when M < √2; and Artola and Majda’s analysis reveals that nonlinear instability may occur if planar vortex sheets are perturbed by highly oscillatory waves even when M > √2. For the Euler equations in three dimensions, every compressible vortex sheet is violently unstable and this instability is the analogue of the Kelvin-Helmholtz instability for incompressible fluids. The purpose of this paper is to understand whether compressible vortex sheets in three dimensions, which are unstable in the regime of pure gas dynamics, become stable under the magnetic effect in three-dimensional magnetohydrodynamics (MHD). One of the main features is that the stability problem is equivalent to a free-boundary problem whose free boundary is a characteristic surface, which is more delicate than noncharacteristic free-boundary problems. Another feature is that the linearized problem for current-vortex sheets in MHD does not meet the uniform Kreiss-Lopatinskii condition. These features cause additional analytical difficulties and especially prevent a direct use of the standard Picard iteration for the nonlinear problem. In this paper, we develop a nonlinear approach to deal with these difficulties in three-dimensional MHD. We first carefully formulate the linearized problem for the current-vortex sheets to show rigorously that the magnetic effect makes the problem weakly stable and establish energy estimates, especially high-order energy estimates, in terms of the nonhomogeneous terms and variable coefficients. Then we exploit these results to develop a suitable iteration scheme of the Nash-Moser-Hörmander type to deal with the loss of the order of derivative at the nonlinear level and establish its convergence, which leads to the existence and stability of compressible current-vortex sheets, locally in time, in three-dimensional MHD.

  17. Solving time-dependent two-dimensional eddy current problems

    NASA Technical Reports Server (NTRS)

    Lee, Min Eig; Hariharan, S. I.; Ida, Nathan

    1988-01-01

    Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field which is incident on an infinitely long conductor is considered. The conductor is assumed to be a good, but not perfect, conductor. The resulting problem is an interface initial boundary value problem with the boundary of the conductor being the interface. A finite difference method is used to march the solution explicitly in time; the scheme is described, and the treatment of appropriate radiation conditions is given special consideration. Results are validated with approximate analytic solutions. Two stringent test cases of high and low frequency incident waves are considered to validate the results.

  18. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.

  19. Duke Workshop on High-Dimensional Data Sensing and Analysis

    DTIC Science & Technology

    2015-05-06

    Bayesian sparse factor analysis formulation of Chen et al. (2011) this work develops multi-label PCA (MLPCA), a generative dimension reduction...version of this problem was recently treated by Banerjee et al. [1], Ravikumar et al. [2], Kolar and Xing [3], and Höfling and Tibshirani [4]. As...Not applicable. Final Report Duke Workshop on High-Dimensional Data Sensing and Analysis Workshop Dates: July 26-28, 2011

  20. An Efficient Variable Screening Method for Effective Surrogate Models for Reliability-Based Design Optimization

    DTIC Science & Technology

    2014-04-01

    surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital

  1. Bayesian propensity scores for high-dimensional causal inference: A comparison of drug-eluting to bare-metal coronary stents.

    PubMed

    Spertus, Jacob V; Normand, Sharon-Lise T

    2018-04-23

    High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem.
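
    The following is a generic stratified beta-binomial estimator in the same spirit as the conjugacy idea mentioned above; it is not the authors' estimator, and the propensity-score stratification scheme is an illustrative assumption:

```python
import numpy as np

def stratified_beta_binomial_ate(ps, t, y, n_strata=5, draws=2000, seed=0):
    """Within propensity-score strata, binary outcomes under treatment
    (t == 1) and control (t == 0) get conjugate Beta(1 + successes,
    1 + failures) posteriors; posterior draws of the risk difference
    are averaged over strata. ps, t, y are 1-D numpy arrays."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1,
                     0, n_strata - 1)
    ate_draws = np.zeros(draws)
    for s in range(n_strata):
        w = np.mean(strata == s)                   # stratum weight
        for arm, sign in ((1, +1.0), (0, -1.0)):
            ys = y[(strata == s) & (t == arm)]
            post = rng.beta(1 + ys.sum(), 1 + len(ys) - ys.sum(), size=draws)
            ate_draws += sign * w * post
    return ate_draws.mean(), np.quantile(ate_draws, [0.025, 0.975])
```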

  2. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    NASA Astrophysics Data System (ADS)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the considered fractional-order problem into an easily solvable system of algebraic equations whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing our Matlab-based simulation results with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  3. Aerodynamics of Engine-Airframe Interaction

    NASA Technical Reports Server (NTRS)

    Caughey, D. A.

    1986-01-01

    The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets, and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focuses upon the development of solution-adaptive grid procedures for these problems, and the development of multi-grid algorithms in conjunction with both implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated upon two-dimensional problems, and has been in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error; and (2) the development of a multigrid scheme for solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.

  4. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  5. Computer aided photographic engineering

    NASA Technical Reports Server (NTRS)

    Hixson, Jeffrey A.; Rieckhoff, Tom

    1988-01-01

    High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.

  6. The initial value problem in Lagrangian drift kinetic theory

    NASA Astrophysics Data System (ADS)

    Burby, J. W.

    2016-06-01

    Existing high-order variational drift kinetic theories contain unphysical rapidly varying modes that are not seen at low orders. These unphysical modes, which may be rapidly oscillating, damped or growing, are ushered in by a failure of conventional high-order drift kinetic theory to preserve the structure of its parent model's initial value problem. In short, the (infinite dimensional) system phase space is unphysically enlarged in conventional high-order variational drift kinetic theory. I present an alternative, 'renormalized' variational approach to drift kinetic theory that manifestly respects the parent model's initial value problem. The basic philosophy underlying this alternate approach is that high-order drift kinetic theory ought to be derived by truncating the all-orders system phase-space Lagrangian instead of the usual 'field particle' Lagrangian. For the sake of clarity, this story is told first through the lens of a finite-dimensional toy model of high-order variational drift kinetics; the analogous full-on drift kinetic story is discussed subsequently. The renormalized drift kinetic system, while variational and just as formally accurate as conventional formulations, does not support the troublesome rapidly varying modes.

  7. High dimensional feature reduction via projection pursuit

    NASA Technical Reports Server (NTRS)

    Jimenez, Luis; Landgrebe, David

    1994-01-01

    The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of that technology is the AVIRIS system, which collects image data in 220 bands. As a result of this, new algorithms must be developed in order to analyze the more complex data effectively. Data in a high dimensional space presents a substantial challenge, since intuitive concepts valid in a 2-3 dimensional space do not necessarily apply in higher dimensional spaces. For example, high dimensional space is mostly empty. This results from the concentration of data in the corners of hypercubes. Other examples may be cited. Such observations suggest the need to project data to a subspace of a much lower dimension on a problem specific basis in such a manner that information is not lost. Projection Pursuit is a technique that will accomplish such a goal. Since it processes data in lower dimensions, it should avoid many of the difficulties of high dimensional spaces. In this paper, we begin the investigation of some of the properties of Projection Pursuit for this purpose.
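
    The "mostly empty" remark can be checked numerically in a few lines. The sketch below (a minimal illustration, not from the paper) estimates the fraction of uniform samples in the unit hypercube that fall inside the inscribed hypersphere; everything else sits toward the corners, and that fraction collapses as the dimension grows.

        import numpy as np

        rng = np.random.default_rng(1)
        for d in (2, 5, 10, 20):
            pts = rng.uniform(-0.5, 0.5, size=(100_000, d))
            inside = np.mean(np.linalg.norm(pts, axis=1) <= 0.5)
            print(d, inside)   # ~0.79 in 2D, ~0.0025 in 10D, ~0.0 in 20D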

  8. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    NASA Astrophysics Data System (ADS)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering transports of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space at the corner are found to have an effect on the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  9. What is the latent structure of alcohol use disorders? A taxometric analysis of the Personality Assessment Inventory Alcohol Problems Scale in male and female prison inmates.

    PubMed

    Walters, Glenn D; Diamond, Pamela M; Magaletta, Philip R

    2010-03-01

    Three indicators derived from the Personality Assessment Inventory (PAI) Alcohol Problems scale (ALC)-tolerance/high consumption, loss of control, and negative social and psychological consequences-were subjected to taxometric analysis-mean above minus below a cut (MAMBAC), maximum covariance (MAXCOV), and latent mode factor analysis (L-Mode)-in 1,374 federal prison inmates (905 males, 469 females). Whereas the total sample yielded ambiguous results, the male subsample produced dimensional results, and the female subsample produced taxonic results. Interpreting these findings in light of previous taxometric research on alcohol abuse and dependence, it is speculated that while alcohol use disorders may be taxonic in female offenders, they are probably both taxonic and dimensional in male offenders. Two models of alcohol use disorder in males are considered, one in which the diagnostic features are categorical and the severity of symptomatology is dimensional, and one in which some diagnostic features (e.g., withdrawal) are taxonic and other features (e.g., social problems) are dimensional.

  10. High dynamic range algorithm based on HSI color space

    NASA Astrophysics Data System (ADS)

    Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming

    2014-10-01

    This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to the human visual response; to this end, the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm; an integral image is used to compute the average intensity of every pixel at a certain scale, which serves as the local intensity component of the image, and the detail intensity component is then computed from it. The third problem is to adjust the overall image intensity; an S-shaped curve derived from the original image information is used to adjust the local intensity component. The fourth problem is to enhance detail; the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local and detail intensity components gives the final intensity. Converting the synthesized intensity together with the other two dimensions to the output color space yields the final processed image.
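
    The integral-image step in the second problem is the one concrete algorithm here, so a minimal sketch may help (an assumed implementation; the window radius r and the edge-padding policy are illustrative, not taken from the paper). The summed-area table makes the local mean an O(1), four-lookup operation per pixel regardless of window size.

        import numpy as np

        def local_mean(I, r):
            """Mean of I over a (2r+1)x(2r+1) window via an integral image."""
            Ip = np.pad(I, r + 1, mode='edge').astype(np.float64)
            S = Ip.cumsum(axis=0).cumsum(axis=1)       # summed-area table
            h, w = I.shape
            n = (2 * r + 1) ** 2
            # Four-corner difference of the table gives each window's sum.
            return (S[2*r+1:2*r+1+h, 2*r+1:2*r+1+w] - S[:h, 2*r+1:2*r+1+w]
                    - S[2*r+1:2*r+1+h, :w] + S[:h, :w]) / n

    The detail intensity component described above would then simply be the intensity channel minus this local mean.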

  11. DD-HDS: A method for visualization and exploration of high-dimensional data.

    PubMed

    Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard

    2007-09-01

    Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
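
    For readers unfamiliar with the MDS family this record extends, the sketch below implements the plain distance-preservation backbone (gradient descent on the raw stress between pairwise distances). It is a generic baseline under assumed settings; DD-HDS's specific contributions, the concentration-aware weighting and the symmetric handling of short distances, are deliberately omitted.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def mds_stress(X, dim=2, steps=500, lr=0.005, seed=0):
            n = len(X)
            D = squareform(pdist(X))                   # distances to preserve
            Y = np.random.default_rng(seed).normal(scale=1e-2, size=(n, dim))
            for _ in range(steps):
                d = squareform(pdist(Y)) + np.eye(n)   # eye avoids divide-by-zero
                W = (d - D) / d                        # per-pair stress gradient factor
                np.fill_diagonal(W, 0.0)
                Y -= lr * 2 * (W.sum(axis=1)[:, None] * Y - W @ Y)
            return Y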

  12. Multi-Dimensional, Non-Pyrolyzing Ablation Test Problems

    NASA Technical Reports Server (NTRS)

    Risch, Tim; Kostyk, Chris

    2016-01-01

    Non-pyrolyzing carbonaceous materials represent a class of candidate material for hypersonic vehicle components providing both structural and thermal protection system capabilities. Two problems relevant to this technology are presented. The first considers the one-dimensional ablation of a carbon material subject to convective heating. The second considers two-dimensional conduction in a rectangular block subject to radiative heating. Surface thermochemistry for both problems includes finite-rate surface kinetics at low temperatures, diffusion limited ablation at intermediate temperatures, and vaporization at high temperatures. The first problem requires the solution of both the steady-state thermal profile with respect to the ablating surface and the transient thermal history for a one-dimensional ablating planar slab with temperature-dependent material properties. The slab front face is convectively heated and also reradiates to a room temperature environment. The back face is adiabatic. The steady-state temperature profile and steady-state mass loss rate should be predicted. Time-dependent front and back face temperature, surface recession and recession rate along with the final temperature profile should be predicted for the time-dependent solution. The second problem requires the solution for the transient temperature history for an ablating, two-dimensional rectangular solid with anisotropic, temperature-dependent thermal properties. The front face is radiatively heated, convectively cooled, and also reradiates to a room temperature environment. The back face and sidewalls are adiabatic. The solution should include the following 9 items: final surface recession profile, time-dependent temperature history of both the front face and back face at both the centerline and sidewall, as well as the time-dependent surface recession and recession rate on the front face at both the centerline and sidewall. The results of the problems from all submitters will be collected, summarized, and presented at a later conference.

  13. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in the linear quadratic regulator problem is proposed. The method, which combines use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.

  14. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
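
    The Newton-Kleinman iteration at the heart of both records above has a compact form: each Newton step for the Riccati equation is a Lyapunov solve, and only the gain K = R^(-1) B^T P needs to be tracked. A minimal dense-matrix sketch follows (assuming a small test system and a stabilizing initial gain K0, e.g. from pole placement or a Chandrasekhar sweep).

        import numpy as np
        from scipy.linalg import solve, solve_continuous_lyapunov

        def newton_kleinman(A, B, Q, R, K0, iters=20):
            K = K0                                    # must stabilize A - B @ K0
            for _ in range(iters):
                Ak = A - B @ K
                # Newton step: solve Ak' P + P Ak = -(Q + K' R K) for P.
                P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
                K = solve(R, B.T @ P)                 # updated feedback gain
            return K, P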

  15. Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*

    PubMed Central

    Katsevich, E.; Katsevich, A.; Singer, A.

    2015-01-01

    In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132

  16. Model-based Clustering of High-Dimensional Data in Astrophysics

    NASA Astrophysics Data System (ADS)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parameterization. The recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  17. Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Adamian, A.

    1988-01-01

    An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.

  18. Hyper-spectral image segmentation using spectral clustering with covariance descriptors

    NASA Astrophysics Data System (ADS)

    Kursun, Olcay; Karabiber, Fethullah; Koc, Cemalettin; Bal, Abdullah

    2009-02-01

    Image segmentation is an important and difficult computer vision problem. Hyper-spectral images pose even more difficulty due to their high dimensionality. Spectral clustering (SC) is a recently popular clustering/segmentation algorithm. In general, SC lifts the data to a high dimensional space (the kernel trick), derives eigenvectors in this new space, and finally partitions the data into clusters using these new dimensions. We demonstrate that SC works efficiently when combined with covariance descriptors, which assess pixelwise similarities directly rather than in the high-dimensional Euclidean space. We present the formulations and some preliminary results of the proposed hybrid image segmentation method for hyper-spectral images.
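
    To make the SC pipeline described above concrete, here is a bare-bones version under assumed choices (Gaussian affinity on generic feature vectors and the normalized-Laplacian embedding); in the paper's setting, the covariance descriptors would supply the pairwise similarities instead of the Euclidean distances used here.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from sklearn.cluster import KMeans

        def spectral_clustering(X, k, sigma=1.0):
            A = np.exp(-squareform(pdist(X))**2 / (2 * sigma**2))  # affinity
            np.fill_diagonal(A, 0.0)
            d = A.sum(axis=1)
            L = np.eye(len(X)) - A / np.sqrt(np.outer(d, d))       # normalized Laplacian
            _, vecs = np.linalg.eigh(L)
            U = vecs[:, :k]                                        # k smallest eigenvectors
            U /= np.linalg.norm(U, axis=1, keepdims=True)          # row-normalize
            return KMeans(n_clusters=k, n_init=10).fit_predict(U)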

  19. An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures

    NASA Technical Reports Server (NTRS)

    Cohl, H.

    1994-01-01

    We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.

  20. Quantum key distribution session with 16-dimensional photonic states.

    PubMed

    Etcheverry, S; Cañas, G; Gómez, E S; Nogueira, W A T; Saavedra, C; Xavier, G B; Lima, G

    2013-01-01

    The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties, that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations has been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD.

  1. Quantum key distribution session with 16-dimensional photonic states

    NASA Astrophysics Data System (ADS)

    Etcheverry, S.; Cañas, G.; Gómez, E. S.; Nogueira, W. A. T.; Saavedra, C.; Xavier, G. B.; Lima, G.

    2013-07-01

    The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties, that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations has been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD.

  2. Quantum key distribution session with 16-dimensional photonic states

    PubMed Central

    Etcheverry, S.; Cañas, G.; Gómez, E. S.; Nogueira, W. A. T.; Saavedra, C.; Xavier, G. B.; Lima, G.

    2013-01-01

    The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties, that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations has been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD. PMID:23897033

  3. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the process of dendritic growth simulation, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high performance calculation method to improve computational efficiency and to expand problem scales is of great significance for research on the microstructure of materials. A high performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase field, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing optimization has the better performance, 1.7 times that of the basic multi-GPU model, when 21 GPUs are used.

  4. Chaos and Robustness in a Single Family of Genetic Oscillatory Networks

    PubMed Central

    Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.

    2014-01-01

    Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs’ correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly-regular periods likely have solely negative feedback. PMID:24667178

  5. Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised

    USDA-ARS?s Scientific Manuscript database

    In this article, we propose several new approaches for post-processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by post-processing the rules with ...

  6. Consensus embedding: theory, algorithms and application to segmentation and classification of biomedical data

    PubMed Central

    2012-01-01

    Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103

  7. One-dimensional Gromov minimal filling problem

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandr O.; Tuzhilin, Alexey A.

    2012-05-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  8. An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Han, Jianqiang; Tang, Huazhong

    2007-01-01

    This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh-redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell-averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.

  9. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    PubMed

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap of features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the classifier will confuse the misclassified data points. Outlier data points can also be located on the Similarity-Dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.

  10. Sex differences in the behavior of children with the 22q11 deletion syndrome

    PubMed Central

    Sobin, Christina; Kiley-Brabeck, Karen; Monk, Samantha Hadley; Khuri, Jananne; Karayiorgou, Maria

    2009-01-01

    High rates of psychiatric impairment in adults with 22q11DS suggest that behavioral trajectories of children with 22q11DS may provide critical etiologic insights. Past findings that report DSM diagnoses are extremely variable; moreover, sex differences in behavior have not yet been examined. Dimensional CBCL ratings from 82 children, including 51 with 22q11DS and 31 control siblings, were analyzed. Strikingly consistent with rates of psychiatric impairment among affected adults, 25% of children with 22q11DS had high CBCL scores for Total Impairment, and 20% had high CBCL Internalizing Scale scores. Males accounted for 90% of high Internalizing scores and 67% of high Total Impairment scores. Attention and Social Problems were ubiquitous; more affected males than females (23% vs. 4%) scored high on Thought Problems. With regard to CBCL/DSM overlap, 20% of affected males, as compared with no affected females, had one or more high CBCL ratings in the absence of a DSM diagnosis. Behaviors of children with 22q11DS are characterized by marked sex differences when rated dimensionally, with significantly more males experiencing Internalizing and Thought Problems. Categorical diagnoses do not reflect behavioral differences between male and female children with 22q11DS, and may miss significant behavior problems in 20% of affected males. PMID:19217670

  11. Robust Multigrid Smoothers for Three Dimensional Elliptic Equations with Strong Anisotropies

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Melson, N. Duane

    1998-01-01

    We discuss the behavior of several plane relaxation methods as multigrid smoothers for the solution of a discrete anisotropic elliptic model problem on cell-centered grids. The methods compared are plane Jacobi with damping, plane Jacobi with partial damping, plane Gauss-Seidel, plane zebra Gauss-Seidel, and line Gauss-Seidel. Based on numerical experiments and local mode analysis, we compare the smoothing factor of the different methods in the presence of strong anisotropies. A four-color Gauss-Seidel method is found to have the best numerical and architectural properties of the methods considered in the present work. Although alternating direction plane relaxation schemes are simpler and more robust than other approaches, they are not currently used in industrial and production codes because they require the solution of a two-dimensional problem for each plane in each direction. We verify the theoretical predictions of Thole and Trottenberg that an exact solution of each plane is not necessary and that a single two-dimensional multigrid cycle gives the same result as an exact solution, in much less execution time. Parallelization of the two-dimensional multigrid cycles, the kernel of the three-dimensional implicit solver, is also discussed. Alternating-plane smoothers are found to be highly efficient multigrid smoothers for anisotropic elliptic problems.

  12. On l(1): Optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l(1) optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l(1) decentralized performance problems is presented. A global optimization approach to the solution of the infinite dimensional approximating problems is also discussed.

  13. Solution methods for one-dimensional viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, John M.; Simitses, George J.

    1987-01-01

    A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis, are discussed.

  14. Nonclassical models of the theory of plates and shells

    NASA Astrophysics Data System (ADS)

    Annin, Boris D.; Volchkov, Yuri M.

    2017-11-01

    Publications dealing with the study of methods of reducing a three-dimensional problem of the elasticity theory to a two-dimensional problem of the theory of plates and shells are reviewed. Two approaches are considered: the use of kinematic and force hypotheses and expansion of solutions of the three-dimensional elasticity theory in terms of the complete system of functions. Papers where a three-dimensional problem is reduced to a two-dimensional problem with the use of several approximations of each of the unknown functions (stresses and displacements) by segments of the Legendre polynomials are also reviewed.

  15. An Exact, Compressible One-Dimensional Riemann Solver for General, Convex Equations of State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James Russell

    2015-03-05

    This note describes an algorithm with which to compute numerical solutions to the one-dimensional, Cartesian Riemann problem for compressible flow with general, convex equations of state. While high-level descriptions of this approach are to be found in the literature, this note contains most of the necessary details required to write software for this problem. This explanation corresponds to the approach used in the source code that evaluates solutions for the 1D, Cartesian Riemann problem with a JWL equation of state in the ExactPack package [16, 29]. Numerical examples are given with the proposed computational approach for a polytropic equation of state and for the JWL equation of state.

  16. Description of a highly symmetric polytope observed in Thomson's problem of charges on a hypersphere

    NASA Astrophysics Data System (ADS)

    Roth, J.

    2007-10-01

    In a recent paper, Altschuler and Pérez-Garrido [Phys. Rev. E 76, 016705 (2007)] have presented a four-dimensional polytope with 80 vertices. We demonstrate how this polytope can be derived from the regular four-dimensional 600-cell with 120 vertices if two orthogonal positive disclinations are created. Some related polytopes are also described.

  17. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
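
    In schematic form, the hybrid estimate described above is a mixture over ensemble members: a kernel density factor in the low-dimensional subspace times a closed-form conditional Gaussian in the high-dimensional one. The sketch below is a loose schematic with assumed inputs (the conditional means mu_i and covariances Sigma_i are taken as given, which is where the paper's analytic formulae would enter); it merely evaluates such a density at one point.

        import numpy as np
        from scipy.stats import multivariate_normal

        def hybrid_pdf(x_low, x_high, ens_low, mus, sigmas, h):
            """p(x_low, x_high) ~ (1/N) sum_i K_h(x_low - ens_low[i]) N(x_high; mu_i, Sigma_i)."""
            kde = np.exp(-0.5 * np.sum((x_low - ens_low)**2, axis=1) / h**2)
            kde /= (2 * np.pi * h**2) ** (ens_low.shape[1] / 2)    # Gaussian kernel norm
            gauss = np.array([multivariate_normal.pdf(x_high, m, s)
                              for m, s in zip(mus, sigmas)])
            return np.mean(kde * gauss)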

  18. A critical assessment of flux and source term closures in shallow water models with porosity for urban flood simulations

    NASA Astrophysics Data System (ADS)

    Guinot, Vincent

    2017-11-01

    The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. 9 different geometries are considered. 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model, (v) none of the source term models presented so far in the literature accounts for all flow configurations, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.

  19. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solutions of optimal control problems at the same cost of solving the corresponding analysis problems just a few times.

  20. Computation of three-dimensional shock wave and boundary-layer interactions

    NASA Technical Reports Server (NTRS)

    Hung, C. M.

    1985-01-01

    Computations of the impingement of an oblique shock wave on a cylinder and a supersonic flow past a blunt fin mounted on a plate are used to study three dimensional shock wave and boundary layer interaction. In the impingement case, the problem of imposing a planar impinging shock as an outer boundary condition is discussed and the details of particle traces in windward and leeward symmetry planes and near the body surface are presented. In the blunt fin case, differences between two dimensional and three dimensional separation are discussed, and the existence of a unique high speed, low pressure region under the separated spiral vortex core is demonstrated. The accessibility of three dimensional separation is discussed.

  1. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2017-09-01

    The paper deals with a high-order discontinuous Galerkin (DG) method in which the approximation order exceeds 20 and reaches 100, and even 1000, in the one-dimensional case. To achieve such high-order solutions, the DG method is combined with a finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of a linear combination of basis functions. In this sort of analysis, reference elements are needed, so transformations of the reference element into the real one are required, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten the calculation time by a factor of several hundred.

  2. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2018-07-01

    The paper deals with a high-order discontinuous Galerkin (DG) method in which the approximation order exceeds 20 and reaches 100, and even 1000, in the one-dimensional case. To achieve such high-order solutions, the DG method is combined with a finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of a linear combination of basis functions. In this sort of analysis, reference elements are needed, so transformations of the reference element into the real one are required, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten the calculation time by a factor of several hundred.

  3. Improving Problem-Solving Skills with the Help of Plane-Space Analogies

    ERIC Educational Resources Information Center

    Budai, László

    2013-01-01

    We live our lives in three-dimensional space and encounter geometrical problems (equipment instructions, maps, etc.) every day. Yet there are not sufficient opportunities for high school students to learn geometry. New teaching methods can help remedy this. Specifically our experience indicates that there is great promise for use of geometry…

  4. Function approximation using combined unsupervised and supervised learning.

    PubMed

    Andras, Peter

    2014-03-01

    Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires an exponentially increasing volume of data as the dimensionality of the data increases. At the same time, the high-dimensional data is often arranged around a much lower dimensional manifold. Here we propose breaking the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
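
    A compressed sketch of the two-step idea follows, with PCA standing in for the over-complete SOM (an assumed simplification, chosen only to keep the example short): first map the high-dimensional inputs down to the manifold coordinates, then fit a single-hidden-layer network on the mapped data.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 1000)
        X = np.c_[np.cos(t), np.sin(t)] @ rng.normal(size=(2, 50))  # 1D manifold in 50D
        y = np.sin(3 * t)                                           # target function

        Z = PCA(n_components=2).fit_transform(X)   # step 1: unsupervised mapping
        net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000).fit(Z, y)  # step 2
        print(net.score(Z, y))                     # fit quality on the mapped data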

  5. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
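
    A bare-bones version of the two-stage idea can be written down directly (a sketch with assumed synthetic data, using plain Lasso at both stages in place of the paper's broader family of penalties): stage one regresses each covariate on the instruments with an L1 penalty, and stage two regresses the outcome on the stage-one fitted values.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p, q = 200, 100, 120            # samples, covariates, instruments
        Z = rng.normal(size=(n, q))        # instruments
        X = Z[:, :5] @ rng.normal(size=(5, p)) + rng.normal(size=(n, p))
        y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)   # sparse true effects

        # Stage 1: sparse regression of each covariate on the instruments.
        X_hat = np.column_stack(
            [Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p)])
        # Stage 2: sparse regression of the outcome on the fitted covariates.
        beta = Lasso(alpha=0.1).fit(X_hat, y).coef_
        print(beta[:3])                    # large entries should flag covariates 0 and 1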

  6. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    DOE PAGES

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...

    2017-10-10

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recast this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
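
    The two ingredients named above compose naturally. Here is a condensed sketch under assumed choices (Gaussian kernel, toy data, default GP kernel): build a diffusion-map embedding of the inputs, then run Gaussian process regression on the diffusion coordinates.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from sklearn.gaussian_process import GaussianProcessRegressor

        def diffusion_map(X, n_coords=2, eps=1.0, t=1):
            K = np.exp(-squareform(pdist(X))**2 / eps)   # kernel matrix
            P = K / K.sum(axis=1, keepdims=True)         # Markov transition matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            # Drop the trivial leading eigenvector; scale by eigenvalues^t.
            return (vecs.real[:, order] * vals.real[order]**t)[:, 1:n_coords + 1]

        X = np.random.default_rng(0).normal(size=(200, 30))   # proxy measurements
        y = np.sin(X[:, 0])                                   # property of interest
        coords = diffusion_map(X)
        gp = GaussianProcessRegressor().fit(coords, y)
        print(gp.score(coords, y))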

  7. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recast this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

  8. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. The additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties in the propagation of solitary waves in a one-dimensional granular system.
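
    For contrast with the gradient-free formulation described above, the sketch below implements the classical gradient-based active subspace construction: eigendecompose a Monte Carlo estimate of C = E[∇f ∇fᵀ] and inspect the leading eigenvectors. The test function and dimensions are illustrative assumptions of this sketch.

        # Classical (gradient-based) active subspace discovery.
        import numpy as np

        rng = np.random.default_rng(2)
        d, n = 100, 2000
        w = rng.standard_normal(d); w /= np.linalg.norm(w)   # hidden 1-D active direction

        def grad_f(x):
            # f(x) = sin(w.x) has gradient cos(w.x) * w
            return np.cos(x @ w) * w

        X = rng.standard_normal((n, d))
        G = np.array([grad_f(x) for x in X])
        C = G.T @ G / n                                 # Monte Carlo estimate of E[grad grad^T]
        vals, vecs = np.linalg.eigh(C)
        print("eigenvalue gap:", vals[-1] / vals[-2])   # large gap -> 1-D active subspace
        print("alignment with true direction:", abs(vecs[:, -1] @ w))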

  10. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new phase-coarsening kinetics is found in the region of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual runtimes from numerical tests.

  11. An Interview with Matthew P. Greving, PhD. Interview by Vicki Glaser.

    PubMed

    Greving, Matthew P

    2011-10-01

    Matthew P. Greving is Chief Scientific Officer at Nextval Inc., a company founded in early 2010 that has developed a discovery platform called MassInsight™. He received his PhD in Biochemistry from Arizona State University, and prior to that he spent nearly 7 years working as a software engineer. This experience in solving complex computational problems fueled his interest in developing technologies and algorithms related to the acquisition and analysis of high-dimensional biochemical data. To address the existing problems associated with label-based microarray readouts, he began work on a technique for label-free mass spectrometry (MS) microarray readout compatible with both matrix-assisted laser desorption/ionization (MALDI) and matrix-free nanostructure initiator mass spectrometry (NIMS). This is the core of Nextval's MassInsight technology, which utilizes picoliter noncontact deposition of high-density arrays on mass-readout substrates along with computational algorithms for high-dimensional data processing and reduction.

  12. Guide to the Revised Ground-Water Flow and Heat Transport Simulator: HYDROTHERM - Version 3

    USGS Publications Warehouse

    Kipp, Kenneth L.; Hsieh, Paul A.; Charlton, Scott R.

    2008-01-01

    The HYDROTHERM computer program simulates multi-phase ground-water flow and associated thermal energy transport in three dimensions. It can handle high fluid pressures, up to 1 × 10⁹ pascals (10⁴ atmospheres), and high temperatures, up to 1,200 degrees Celsius. This report documents the release of Version 3, which includes various additions, modifications, and corrections that have been made to the original simulator. Primary changes to the simulator include: (1) the ability to simulate unconfined ground-water flow, (2) a precipitation-recharge boundary condition, (3) a seepage-surface boundary condition at the land surface, (4) the removal of the limitation that a specified-pressure boundary also have a specified temperature, (5) a new iterative solver for the linear equations based on a generalized minimum-residual method, (6) the ability to use time- or depth-dependent functions for permeability, (7) the conversion of the program code to Fortran 90 to employ dynamic allocation of arrays, and (8) the incorporation of a graphical user interface (GUI) for input and output. The graphical user interface has been developed for defining a simulation, running the HYDROTHERM simulator interactively, and displaying the results. The combination of the graphical user interface and the HYDROTHERM simulator forms the HYDROTHERM INTERACTIVE (HTI) program. HTI can be used for two-dimensional simulations only. New features in Version 3 of the HYDROTHERM simulator have been verified using four test problems. Three problems come from the published literature and one problem was simulated by another partially saturated flow and thermal transport simulator. The test problems include: transient partially saturated vertical infiltration, transient one-dimensional horizontal infiltration, two-dimensional steady-state drainage with a seepage surface, and two-dimensional drainage with coupled heat transport. An example application to a hypothetical stratovolcano system with unconfined ground-water flow is presented in detail. It illustrates the use of HTI with the combination precipitation-recharge and seepage-surface boundary condition, and functions as a tutorial example problem for the new user.

  13. Fractional Steps methods for transient problems on commodity computer architectures

    NASA Astrophysics Data System (ADS)

    Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

    2008-12-01

    Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specifically addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in the memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
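
    A minimal sketch of one Locally One-Dimensional timestep for the 3-D heat equation is given below: the implicit operator is split into three sweeps of one-dimensional tridiagonal solves, one per axis. The grid size and coefficients are illustrative, and the paper's memory-bandwidth optimizations and shared-memory parallelization are not reproduced.

        # One LOD timestep = three implicit 1-D sweeps, one along each axis.
        import numpy as np
        from scipy.linalg import solve_banded

        n, dt, dx, kappa = 64, 1e-3, 1.0 / 64, 1.0
        r = kappa * dt / dx**2
        # Tridiagonal (I - r*D2) in banded form; Dirichlet boundaries folded in.
        ab = np.zeros((3, n))
        ab[0, 1:] = -r          # superdiagonal
        ab[1, :] = 1 + 2 * r    # diagonal
        ab[2, :-1] = -r         # subdiagonal

        def sweep(u, axis):
            """Implicitly solve along one axis for every 1-D pencil of the cube."""
            u = np.moveaxis(u, axis, -1)
            flat = u.reshape(-1, n)          # may copy; we rebuild from flat below
            for i in range(flat.shape[0]):
                flat[i] = solve_banded((1, 1), ab, flat[i])
            return np.moveaxis(flat.reshape(u.shape), -1, axis)

        u = np.random.default_rng(3).random((n, n, n))   # initial temperature field
        for axis in range(3):
            u = sweep(u, axis)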

  14. Mechanism of Flutter A Theoretical and Experimental Investigation of the Flutter Problem

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore; Garrick, I E

    1940-01-01

    The results of the basic flutter theory originally devised in 1934 and published as NACA Technical Report no. 496 are presented in a simpler and more complete form convenient for further studies. The paper attempts to facilitate the judgement of flutter problems by a systematic survey of the theoretical effects of the various parameters. A large number of experiments were conducted on cantilever wings, with and without ailerons, in the NACA high-speed wind tunnel for the purpose of verifying the theory and studying its adaptability to three-dimensional problems. The experiments included studies on wing taper ratios, nacelles, attached floats, and external bracings. The essential effects in the transition to the three-dimensional problem have been established. Of particular interest is the existence of specific flutter modes as distinguished from ordinary vibration modes. It is shown that there exists a remarkable agreement between theoretical and experimental results.

  15. Electrodeposited three-dimensional Ni-Si nanocable arrays as high performance anodes for lithium ion batteries.

    PubMed

    Liu, Hao; Hu, Liangbin; Meng, Ying Shirley; Li, Quan

    2013-11-07

    A configuration of three-dimensional Ni-Si nanocable array anodes is proposed to overcome the severe volume change problem of Si during the charging-discharging process. In the fabrication process, a simple and low-cost electrodeposition is employed to deposit Si instead of the common expensive vapor-phase deposition methods. The optimum composite nanocable array electrode achieves a high specific capacity ~1900 mA h g(-1) at 0.05 C. After 100 cycles at 0.5 C, 88% of the initial capacity (~1300 mA h g(-1)) remains, suggesting its good capacity retention ability. The high performance of the composite nanocable electrode is attributed to its excellent adhesion of the active material on the three-dimensional current collector and short ionic/electronic transport pathways during cycling.

  16. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    NASA Astrophysics Data System (ADS)

    Andrade, D.; Nachbin, A.

    2018-06-01

    Surface water waves are considered propagating over highly variable non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care, and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large-amplitude, rapidly varying topography. An alternative, conformal-mapping-based method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.

  17. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
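
    The sketch below illustrates the flavor of such an n-dimensional bisection preprocessing step: repeatedly halve the search box along its longest side, keeping the half whose sampled objective values look more promising, so that a GA can then be run on a much smaller region. The sampling rule, the fixed iteration count, and the toy objective are illustrative assumptions of this sketch, not the authors' scheme.

        # Shrink a 5-D search box by repeated bisection before handing it to a GA.
        import numpy as np

        def f(x):                                    # objective to minimize (assumed)
            return ((x - 0.37) ** 2).sum()

        rng = np.random.default_rng(4)
        lo, hi = np.zeros(5), np.ones(5)             # initial 5-D search box
        for _ in range(40):
            axis = np.argmax(hi - lo)                # bisect the longest side
            mid = 0.5 * (lo[axis] + hi[axis])

            def best(l, h, m=64):
                # best sampled objective value inside a box
                s = rng.uniform(l, h, size=(m, lo.size))
                return min(f(x) for x in s)

            hi_half = hi.copy(); hi_half[axis] = mid     # lower half-box
            lo_other = lo.copy(); lo_other[axis] = mid   # upper half-box
            if best(lo, hi_half) <= best(lo_other, hi):
                hi = hi_half
            else:
                lo = lo_other
        print("shrunk box around optimum:", np.round(lo, 3), np.round(hi, 3))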

  18. The boundary element method applied to 3D magneto-electro-elastic dynamic problems

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.

    2017-11-01

    Due to their coupling properties, magneto-electro-elastic materials possess a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time-domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.

  19. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool for informing decision makers about the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
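
    The sketch below illustrates the variogram idea underlying VARS in a deliberately simplified form: for each input factor, estimate a directional variogram γᵢ(h) = ½ E[(f(x + h eᵢ) − f(x))²] from paired model runs, with larger values indicating more influential factors. The toy model and the single perturbation scale are assumptions of this sketch; the full VARS framework integrates information across a range of scales.

        # Directional variogram estimate per input factor at one scale h.
        import numpy as np

        def model(x):                                # toy model (assumed)
            return np.sin(6 * x[..., 0]) + 0.1 * x[..., 1] + 0.0 * x[..., 2]

        rng = np.random.default_rng(5)
        d, n, h = 3, 5000, 0.1
        X = rng.uniform(0, 1, size=(n, d))
        for i in range(d):
            Xh = X.copy()
            Xh[:, i] = np.clip(Xh[:, i] + h, 0, 1)   # perturb factor i by h
            gamma = 0.5 * np.mean((model(Xh) - model(X)) ** 2)
            print(f"factor {i}: variogram at h={h}: {gamma:.4f}")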

  20. Can compactifications solve the cosmological constant problem?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics,Massachusetts Institute of Technology,77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali

    2016-06-30

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  1. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    NASA Astrophysics Data System (ADS)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10⁷. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.

  2. High- and low-level hierarchical classification algorithm based on source separation process

    NASA Astrophysics Data System (ADS)

    Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber

    2016-10-01

    High-dimensional data applications have attracted great attention in recent years. We focus on remote sensing data analysis in high-dimensional space, such as hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most hierarchical algorithms associate leaves with individual clusters and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources, which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost, since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results, since the low-level agglomerative clustering is guided by the most relevant independent sources. At each new step we then obtain a finer partition that participates in the clustering process, enhancing semantic capabilities and giving good identification rates.

  3. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Weather prediction using a genetic memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) is an associative memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. Genetic Memory is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.
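
    A minimal sketch of the underlying Kanerva SDM follows: fixed random hard addresses, writes that accumulate ±1 counters at all locations within a Hamming radius of the write address, and reads that sum and threshold those counters. The genetic reconfiguration of storage locations described above is omitted; the sizes and radius are illustrative assumptions.

        # Kanerva-style sparse distributed memory, autoassociative use.
        import numpy as np

        rng = np.random.default_rng(6)
        n_bits, n_loc, radius = 256, 2000, 112
        hard = rng.integers(0, 2, size=(n_loc, n_bits))     # fixed hard addresses
        counters = np.zeros((n_loc, n_bits), dtype=int)

        def activated(addr):
            # all hard locations within Hamming distance `radius` of addr
            return (hard != addr).sum(axis=1) <= radius

        def write(addr, data):
            counters[activated(addr)] += 2 * data - 1       # accumulate +/-1 counters

        def read(addr):
            return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

        pattern = rng.integers(0, 2, size=n_bits)
        write(pattern, pattern)                             # store autoassociatively
        noisy = pattern.copy()
        noisy[rng.choice(n_bits, 20, replace=False)] ^= 1   # corrupt 20 bits
        print("fraction of bits recovered:", (read(noisy) == pattern).mean())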

  5. Decimated Input Ensembles for Improved Generalization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)

    1999-01-01

    Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on the ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this causes a reduction in the training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high-dimensional data domains) for each individual classifier. Generally, this drop in the accuracy of the individual classifier performance more than offsets any potential gains due to combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
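
    A minimal sketch of the input-decimation idea follows: train each ensemble member on a different subset of the input features, rather than on a resampled training set, and average the members' predicted probabilities. Random feature subsets are used here for brevity; the method described above selects them deliberately to reduce inter-classifier correlation.

        # Feature-subset ensemble: each member sees a decimated input space.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        X, y = make_classification(n_samples=600, n_features=100,
                                   n_informative=20, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        members = []
        for _ in range(15):
            feats = rng.choice(X.shape[1], size=30, replace=False)  # decimated inputs
            clf = LogisticRegression(max_iter=1000).fit(Xtr[:, feats], ytr)
            members.append((feats, clf))

        # Average predicted probabilities across members, then threshold.
        proba = np.mean([clf.predict_proba(Xte[:, f])[:, 1] for f, clf in members], axis=0)
        print("ensemble accuracy:", ((proba > 0.5) == yte).mean())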

  6. Fuzzy support vector machine for microarray imbalanced data classification

    NASA Astrophysics Data System (ADS)

    Ladayya, Faroh; Purnami, Santi Wulan; Irhamah

    2017-11-01

    DNA microarrays are data containing gene expression with small sample sizes and a high number of features. Furthermore, class imbalance is a common problem in microarray data. This occurs when a dataset is dominated by a class which has significantly more instances than the other, minority classes. Therefore, a classification method is needed that solves the problems of high-dimensional and imbalanced data. The Support Vector Machine (SVM) is one of the classification methods that is capable of handling large or small samples, nonlinearity, high dimensionality, overlearning, and local-minimum issues. SVM has been widely applied to DNA microarray data classification, and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data remain a problem because SVM treats all samples as equally important, so the results are biased against the minority class. To overcome the imbalanced data, the Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points provide different contributions to the classifier. The minority classes have large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Given that DNA microarray data are high-dimensional with a very large number of features, it is necessary to do feature selection first using the Fast Correlation-Based Filter (FCBF). In this study, SVM, FSVM, and both methods with FCBF applied are analyzed and their classification performance compared. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
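
    The sketch below conveys the fuzzy-membership idea using per-sample weights: minority-class samples receive larger weights, standing in for fuzzy memberships, so the SVM no longer treats all samples as equally important. Using scikit-learn's sample_weight in place of a full FSVM reformulation, and the synthetic imbalanced data, are assumptions of this sketch; the FCBF feature-selection step is omitted.

        # Inverse-frequency "membership" weights passed to an SVM.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=500, n_features=50,
                                   weights=[0.9, 0.1], random_state=0)
        # Minority samples (y == 1) get the larger weight (y == 0).mean() ~ 0.9.
        membership = np.where(y == 1, (y == 0).mean(), (y == 1).mean())
        clf = SVC(kernel="linear").fit(X, y, sample_weight=membership)
        pred = clf.predict(X)
        print("minority-class recall with fuzzy-style weights:",
              (pred[y == 1] == 1).mean())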

  7. Synthesis and identification of three-dimensional faces from image(s) and three-dimensional generic models

    NASA Astrophysics Data System (ADS)

    Liu, Zexi; Cohen, Fernand

    2017-11-01

    We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available, using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, turning a difficult two-dimensional (2-D) face recognition problem into a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under a good illumination condition, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.

  8. Information Gain Based Dimensionality Selection for Classifying Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic-algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
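
    The core mechanism lends itself to a short sketch: compute a per-dimension information gain once, a priori, and let it modulate the per-gene mutation probability of a chromosome that encodes which dimensions are selected. The GA loop itself (selection, crossover, fitness) is omitted, and both the use of scikit-learn's mutual information estimate as a stand-in for information gain and the particular mutation-rate scaling are assumptions of this sketch.

        # Gain-dependent mutation operator for a feature-selection chromosome.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif

        X, y = make_classification(n_samples=300, n_features=40,
                                   n_informative=8, random_state=0)
        gain = mutual_info_classif(X, y, random_state=0)    # information-gain proxy
        p_mut = 0.02 + 0.2 * gain / gain.max()              # computed once, a priori

        def mutate(chromosome, rng):
            """Flip feature-selection bits with gain-dependent probabilities."""
            flips = rng.uniform(size=chromosome.size) < p_mut
            return chromosome ^ flips

        rng = np.random.default_rng(8)
        chrom = rng.integers(0, 2, size=X.shape[1]).astype(bool)
        print("bits flipped this generation:", (mutate(chrom, rng) != chrom).sum())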

  9. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  10. Three-dimensional Finite Element Formulation and Scalable Domain Decomposition for High Fidelity Rotor Dynamic Analysis

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Johnson, Wayne R.

    2009-01-01

    This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, that is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.

  11. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes which either have already been introduced into the design process or are rapidly in the process of becoming so. The codes fall into the area of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  12. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.

  13. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications for the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
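
    A minimal sketch of the method of manufactured solutions follows, applied to a plain centered-difference diffusion solve standing in for the combustion codes above: choose u(x) = sin(πx), derive the forcing f = −u″ = π² sin(πx) analytically, solve with that forcing, and confirm that the error falls by roughly a factor of four per mesh refinement (second-order accuracy). The toy solver is an assumption of this sketch, not one of the codes discussed.

        # Order-of-accuracy verification via a manufactured solution.
        import numpy as np

        def solve(n):
            x = np.linspace(0, 1, n + 1)
            h = 1.0 / n
            f = np.pi**2 * np.sin(np.pi * x[1:-1])       # manufactured forcing -u''
            A = (np.diag(2 * np.ones(n - 1))
                 - np.diag(np.ones(n - 2), 1)
                 - np.diag(np.ones(n - 2), -1)) / h**2   # centered-difference operator
            u = np.zeros(n + 1)
            u[1:-1] = np.linalg.solve(A, f)
            return np.abs(u - np.sin(np.pi * x)).max()   # error vs. manufactured solution

        for n in (16, 32, 64, 128):
            print(n, solve(n))                           # error drops ~4x per refinement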

  14. A finite element approach for solution of the 3D Euler equations

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.

    1986-01-01

    Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.

  15. Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Alemi, Alex; Sethna, James

    2014-03-01

    Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low-dimensional representation of high-dimensional data and define a model manifold which is an embedding of this low-dimensional structure in the higher-dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high-dimensional manifolds reveals a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.

  16. A VERSATILE SHARP INTERFACE IMMERSED BOUNDARY METHOD FOR INCOMPRESSIBLE FLOWS WITH COMPLEX BOUNDARIES

    PubMed Central

    Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.

    2010-01-01

    A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary and the method is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements; while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flow past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally two cases inspired from biology with highly complex three-dimensional bodies are simulated in order to demonstrate the versatility of the method. PMID:20216919

  17. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience, and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
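
    A heavily simplified sketch of the hybrid estimator is given below: the joint PDF is approximated by a mixture in which each ensemble member contributes a Gaussian factor in one subspace (with conditional moments that, in the paper, come from closed-form data assimilation formulae) times a kernel density factor in the remaining subspace. The one-dimensional subspaces, the stand-in conditional moments, and the bandwidth here are all assumptions of this sketch.

        # Hybrid mixture: conditional Gaussian factor x kernel density factor,
        # averaged over a small ensemble of members.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(9)
        N = 100                                   # small ensemble
        u2 = rng.standard_normal(N)               # samples in the low-dim subspace
        cond_mean = 0.8 * u2                      # stand-in conditional Gaussian moments
        cond_std = 0.5
        h = 0.3                                   # kernel bandwidth in the low-dim subspace

        def joint_pdf(x1, x2):
            """Average over members of Gaussian(x1 | member) * kernel(x2 - member)."""
            return np.mean(norm.pdf(x1, cond_mean, cond_std) * norm.pdf(x2, u2, h))

        print("p(0, 0) estimate:", joint_pdf(0.0, 0.0))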

  18. Spatial visualization in physics problem solving.

    PubMed

    Kozhevnikov, Maria; Motes, Michael A; Hegarty, Mary

    2007-07-08

    Three studies were conducted to examine the relation of spatial visualization to solving kinematics problems that involved either predicting the two-dimensional motion of an object, translating from one frame of reference to another, or interpreting kinematics graphs. In Study 1, 60 physics-naïve students were administered kinematics problems and spatial visualization ability tests. In Study 2, 17 (8 high- and 9 low-spatial ability) additional students completed think-aloud protocols while they solved the kinematics problems. In Study 3, the eye movements of fifteen (9 high- and 6 low-spatial ability) students were recorded while the students solved kinematics problems. In contrast to high-spatial students, most low-spatial students did not combine two motion vectors, were unable to switch frames of reference, and tended to interpret graphs literally. The results of the study suggest an important relationship between spatial visualization ability and solving kinematics problems with multiple spatial parameters. 2007 Cognitive Science Society, Inc.

  19. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
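
    The sketch below illustrates the factor-plus-thresholding recipe in its simplest form: take the leading principal components of the sample covariance as the common-factor part, then soft-threshold the off-diagonal entries of the residual covariance before recombining. The fixed threshold here replaces the adaptive, entry-dependent thresholding of Cai and Liu (2011) used in the paper, and the synthetic data are an assumption of this sketch.

        # Factor-structured covariance estimate with residual thresholding.
        import numpy as np

        rng = np.random.default_rng(10)
        n, p, K = 200, 100, 3
        F = rng.standard_normal((n, K))               # latent factors
        B = rng.standard_normal((p, K))               # loadings
        X = F @ B.T + rng.standard_normal((n, p))     # observed data

        S = np.cov(X, rowvar=False)
        vals, vecs = np.linalg.eigh(S)
        low_rank = (vecs[:, -K:] * vals[-K:]) @ vecs[:, -K:].T   # factor part
        R = S - low_rank                                         # residual covariance
        tau = 0.1
        R_thresh = np.sign(R) * np.maximum(np.abs(R) - tau, 0)   # soft-threshold entries
        np.fill_diagonal(R_thresh, np.diag(R))                   # keep variances intact
        Sigma_hat = low_rank + R_thresh
        print("estimate shape:", Sigma_hat.shape)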

  20. Support Vector Machines for Hyperspectral Remote Sensing Classification

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony; Cromp, R. F.

    1998-01-01

    The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain performances of 96% and 87% correct for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application, this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to classification of an agriculture scene.

  1. Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem

    NASA Astrophysics Data System (ADS)

    Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang

    2018-03-01

    The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of a SO(8) monopole in a nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variables separation method in different coordinates. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, which are all in the second order of the momentum components, satisfying the condition of superintegrability. The number 17 found here coincides with the prediction of the (2n − 1) law for the maximal order of superintegrability in the case n = 9. Until now, this law has been accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; therefore, our results can be treated as evidence that this definition of superintegrability may also apply to some vector equations such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.

  2. Nonlinear dimensionality reduction of CT histogram based feature space for predicting recurrence-free survival in non-small-cell lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2015-03-01

    The advantages of CT scanners with high resolution have allowed the improved detection of lung cancers. Recently released positive results from the National Lung Screening Trial (NLST) in the US show that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study shows the efficacy of CT-based screening, physicians often face the problem of deciding appropriate management strategies for maximizing patient survival and for preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance in predicting the likelihood of recurrence-free survival (RFS) for patients with NSCLC.

  3. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    PubMed

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

    In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are coarser than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  4. High Dimensional Classification Using Features Annealed Independence Rules.

    PubMed

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramount to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
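
    A minimal sketch of a features-annealed independence rule follows: rank features by two-sample t-statistics, keep the top m, and classify with a diagonal (independence) discriminant restricted to the selected features. The fixed m below replaces the error-bound-driven choice analyzed in the paper, and the synthetic data are an assumption of this sketch.

        # FAIR-style classification: t-statistic screening + independence rule.
        import numpy as np

        rng = np.random.default_rng(11)
        n, p, m = 100, 2000, 20
        y = rng.integers(0, 2, size=n)
        X = rng.standard_normal((n, p))
        X[y == 1, :10] += 1.0                        # 10 truly informative features

        mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
        s0, s1 = X[y == 0].var(0), X[y == 1].var(0)
        n0, n1 = (y == 0).sum(), (y == 1).sum()
        t = (mu1 - mu0) / np.sqrt(s0 / n0 + s1 / n1)  # two-sample t-statistics
        keep = np.argsort(-np.abs(t))[:m]             # anneal: keep top-m features

        def classify(x):
            """Diagonal discriminant evaluated on the selected features only."""
            d0 = ((x[keep] - mu0[keep]) ** 2 / s0[keep]).sum()
            d1 = ((x[keep] - mu1[keep]) ** 2 / s1[keep]).sum()
            return int(d1 < d0)

        pred = np.array([classify(x) for x in X])
        print("training accuracy:", (pred == y).mean())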

  5. PAGOSA physics manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weseloh, Wayne N.; Clancy, Sean P.; Painter, James W.

    2010-08-01

    PAGOSA is a computational fluid dynamics computer program developed at Los Alamos National Laboratory (LANL) for the study of high-speed compressible flow and high-rate material deformation. PAGOSA is a three-dimensional Eulerian finite difference code, solving problems with a wide variety of equations of state (EOSs), material strength, and explosive modeling options.

  6. ECAT: A New Computerized Tomographic Imaging System for Positron-Emitting Radiopharmaceuticals

    DOE R&D Accomplishments Database

    Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Kuhl, D. E.

    1977-01-01

    The ECAT was designed and developed as a complete computerized positron radionuclide imaging system capable of providing high-contrast, high-resolution, quantitative images in two-dimensional and tomographic formats. Flexibility in its various image-mode options allows it to be used for a wide variety of imaging problems.

  7. A second-order accurate kinetic-theory-based method for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, Suresh M.

    1986-01-01

    An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.

  8. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833

  9. Two-Dimensional Grammars And Their Applications To Artificial Intelligence

    NASA Astrophysics Data System (ADS)

    Lee, Edward T.

    1987-05-01

    During the past several years, the concepts and techniques of two-dimensional grammars have attracted growing attention as promising avenues of approach to problems in picture generation as well as in picture description, representation, recognition, transformation, and manipulation. Two-dimensional grammar techniques serve the purpose of exploiting the structure or underlying relationships in a picture. This approach attempts to describe a complex picture in terms of its components and their relative positions. This resembles the way a sentence is described in terms of its words and phrases, and the terms structural picture recognition, linguistic picture recognition, or syntactic picture recognition are often used. With this approach, the problem of picture recognition becomes similar to that of phrase recognition in a language. However, when describing pictures using a string grammar (a one-dimensional grammar), the only relation between sub-pictures and/or primitives is concatenation; that is, each picture or primitive can be connected only at the left or right. This one-dimensional relation has not been very effective in describing two-dimensional pictures. A natural generalization is to use two-dimensional grammars. In this paper, two-dimensional grammars and their applications to artificial intelligence are presented. Picture grammars and two-dimensional grammars are introduced and illustrated by examples. In particular, two-dimensional grammars for generating all possible squares and all possible rhombuses are presented. The applications of two-dimensional grammars to solving region filling problems are discussed. An algorithm for region filling using two-dimensional grammars is presented together with illustrative examples. The advantages of using this algorithm in terms of computation time are also stated. A high-level description of a two-level picture generation system is proposed. The first level is picture primitive generation using two-dimensional grammars. The second level is picture generation using either string description or entity-relationship (ER) diagram description. Illustrative examples are also given. The advantages of ER diagram description, together with a comparison to string description, are also presented. The results obtained in this paper may have useful applications in artificial intelligence, robotics, expert systems, picture processing, pattern recognition, knowledge engineering, and pictorial database design. Furthermore, examples related to satellite surveillance and identification are also included.

  10. Local Gram-Schmidt and covariant Lyapunov vectors and exponents for three harmonic oscillator problems

    NASA Astrophysics Data System (ADS)

    Hoover, Wm. G.; Hoover, Carol G.

    2012-02-01

    We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.

  11. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small-distance pairs. To address these issues, we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and is thus more robust. The positive definite property of the matrix exponential deals with the SSS problem, and the decay behavior of exponential embedding places greater emphasis on small-distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI datasets, and the Georgia Tech face database show that the proposed framework can well address the issues mentioned above.
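
    To convey the flavor of the idea, here is an illustrative sketch, not any of the specific extended algorithms above; applying expm(-L) to a similarity-graph Laplacian is an assumption made for demonstration:

    import numpy as np
    from scipy.linalg import expm

    def exponential_embedding(W, dim=2):
        """Embed data given a symmetric pairwise similarity matrix W.

        Instead of using the graph Laplacian L = D - W directly, use its
        matrix exponential, which is always positive definite (sidestepping
        the SSS problem) and decays smoothly with graph distance.
        """
        D = np.diag(W.sum(axis=1))
        L = D - W
        K = expm(-L)                       # symmetric positive definite kernel
        vals, vecs = np.linalg.eigh(K)
        # leading eigenvectors of the exponential kernel give the embedding
        return vecs[:, -dim:] * np.sqrt(vals[-dim:])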

  12. The Role of Motion Concepts in Understanding Non-Motion Concepts

    PubMed Central

    Khatin-Zadeh, Omid; Banaruee, Hassan; Khoshsima, Hooshang; Marmolejo-Ramos, Fernando

    2017-01-01

    This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems. PMID:29240715

  13. Maximization of Learning Speed Due to Neuronal Redundancy in Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2016-11-01

    Adaptable neural activity contributes to the flexibility of human behavior, which is optimized in situations such as motor learning and decision making. Although learning signals in motor learning and decision making are low-dimensional, neural activity, which is very high dimensional, must be modified to achieve optimal performance based on the low-dimensional signal, resulting in a severe credit-assignment problem. Despite this problem, the human brain contains a vast number of neurons, leaving an open question: what is the functional significance of the huge number of neurons? Here, I address this question by analyzing a redundant neural network with a reinforcement-learning algorithm in which the numbers of neurons and output units are N and M, respectively. Because many combinations of neural activity can generate the same output under the condition of N ≫ M, I refer to the index N - M as neuronal redundancy. Although greater neuronal redundancy makes the credit-assignment problem more severe, I demonstrate that a greater degree of neuronal redundancy facilitates learning speed. Thus, in an apparent contradiction of the credit-assignment problem, I propose the hypothesis that a functional role of a huge number of neurons or a huge degree of neuronal redundancy is to facilitate learning speed.

  14. Lax-Wendroff and TVD finite volume methods for unidimensional thermomechanical numerical simulations of impacts on elastic-plastic solids

    NASA Astrophysics Data System (ADS)

    Heuzé, Thomas

    2017-10-01

    We present in this work two finite volume methods for the simulation of unidimensional impact problems, both for bars and plane waves, on elastic-plastic solid media within the small strain framework. First, an extension of Lax-Wendroff to elastic-plastic constitutive models with linear and nonlinear hardenings is presented. Second, a high-order TVD method based on flux-difference splitting [1] and the Superbee flux limiter [2] is coupled with an approximate elastic-plastic Riemann solver for nonlinear hardenings, and follows that of Fogarty [3] for linear ones. Thermomechanical coupling is accounted for through dissipation heating and thermal softening, and adiabatic conditions are assumed. This paper essentially focuses on one-dimensional problems since analytical solutions exist or can easily be developed. Accordingly, these two numerical methods are compared to analytical solutions and to the explicit finite element method on test cases involving discontinuous and continuous solutions. This allows us to study in more detail their respective performance during the loading, unloading, and reloading stages. Particular attention is also paid to the accuracy of the computed plastic strains, some differences being found according to the numerical method used. A Lax-Wendroff two-dimensional discretization of a one-dimensional problem is also appended at the end to demonstrate the extensibility of such numerical schemes to multidimensional problems.
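
    For reference, a minimal sketch of the two ingredients in their simplest scalar setting: the classical Lax-Wendroff update for linear advection and the Superbee limiter function. The paper's elastic-plastic extension and Riemann solver are not reproduced here.

    import numpy as np

    def superbee(r):
        # Superbee limiter [2]: phi(r) = max(0, min(2r, 1), min(r, 2))
        return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                          np.minimum(r, 2.0)))

    def lax_wendroff_advection(u0, a, dx, dt, nsteps):
        """Classical (unlimited) Lax-Wendroff update for u_t + a u_x = 0 with
        periodic boundaries -- a scalar stand-in for the elastic-plastic
        systems treated in the paper."""
        u = u0.copy()
        c = a * dt / dx                  # Courant number; stability needs |c| <= 1
        for _ in range(nsteps):
            up = np.roll(u, -1)          # u_{i+1}
            um = np.roll(u, 1)           # u_{i-1}
            u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
        return u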

  15. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than those computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The hypothesis computed by DTB is smaller and more comprehensible than those computed by AdaBoost and Naïve Bayes.

  16. Fully Coupled Nonlinear Fluid Flow and Poroelasticity in Arbitrarily Fractured Porous Media: A Hybrid-Dimensional Computational Model

    NASA Astrophysics Data System (ADS)

    Jin, L.; Zoback, M. D.

    2017-10-01

    We formulate the problem of fully coupled transient fluid flow and quasi-static poroelasticity in arbitrarily fractured, deformable porous media saturated with a single-phase compressible fluid. The fractures we consider are hydraulically highly conductive, allowing discontinuous fluid flux across them; mechanically, they act as finite-thickness shear deformation zones prior to failure (i.e., nonslipping and nonpropagating), leading to "apparent discontinuity" in strain and stress across them. Local nonlinearity arising from pressure-dependent permeability of fractures is also included. Taking advantage of the typically high aspect ratio of a fracture, we do not resolve transversal variations and instead assume uniform flow velocity and simple shear strain within each fracture, rendering the coupled problem numerically more tractable. Fractures are discretized as lower-dimensional zero-thickness elements tangentially conforming to unstructured matrix elements. A hybrid-dimensional, equal-low-order, two-field mixed finite element method is developed, which is free from stability issues for a drained coupled system. The fully implicit backward Euler scheme is employed for advancing the fully coupled solution in time, and the Newton-Raphson scheme is implemented for linearization. We show that the fully discretized system retains a canonical form of a fracture-free poromechanical problem; the effect of fractures is translated to the modification of some existing terms as well as the addition of several terms to the capacity, conductivity, and stiffness matrices, thereby allowing the development of independent subroutines for treating fractures within a standard computational framework. Our computational model provides more realistic inputs for some fracture-dominated poromechanical problems like fluid-induced seismicity.

  17. Joint principal trend analysis for longitudinal high-dimensional data.

    PubMed

    Zhang, Yuping; Ouyang, Zhengqing

    2018-06-01

    We consider a research scenario motivated by integrating multiple sources of information for better knowledge discovery in diverse dynamic biological processes. Given two longitudinal high-dimensional datasets for a group of subjects, we want to extract shared latent trends and identify relevant features. To solve this problem, we present a new statistical method named joint principal trend analysis (JPTA). We demonstrate the utility of JPTA through simulations and applications to gene expression data of the mammalian cell cycle and longitudinal transcriptional profiling data in response to influenza viral infections. © 2017, The International Biometric Society.

  18. Influence analysis for high-dimensional time series with an application to epileptic seizure onset zone detection

    PubMed Central

    Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred

    2013-01-01

    Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
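
    A toy pipeline in the same spirit, compressing co-moving channels into a few principal-component factors before Granger testing; this is an illustrative sketch using statsmodels, not the paper's estimator:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    def factor_granger(X, y, n_factors=2, maxlag=5):
        """Compress a high-dimensional, co-moving multivariate series X
        (time x channels) into a few principal-component factors, then test
        whether each factor Granger-causes a target series y."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        factors = Xc @ Vt[:n_factors].T            # latent common components
        results = []
        for k in range(n_factors):
            # column order: [target, candidate cause]
            data = np.column_stack([y, factors[:, k]])
            results.append(grangercausalitytests(data, maxlag=maxlag,
                                                 verbose=False))
        return results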

  19. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.

  20. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843

  1. Modal ring method for the scattering of sound

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.

  2. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.

  3. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  4. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and efficiency than the improved element-free Galerkin (IEFG) method.

  5. A general method for constructing multidimensional molecular potential energy surfaces from {ital ab} {ital initio} calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, T.; Rabitz, H.

    1996-02-01

    A general interpolation method for constructing smooth molecular potential energy surfaces (PESs) from ab initio data is proposed within the framework of the reproducing kernel Hilbert space and the inverse problem theory. The general expression for an a posteriori error bound of the constructed PES is derived. It is shown that the method yields globally smooth potential energy surfaces that are continuous and possess derivatives up to second order or higher. Moreover, the method is amenable to correct symmetry properties and asymptotic behavior of the molecular system. Finally, the method is generic and can be easily extended from low-dimensional problems involving two and three atoms to high-dimensional problems involving four or more atoms. Basic properties of the method are illustrated by the construction of a one-dimensional potential energy curve of the He-He van der Waals dimer using the exact quantum Monte Carlo calculations of Anderson et al. [J. Chem. Phys. 99, 345 (1993)], a two-dimensional potential energy surface of the HeCO van der Waals molecule using recent ab initio calculations by Tao et al. [J. Chem. Phys. 101, 8680 (1994)], and a three-dimensional potential energy surface of the H3+ molecular ion using highly accurate ab initio calculations of Röhse et al. [J. Chem. Phys. 101, 2231 (1994)]. In the first two cases the constructed potentials clearly exhibit the correct asymptotic forms, while in the last case the constructed potential energy surface is in excellent agreement with that constructed by Röhse et al. using a low-order polynomial fitting procedure. © 1996 American Institute of Physics.
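
    The backbone of such a reproducing-kernel construction is a simple linear solve on the ab initio data. A generic sketch follows, with a placeholder Gaussian kernel standing in for the paper's reproducing kernels (an assumption made purely for illustration):

    import numpy as np

    def rkhs_interpolate(x_train, y_train, x_query, kernel):
        """Reproducing-kernel interpolation of ab initio energies: solve for
        the expansion coefficients on the data, then evaluate anywhere."""
        K = kernel(x_train[:, None], x_train[None, :])  # Gram matrix on the data
        coef = np.linalg.solve(K, y_train)              # expansion coefficients
        Kq = kernel(x_query[:, None], x_train[None, :])
        return Kq @ coef                                # smooth interpolated PES

    # usage with a placeholder Gaussian kernel (not the paper's kernel choice)
    gauss = lambda a, b: np.exp(-(a - b) ** 2)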

  6. High-resolution three-dimensional imaging radar

    NASA Technical Reports Server (NTRS)

    Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)

    2010-01-01

    A three-dimensional imaging radar operating at high frequency, e.g., 670 GHz, is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak finding algorithm may be used in processing for each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point source calibration target and applied to received signals from active targets prior to FFT-based range compression to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.

  7. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
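
    A simplified sketch of the estimator's two-part structure, using principal-component factors and a single global soft-threshold where the paper works with estimated factors and adaptive entry-wise thresholds:

    import numpy as np

    def factor_covariance(X, k, tau):
        """Low-rank common-factor part plus soft-thresholded (sparse)
        idiosyncratic part. X is n x p; k is the number of factors; tau is
        a global threshold level (a simplification of the adaptive choice)."""
        n, p = X.shape
        X = X - X.mean(axis=0)
        S = X.T @ X / n                                    # sample covariance
        vals, vecs = np.linalg.eigh(S)
        V = vecs[:, -k:]                                   # top-k eigenvectors
        low_rank = V @ np.diag(vals[-k:]) @ V.T            # common-factor part
        R = S - low_rank                                   # idiosyncratic part
        off = np.sign(R) * np.maximum(np.abs(R) - tau, 0)  # soft-threshold
        np.fill_diagonal(off, np.diag(R))                  # keep diagonal intact
        return low_rank + off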

  8. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    PubMed Central

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
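
    A minimal sketch of one natural starting point under this missing-completely-at-random setting: the generalized sample covariance computed from pairwise-complete observations. The paper's bandable/sparse regularization step is only indicated in the comment.

    import numpy as np

    def pairwise_complete_cov(X):
        """Entry (j, k) averages x_j * x_k over the samples where both
        coordinates are observed (NaN marks a missing value; columns are
        assumed pre-centered for brevity). Bandable or sparse structure
        would then be imposed on this raw estimate, e.g. by thresholding."""
        mask = ~np.isnan(X)
        Xz = np.where(mask, X, 0.0)
        n_jk = mask.T.astype(float) @ mask   # samples observing both j and k
        return (Xz.T @ Xz) / np.maximum(n_jk, 1)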

  9. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    PubMed

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  10. Statistical mechanics of complex neural systems and high dimensional data

    NASA Astrophysics Data System (ADS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-03-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.

  11. Physical Simulation for Probabilistic Motion Tracking

    DTIC Science & Technology

    2008-01-01

    learn a low-dimensional embedding of the high-dimensional kinematic data and then attempt to solve the problem in this more manageable low... rotations and foot skate). Such artifacts can be attributed to the general lack of physically plausible priors [2] (that can account for static and/or... temporal priors of the form p(x_{f+1} | x_f) = N(x_f + γ_f, Σ) (where γ_f is a scaled velocity, learned or inferred) have also been proposed [13] and shown to

  12. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints, as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.

  13. Solving the forward problem of magnetoacoustic tomography with magnetic induction by means of the finite element method

    NASA Astrophysics Data System (ADS)

    Li, Xun; Li, Xu; Zhu, Shanan; He, Bin

    2009-05-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulae describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for the model calibration and evaluation of the corresponding acoustic field.

  14. Solving the Forward Problem of Magnetoacoustic Tomography with Magnetic Induction by Means of the Finite Element Method

    PubMed Central

    Li, Xun; Li, Xu; Zhu, Shanan; He, Bin

    2010-01-01

    Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, the three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulas describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for model calibration and evaluation of the corresponding acoustic field. PMID:19351978

  15. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low- and high-frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
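
    Since the tridiagonal method serves as the baseline throughout, here is a minimal sketch of the Thomas algorithm it refers to, a standard O(n) direct solve for tridiagonal systems (variable names are illustrative):

    import numpy as np

    def thomas_solve(a, b, c, d):
        """Thomas algorithm for a tridiagonal system. a, b, c are the sub-,
        main-, and super-diagonals (a[0] and c[-1] unused), d the right-hand
        side; returns the solution x."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination sweep
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x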

  16. A Noniterative Technique for the Direct Implementation of Well Bore Boundary Conditions in Three-Dimensional Heterogeneous Formations

    NASA Astrophysics Data System (ADS)

    Sudicky, E. A.; Unger, A. J. A.; Lacombe, S.

    1995-02-01

    A noniterative algorithm for handling prescribed well bore boundary conditions while pumping or injecting fluid in a three-dimensional heterogeneous aquifer is described. The algorithm is formulated by superimposing conductive one-dimensional line elements representing the well screen onto the three-dimensional matrix elements representing the aquifer. Storage in the well casing is also naturally accommodated by the superposition of the line elements. The numerical algorithm is verified by comparison with results obtained from the solution of Papadopulos and Cooper (1967). A large-scale example problem involving groundwater extraction from a partially penetrating pumping well located in a highly heterogeneous confined aquifer is presented to demonstrate the utility of the approach.

  17. [Advances in the research of application of collagen in three-dimensional bioprinting].

    PubMed

    Li, H H; Luo, P F; Sheng, J J; Liu, G C; Zhu, S H

    2016-10-20

    As a new industrial technology with characteristics of high precision and accuracy, three-dimensional bioprinting is increasingly widely applied in the field of medical research. Collagen is one of the most common ingredients in tissue, and it has good properties as a biological material. There are many reports of using collagen as the main component of the "ink" in three-dimensional bioprinting. However, the collagen applied is mainly from heterogeneous sources, which may cause some problems in application. Recombinant human collagen can be obtained from microorganism fermentation by transgenic technology, but more research should be done to confirm its properties. This article reviews the advances in the research of collagen and its biological application in three-dimensional bioprinting.

  18. Solution of the two-dimensional spectral factorization problem

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1985-01-01

    An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.
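
    In symbols, with notation assumed here for illustration rather than taken from the paper, the result states that for a continuous spectrum S ≥ 0 and any ε > 0 there is a recursively stable polynomial H supported on a nonsymmetric half-plane 𝓗 such that

    \[
      \sup_{(\omega_1,\omega_2)} \Bigl|\, S(\omega_1,\omega_2)
        - \bigl| H(e^{j\omega_1}, e^{j\omega_2}) \bigr|^{2} \Bigr| < \varepsilon,
      \qquad
      H(z_1,z_2) = \sum_{(m,n)\in\mathcal{H}} h_{mn}\, z_1^{-m} z_2^{-n}.
    \]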

  19. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.

  20. Are strategies in physics discrete? A remote controlled investigation

    NASA Astrophysics Data System (ADS)

    Heck, Robert; Sherson, Jacob F.; www.scienceathome.org Team; players Team

    2017-04-01

    In science, strategies are formulated based on observations, calculations, or physical insight. For any given physical process, often several distinct strategies are identified. Are these truly distinct or simply low-dimensional representations of a high-dimensional continuum of solutions? Our online citizen science platform www.scienceathome.org, used by more than 150,000 people, recently enabled finding solutions to fast 1D single-atom transport [Nature2016]. Surprisingly, player trajectories bunched into discrete solution strategies (clans), yielding clear, distinct physical insight. Introducing a multi-dimensional vector in the direction of other local maxima, we locate narrow, high-yield "bridges" connecting the clans. This demonstrates for this problem that a continuum of solutions with no clear physical interpretation does in fact exist. Next, four distinct strategies for creating Bose-Einstein condensates were investigated experimentally: hybrid and crossed dipole trap configurations in combination with either large-volume or dimple loading from a magnetic trap. We find that although each conventional strategy appears locally optimal, "bridges" can be identified. In a novel approach, the problem was gamified, allowing 750 citizen scientists to contribute to the experimental optimization and yielding nearly a factor-of-two improvement in atom number.

  1. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when the dimensionality of the data exceeds the number of training samples, a situation caused by the time, cost, and human resources required for field surveys. Moreover, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity addresses the small-sample-size problem through dimensionality reduction, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral information with spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis proves the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
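
    A compact way to see the role of the two penalties is the fused lasso objective itself. The sketch below (using cvxpy, with illustrative penalty weights) shows the l1 term that prunes bands and the total-variation term that groups adjacent bands:

    import cvxpy as cp

    def fused_lasso(X, y, lam1=1.0, lam2=1.0):
        """Fit fused lasso coefficients: one coefficient per spectral band.
        lam1 weights sparsity (few selected bands); lam2 weights fusion
        (adjacent bands share coefficients). Weights here are placeholders."""
        n, p = X.shape
        beta = cp.Variable(p)
        loss = cp.sum_squares(X @ beta - y) / (2 * n)
        sparsity = lam1 * cp.norm1(beta)           # prunes irrelevant bands
        fusion = lam2 * cp.norm1(cp.diff(beta))    # groups neighboring bands
        cp.Problem(cp.Minimize(loss + sparsity + fusion)).solve()
        return beta.value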

  2. Children's Strategies for Solving Two- and Three-Dimensional Combinatorial Problems.

    ERIC Educational Resources Information Center

    English, Lyn D.

    1993-01-01

    Investigated strategies that 7- to 12-year-old children (n=96) spontaneously applied in solving novel combinatorial problems. With experience in solving two-dimensional problems, children were able to refine their strategies and adapt them to three dimensions. Results on some problems indicated significant effects of age. (Contains 32 references.)…

  3. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    PubMed

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank; that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

  4. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137
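
    For contrast with the paper's joint formulation, a per-condition baseline that ignores shared structure can be written in a few lines; this is an illustrative sketch, and sklearn's GraphicalLasso is not the ADMM solver or row-column overlap penalty used in the paper:

    from sklearn.covariance import GraphicalLasso

    def per_condition_networks(X_by_condition, alpha=0.05):
        """Fit an independent graphical lasso per condition. The paper's
        node-based method instead couples the K estimates through a
        row-column overlap norm penalty (not shown here)."""
        networks = []
        for X in X_by_condition:                  # one data matrix per condition
            model = GraphicalLasso(alpha=alpha).fit(X)
            networks.append(model.precision_)     # nonzeros = conditional deps
        return networks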

  5. Uncertainty quantification for complex systems with very high dimensional response using Grassmann manifold variations

    NASA Astrophysics Data System (ADS)

    Giovanis, D. G.; Shields, M. D.

    2018-07-01

    This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
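
    The pairwise comparisons at the heart of the refinement reduce to principal-angle distances between subspaces. A minimal sketch follows; this is one standard Grassmann metric, and the specific metric choices made in the paper are not reproduced here:

    import numpy as np

    def grassmann_distance(A, B):
        """Geodesic distance between the subspaces spanned by the columns of
        A and B (each assumed to have orthonormal columns, e.g. from an SVD
        of a full-field solution snapshot), via principal angles."""
        s = np.linalg.svd(A.T @ B, compute_uv=False)
        angles = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles
        return np.linalg.norm(angles)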

  6. Distributed Learning, Extremum Seeking, and Model-Free Optimization for the Resilient Coordination of Multi-Agent Adversarial Groups

    DTIC Science & Technology

    2016-09-07

    been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. 3. ES has recently been implemented on the Mars... high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. 15... studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple

  7. A deep learning framework for causal shape transformation.

    PubMed

    Lore, Kin Gwn; Stoecklein, Daniel; Davies, Michael; Ganapathysubramanian, Baskar; Sarkar, Soumik

    2018-02-01

    Recurrent neural network (RNN) and Long Short-Term Memory (LSTM) networks are the common go-to architectures for exploiting sequential information where the output is dependent on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain, which may not be suitable for applications involving the prediction of a step-wise transformation sequence that is dependent on the previous states only in the visible domain, with a known terminal state. We propose a hybrid architecture of convolutional neural networks (CNNs) and stacked autoencoders (SAEs) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support, and we demonstrate its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map is very seldom explored. This work serves as a fruitful use case for applied scientists and engineers in how deep learning can be beneficial as a solution for high-dimensional physical problems, potentially opening doors to impactful advances in fields such as materials science and medical biology, where multistep topological transformations are a key element. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu

    We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004), which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.

  9. Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.

    PubMed

    Venturi, D; Karniadakis, G E

    2014-06-08

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.

  10. Metal Oxide Gas Sensor Drift Compensation Using a Two-Dimensional Classifier Ensemble

    PubMed Central

    Liu, Hang; Chu, Renzhi; Tang, Zhenan

    2015-01-01

    Sensor drift is the most challenging problem in gas sensing at present. We propose a novel two-dimensional classifier ensemble strategy to solve the gas discrimination problem, regardless of the gas concentration, with high accuracy over extended periods of time. This strategy is appropriate for multi-class classifiers that consist of combinations of pairwise classifiers, such as support vector machines. We compare the performance of the strategy with those of competing methods in an experiment based on a public dataset that was compiled over a period of three years. The experimental results demonstrate that the two-dimensional ensemble outperforms the other methods considered. Furthermore, we propose a pre-aging process inspired by that applied to the sensors to improve the stability of the classifier ensemble. The experimental results demonstrate that the weight of each multi-class classifier model in the ensemble remains fairly static before and after the addition of new classifier models to the ensemble, when a pre-aging procedure is applied. PMID:25942640
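
    As a rough illustration of this kind of ensemble (not the authors' exact procedure), the sketch below trains one pairwise one-vs-one SVM per measurement batch and combines the models by accuracy-weighted voting; the drifting synthetic data, the weighting rule, and all hyperparameters are assumptions made for illustration. With only two classes the one-vs-one machinery reduces to a single pairwise classifier; with k gases it would contain k(k-1)/2 of them.

    ```python
    # Minimal sketch of a weighted ensemble of pairwise (one-vs-one) SVMs,
    # in the spirit of drift-compensating classifier ensembles.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def make_batch(n=200, drift=0.0):
        """Two synthetic 2-D 'gas' classes whose means drift over time."""
        half = n // 2
        X0 = rng.normal([0.0 + drift, 0.0], 0.5, size=(half, 2))
        X1 = rng.normal([2.0 + drift, 2.0], 0.5, size=(half, 2))
        return np.vstack([X0, X1]), np.array([0] * half + [1] * half)

    # One pairwise (one-vs-one) SVM per measurement batch.
    batches = [make_batch(drift=d) for d in (0.0, 0.3, 0.6)]
    models = [SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
              for X, y in batches]

    # Weight each model by its accuracy on the most recent batch.
    X_recent, y_recent = batches[-1]
    weights = np.array([m.score(X_recent, y_recent) for m in models])
    weights /= weights.sum()

    def ensemble_predict(X):
        votes = np.stack([m.predict(X) for m in models])   # (n_models, n)
        score1 = (weights[:, None] * (votes == 1)).sum(axis=0)
        return (score1 > 0.5).astype(int)                  # weighted vote

    X_test, y_test = make_batch(drift=0.7)
    print("ensemble accuracy:", (ensemble_predict(X_test) == y_test).mean())
    ```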

  11. Calculation of flow about posts and powerhead model. [space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Anderson, P. G.; Farmer, R. C.

    1985-01-01

    A three-dimensional analysis of the non-uniform flow around the liquid oxygen (LOX) posts in the Space Shuttle Main Engine (SSME) powerhead was performed to determine possible factors contributing to the failure of the posts. Also performed was a three-dimensional numerical fluid flow analysis of the high pressure fuel turbopump (HPFTP) exhaust system, consisting of the turnaround duct (TAD), the two-duct hot gas manifold (HGM), and the Version B transfer ducts. The analysis was conducted in the following manner: (1) modeling the flow around a single post and small clusters (2 to 10) of posts; (2) modeling the velocity field in the cross plane; and (3) modeling the entire flow region with a three-dimensional network-type model. Shear stress functions that permit viscous analysis without requiring excessive numbers of computational grid points were developed. These wall functions, laminar and turbulent, have been compared to standard Blasius solutions and are directly applicable to the cylinder-in-cross-flow class of problems to which the LOX post problem belongs.

  12. Convolutionless Nakajima–Zwanzig equations for stochastic analysis in nonlinear dynamical systems

    PubMed Central

    Venturi, D.; Karniadakis, G. E.

    2014-01-01

    Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima–Zwanzig–Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection–reaction problems. PMID:24910519

  13. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAER,THOMAS A.; SACKINGER,PHILIP A.; SUBIA,SAMUEL R.

    1999-10-14

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to distribute computational work evenly across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.

  14. Computational Studies of Strongly Correlated Quantum Matter

    NASA Astrophysics Data System (ADS)

    Shi, Hao

    The study of strongly correlated quantum many-body systems is an outstanding challenge. Highly accurate results are needed for the understanding of practical and fundamental problems in condensed-matter physics, high energy physics, materials science, quantum chemistry, and so on. Familiar mean-field or perturbative methods tend to be ineffective. Numerical simulations provide a promising approach for studying such systems. The fundamental difficulty of numerical simulation is that the dimension of the Hilbert space needed to describe interacting systems increases exponentially with the system size. Quantum Monte Carlo (QMC) methods are one of the best approaches to tackle the problem of the enormous Hilbert space. They have been highly successful for boson systems and unfrustrated spin models. For systems with fermions, the exchange symmetry in general causes the infamous sign problem, making the statistical noise in the computed results grow exponentially with the system size. This hinders our understanding of interesting physics such as high-temperature superconductivity and the metal-insulator phase transition. In this thesis, we present a variety of new developments in the auxiliary-field quantum Monte Carlo (AFQMC) methods, including the incorporation of symmetry in both the trial wave function and the projector, the development of a constraint release method, the use of the force bias to drastically improve efficiency in the Metropolis framework, the identification and solution of the infinite variance problem, and the sampling of Hartree-Fock-Bogoliubov wave functions. With these developments, some of the most challenging many-electron problems are now under control. We obtain an exact numerical solution of the two-dimensional strongly interacting Fermi atomic gas, determine the ground state properties of the 2D Fermi gas with Rashba spin-orbit coupling, provide benchmark results for the ground state of the two-dimensional Hubbard model, and establish that the Hubbard model has a stripe order in the underdoped region.

  15. Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube

    NASA Astrophysics Data System (ADS)

    Zhou, Guangzhao; Xu, Kun; Liu, Feng

    2018-01-01

    The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.

  16. Teaching the Falling Ball Problem with Dimensional Analysis

    ERIC Educational Resources Information Center

    Sznitman, Josué; Stone, Howard A.; Smits, Alexander J.; Grotberg, James B.

    2013-01-01

    Dimensional analysis is often a subject reserved for students of fluid mechanics. However, the principles of scaling and dimensional analysis are applicable to various physical problems, many of which can be introduced early on in a university physics curriculum. Here, we revisit one of the best-known examples from a first course in classic…
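
    The abstract is truncated above; since the article's own example is elided, a standard worked version of the falling-ball argument (not necessarily the one used in the article) runs as follows. For a ball dropped from rest through a height s under gravity g, posit t = C s^{a} g^{b} and match dimensions:

    [T] = [L]^{a}\left([L][T]^{-2}\right)^{b} \;\Rightarrow\; a + b = 0, \quad -2b = 1 \;\Rightarrow\; a = \tfrac{1}{2},\; b = -\tfrac{1}{2},

    so t = C\sqrt{s/g}; dimensional analysis fixes everything except the dimensionless constant, which solving the dynamics shows to be C = \sqrt{2}.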

  17. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; ...

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
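
    A minimal sketch of the geometric idea, assuming a discontinuity surface that is star-shaped with respect to the origin (the actual method replaces the tensor angular grid below with an adaptive hierarchical sparse grid; the ellipsoidal test function and tolerances are illustrative):

    ```python
    # Recover the jump surface as a radius r = g(angles) by a 1-D search
    # along rays from the origin.
    import numpy as np

    def qoi(x):
        """Discontinuous quantity of interest: jump across an ellipsoid."""
        return 1.0 if (x[0] / 1.5) ** 2 + x[1] ** 2 + x[2] ** 2 < 1.0 else 0.0

    def ray(theta, phi):
        """Unit direction in spherical (here: 3-D hyperspherical) coords."""
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def radius_of_jump(direction, r_max=3.0, tol=1e-8):
        """1-D bisection for the discontinuity radius along one ray."""
        lo, hi = 0.0, r_max
        inside = qoi(np.zeros(3))
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if qoi(mid * direction) == inside:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Sample g(theta, phi) on a coarse angular grid.
    for theta in np.linspace(0.1, np.pi - 0.1, 3):
        for phi in np.linspace(0.0, 2 * np.pi, 3, endpoint=False):
            print(f"theta={theta:.2f} phi={phi:.2f} "
                  f"r={radius_of_jump(ray(theta, phi)):.4f}")
    ```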

  18. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  19. Fast generation of Fresnel holograms based on multirate filtering.

    PubMed

    Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung

    2009-12-01

    One of the major problems in computer-generated holography is the high computation cost involved for the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal parallax only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole and the reconstructed image can be observed easily.

  20. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of the first-order sensitivity indices of Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions, which may then be analyzed separately.
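
    For context, the sketch below computes the classical first-order Sobol' indices that the proposed interaction index complements, using a pick-freeze (Saltelli-style) Monte Carlo estimator; the Ishigami-type test function and the sample size are illustrative assumptions, not the authors' procedure.

    ```python
    # Pick-freeze Monte Carlo estimate of first-order Sobol' indices.
    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):
        """Ishigami test function, with a known interaction structure."""
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    n, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    yA, yB = f(A), f(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # freeze all inputs except x_i
        yABi = f(ABi)
        S_i = np.mean(yB * (yABi - yA)) / var_y   # Saltelli (2010) estimator
        print(f"S_{i + 1} = {S_i:.3f}")
    ```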

  1. Health Information Retrieval Tool (HIRT)

    PubMed Central

    Nyun, Mra Thinzar; Ogunyemi, Omolola; Zeng, Qing

    2002-01-01

    The World Wide Web (WWW) is a powerful way to deliver on-line health information, but one major problem limits its value to consumers: content is highly distributed, while relevant and high quality information is often difficult to find. To address this issue, we experimented with an approach that utilizes three-dimensional anatomic models in conjunction with free-text search.

  2. Comment on "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit".

    PubMed

    Carrillo-Bernal, M A; Núñez-Yépez, H N; Salas-Brito, A L; Solis, Didier A

    2015-02-01

    In the referred paper, the authors use a numerical method for solving ordinary differential equations and a softened Coulomb potential −1/√(x² + β²) to study the one-dimensional Coulomb problem by letting the parameter β approach zero. We note that even though their numerical findings in the soft-potential scenario are correct, their conclusions do not extend to the one-dimensional Coulomb problem (β=0). Their claims regarding the possible existence of an even ground state with energy −∞ and a Dirac-δ eigenfunction, and of well-defined parity eigenfunctions in the one-dimensional hydrogen atom, are questioned.

  3. Two-dimensional supersonic nonlinear Schrödinger flow past an extended obstacle

    NASA Astrophysics Data System (ADS)

    El, G. A.; Kamchatnov, A. M.; Khodorovskii, V. V.; Annibale, E. S.; Gammal, A.

    2009-10-01

    Supersonic flow of a superfluid past a slender impenetrable macroscopic obstacle is studied in the framework of the two-dimensional (2D) defocusing nonlinear Schrödinger (NLS) equation. This problem is of fundamental importance as a dispersive analog of the corresponding classical gas-dynamics problem. Assuming the oncoming flow speed is sufficiently high, we asymptotically reduce the original boundary-value problem for a steady flow past a slender body to the one-dimensional dispersive piston problem described by the nonstationary NLS equation, in which the role of time is played by the stretched x coordinate and the piston motion curve is defined by the spatial body profile. Two steady oblique spatial dispersive shock waves (DSWs) spreading from the pointed ends of the body are generated in both half planes. These are described analytically by constructing appropriate exact solutions of the Whitham modulation equations for the front DSW and by using a generalized Bohr-Sommerfeld quantization rule for the oblique dark soliton fan in the rear DSW. We propose an extension of the traditional modulation description of DSWs to include the linear “ship-wave” pattern forming outside the nonlinear modulation region of the front DSW. Our analytic results are supported by direct 2D unsteady numerical simulations and are relevant to recent experiments on Bose-Einstein condensates freely expanding past obstacles.
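
    In dimensionless units, the governing model is the 2D defocusing NLS (Gross-Pitaevskii) equation

    i\,\partial_t \psi = -\tfrac{1}{2}\left(\partial_x^2 + \partial_y^2\right)\psi + |\psi|^2 \psi,

    whose Madelung variables \psi = \sqrt{\rho}\,e^{i\phi}, \mathbf{u} = \nabla\phi make it a dispersive analog of gas dynamics with sound speed c = \sqrt{\rho} and Mach number M = |\mathbf{u}|/c (normalizations vary between papers; this is one common choice).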

  4. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code developed by Dr. Harold Atkins for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of the code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to travel from one end of the nozzle to the other).

  5. Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time-space decomposition

    NASA Astrophysics Data System (ADS)

    Magee, Daniel J.; Niemeyer, Kyle E.

    2018-03-01

    The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× compared with simple GPU versions and 7-300× compared with parallel CPU versions, over a range of problem sizes. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
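
    A CPU-side sketch of the dependency "triangle" the swept rule exploits, using an explicit 1-D heat-equation step (array sizes and the scheme are illustrative, and the actual GPU shared-memory handling is omitted):

    ```python
    # Within a block of 2w+1 points, an explicit 3-point stencil can be
    # advanced w steps before any neighbor communication: each step loses
    # one point per side, and the apex value at t = w needs no halo data.
    import numpy as np

    def step(u, r=0.25):
        """One explicit heat-equation step on the interior of u."""
        return u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

    def swept_triangle(block):
        """Advance a block as far as the shrinking stencil allows."""
        levels = [block]
        while len(levels[-1]) >= 3:
            levels.append(step(levels[-1]))
        return levels            # triangle of partial results

    u0 = np.sin(np.linspace(0.0, np.pi, 9))   # 9 points -> 4 local steps
    for k, lvl in enumerate(swept_triangle(u0)):
        print(f"t={k}: {len(lvl)} points")
    ```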

  6. Exact solution of three-dimensional transport problems using one-dimensional models. [in semiconductor devices

    NASA Technical Reports Server (NTRS)

    Misiakos, K.; Lindholm, F. A.

    1986-01-01

    Several parameters of certain three-dimensional semiconductor devices including diodes, transistors, and solar cells can be determined without solving the actual boundary-value problem. The recombination current, transit time, and open-circuit voltage of planar diodes are emphasized here. The resulting analytical expressions enable determination of the surface recombination velocity of shallow planar diodes. The method involves introducing corresponding one-dimensional models having the same values of these parameters.

  7. a Speculative Study on Negative-Dimensional Potential and Wave Problems by Implicit Calculus Modeling Approach

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Wang, Fajie

    Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require an explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems via the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open unexplored territory for research.
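
    The mechanism behind such a conjecture is that, in one standard normalization, the free-space fundamental solution of the Laplacian

    \Phi_d(r) = \frac{\Gamma\!\left(\tfrac{d}{2}-1\right)}{4\,\pi^{d/2}}\, r^{\,2-d}, \qquad d \neq 2,

    is analytic in the dimension d through the Gamma function, so it can be evaluated at negative d (away from the poles of \Gamma) and supplied to a boundary-discretization solver even though no explicit PDE is written down. The formula is quoted here for context and may differ from the authors' normalization.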

  8. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.

  9. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
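
    For reference, in one space dimension the source-free curl equations reduce to

    \partial_t E_y = -\frac{1}{\varepsilon}\,\partial_x H_z, \qquad \partial_t H_z = -\frac{1}{\mu}\,\partial_x E_y,

    and the characteristic variables w^{\pm} = E_y \pm Z H_z, with impedance Z = \sqrt{\mu/\varepsilon}, obey the decoupled advection equations \partial_t w^{\pm} \pm c\,\partial_x w^{\pm} = 0, c = 1/\sqrt{\mu\varepsilon}; this decoupling is what characteristic-based differencing exploits. (The field components and sign conventions here are one common choice, not necessarily the paper's.)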

  10. Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA

    NASA Astrophysics Data System (ADS)

    Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.

    2018-04-01

    Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of ⁴⁸Ca in long-running Chimera simulations.

  11. Numerical solution to the glancing sidewall oblique shock wave/turbulent boundary layer interaction in three dimension

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Benson, T. J.

    1983-01-01

    A supersonic three-dimensional viscous forward-marching computer design code called PEPSIS is used to obtain a numerical solution of the three-dimensional problem of the interaction of a glancing sidewall oblique shock wave and a turbulent boundary layer. Very good results are obtained for a test case that was run to investigate the use of the wall-function boundary-condition approximation for a highly complex three-dimensional shock-boundary layer interaction. Two additional test cases (coarse mesh and medium mesh) are run to examine the question of near-wall resolution when no-slip boundary conditions are applied. A comparison with experimental data shows that the PEPSIS code gives excellent results in general and is practical for three-dimensional supersonic inlet calculations.

  12. Numerical solution to the glancing sidewall oblique shock wave/turbulent boundary layer interaction in three-dimension

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Benson, T. J.

    1983-01-01

    A supersonic three-dimensional viscous forward-marching computer design code called PEPSIS is used to obtain a numerical solution of the three-dimensional problem of the interaction of a glancing sidewall oblique shock wave and a turbulent boundary layer. Very good results are obtained for a test case that was run to investigate the use of the wall-function boundary-condition approximation for a highly complex three-dimensional shock-boundary layer interaction. Two additional test cases (coarse mesh and medium mesh) are run to examine the question of near-wall resolution when no-slip boundary conditions are applied. A comparison with experimental data shows that the PEPSIS code gives excellent results in general and is practical for three-dimensional supersonic inlet calculations.

  13. Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions

    NASA Astrophysics Data System (ADS)

    Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.

    1992-09-01

    In stereotactic treatment planning the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time it is now possible to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.

  14. Determination of the temperature field of shell structures

    NASA Astrophysics Data System (ADS)

    Rodionov, N. G.

    1986-10-01

    A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.

  15. Thermally induced rarefied gas flow in a three-dimensional enclosure with square cross-section

    NASA Astrophysics Data System (ADS)

    Zhu, Lianhua; Yang, Xiaofan; Guo, Zhaoli

    2017-12-01

    Rarefied gas flow in a three-dimensional enclosure induced by a nonuniform temperature distribution is numerically investigated. The enclosure has a square channel-like geometry with alternately heated closed ends and lateral walls with a linear temperature distribution. A recently proposed implicit discrete velocity method with a memory reduction technique is used to numerically simulate the problem based on the nonlinear Shakhov kinetic equation. The Knudsen number dependencies of the vortex pattern, the slip velocity at the planar walls and edges, and the heat transfer are investigated. The influences of the temperature ratio imposed at the ends of the enclosure and of the geometric aspect ratio are also evaluated. The overall flow pattern shows similarities with those observed in two-dimensional configurations in the literature. However, features due to the three-dimensionality are observed, with vortices that were not identified in previous studies on similar two-dimensional enclosures at high Knudsen numbers and small aspect ratios.

  16. Mathematical modeling of heat transfer problems in the permafrost

    NASA Astrophysics Data System (ADS)

    Gornov, V. F.; Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.

    2014-11-01

    In this work we present results of numerical simulation of three-dimensional temperature fields in soils for various applied problems: a railway line under permafrost conditions with different geometries, a horizontal underground storage tunnel, and greenhouses of various designs in the Far North. The mathematical model of the process is described by a nonstationary heat equation with phase transitions of pore water. The numerical realization of the problem is based on the finite element method using the scientific computing library FEniCS. For the numerical calculations we use high-performance computing systems.
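
    A common way to write the governing equation with pore-water phase change is the apparent-heat-capacity form of the Stefan problem (whether the paper uses exactly this form is not stated in the abstract):

    \left(\rho c(T) + \rho L\,\delta_{\Delta}(T - T_{*})\right)\frac{\partial T}{\partial t} = \nabla\cdot\left(\lambda(T)\,\nabla T\right),

    where L is the latent heat of the pore water, T_{*} the phase-change temperature, and \delta_{\Delta} a Dirac delta smoothed over a small temperature interval \Delta so that standard finite elements apply.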

  17. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of the Lasso under long range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and we then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, and the n^(1/2-d)-consistency, of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of the Lasso in this setup is also analysed in a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
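
    A minimal simulation sketch of the second chapter's setting, Lasso selection with p > n when the errors form a long-memory moving average; the coefficient decay a_k ∝ k^(d-1), the penalty level, and all sizes are illustrative assumptions:

    ```python
    # Lasso model selection with long-memory moving-average errors.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n, p, d_mem = 500, 1000, 0.3          # p > n; memory parameter d

    # Long-memory moving average errors: e_t = sum_k a_k xi_{t-k},
    # with slowly decaying coefficients a_k ~ k^{d-1}.
    K = 2000
    a = np.arange(1, K + 1) ** (d_mem - 1.0)
    xi = rng.standard_normal(n + K)
    errors = np.array([a @ xi[t:t + K][::-1] for t in range(n)])
    errors /= errors.std()

    # Sparse truth: only 5 active coefficients.
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = 2.0
    y = X @ beta + errors

    fit = Lasso(alpha=0.15, max_iter=10_000).fit(X, y)
    support = np.flatnonzero(fit.coef_)
    print("selected features:", support[:10], "... total:", support.size)
    ```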

  18. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    PubMed Central

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses precise information on the local propagation speeds to avoid excessive numerical diffusion. Second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems is carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from more sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
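
    For context, semi-discrete central-upwind schemes of Kurganov type use a numerical flux of the generic form

    F_{j+1/2} = \frac{a^{+}F(u^{-}) - a^{-}F(u^{+})}{a^{+} - a^{-}} + \frac{a^{+}a^{-}}{a^{+} - a^{-}}\left(u^{+} - u^{-}\right),

    where u^{\pm} are the reconstructed states at the cell interface and a^{\pm} are one-sided bounds on the local propagation speeds. This scalar statement is for orientation only; it omits the SRHD-specific conserved-to-primitive variable recovery that makes the relativistic case difficult.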

  19. Research on parallel load sharing principle of piezoelectric six-dimensional heavy force/torque sensor

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Ying-jun; Jia, Zhen-yuan; Zhang, Jun; Qian, Min

    2011-01-01

    In the working process of huge heavy-load manipulators, such as free forging machines, hydraulic die-forging presses, forging manipulators, heavy grasping manipulators, and large-displacement manipulators, measurement of six-dimensional heavy force/torque and real-time force feedback at the operation interface are the basis for realizing coordinated operation control and force compliance control. They are also an effective way to raise control accuracy and achieve highly efficient manufacturing. To solve the dynamic measurement problem of six-dimensional time-varying heavy loads in extreme manufacturing processes, a novel principle of parallel load sharing for six-dimensional heavy force/torque is put forward. The measuring principle of the six-dimensional force sensor is analyzed, and its spatial model is built and decoupled. The load sharing ratios in the vertical and horizontal directions are analyzed and calculated. The mapping relationship between the six-dimensional heavy force/torque to be measured and the output force is established. The finite element model of the parallel piezoelectric six-dimensional heavy force/torque sensor is set up, and its static characteristics are analyzed with the ANSYS software. The main parameters affecting the load sharing ratio are analyzed, and load sharing experiments with different parallel-axis diameters are designed. The results show that the six-dimensional heavy force/torque sensor has good linearity, with nonlinearity errors of less than 1%. The parallel axis provides a good load sharing effect: the larger the diameter, the better the effect. The experimental results are in accordance with the FEM analysis. The sensor has the advantages of a large measuring range, good linearity, high natural frequency, and high rigidity, and it can be widely used in extreme environments for real-time, accurate measurement of six-dimensional time-varying huge loads on manipulators.

  20. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. It then becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N^3) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method of complexity O(N^2 log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.
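
    A minimal sketch of the baseline being accelerated, POD of a snapshot matrix computed via the thin SVD (the paper's wavelet best-basis method is a further, different approximation; the data and sizes below are illustrative):

    ```python
    # POD via the thin SVD: for n snapshots of N measurements the thin SVD
    # costs O(N n^2), versus O(N^3) for an eigendecomposition of the full
    # N x N covariance matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    N, n = 4096, 50                       # measurements per snapshot, snapshots

    # Snapshots that live near a 3-dimensional subspace, plus noise.
    basis = rng.standard_normal((N, 3))
    coeffs = rng.standard_normal((3, n))
    snapshots = basis @ coeffs + 0.01 * rng.standard_normal((N, n))

    # POD modes = left singular vectors; energies = squared singular values.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    print("captured energy of first 3 modes:", energy[:3].sum())
    ```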

  1. Numerical applications of the advective-diffusive codes for the inner magnetosphere

    NASA Astrophysics Data System (ADS)

    Aseev, N. A.; Shprits, Y. Y.; Drozdov, A. Y.; Kellerman, A. C.

    2016-11-01

    In this study we present analytical solutions for convection and diffusion equations. We gather here the analytical solutions for the one-dimensional convection equation, the two-dimensional convection problem, and the one- and two-dimensional diffusion equations. Using the obtained analytical solutions, we test the four-dimensional Versatile Electron Radiation Belt code (the VERB-4D code), which solves the modified Fokker-Planck equation with additional convection terms. The ninth-order upwind numerical scheme for the one-dimensional convection equation yields much more accurate results than the third-order scheme. The universal limiter eliminates unphysical oscillations generated by high-order linear upwind schemes. Decreasing the space step leads to convergence of the numerical solution of the two-dimensional diffusion equation with mixed terms to the analytical solution. We compare the results of the third- and ninth-order schemes applied to magnetospheric convection modeling. The results show significant differences in electron fluxes near geostationary orbit when different numerical schemes are used.
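
    A minimal example of the verification pattern described above, with a first-order upwind scheme standing in for the third- and ninth-order schemes of the paper: advect a pulse with periodic boundaries and measure the error against the exact translated profile (grid, CFL number, and initial profile are illustrative):

    ```python
    # First-order upwind for u_t + c u_x = 0, checked against the exact
    # analytical solution (the initial profile translated by c*t).
    import numpy as np

    c, L, nx, cfl = 1.0, 1.0, 200, 0.5
    dx = L / nx
    dt = cfl * dx / c
    x = np.linspace(0.0, L, nx, endpoint=False)

    u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial Gaussian pulse
    t = 0.0
    while t < 0.4:
        u = u - c * dt / dx * (u - np.roll(u, 1))   # periodic upwind step
        t += dt

    exact = np.exp(-200.0 * ((x - 0.3 - c * t) % L) ** 2)
    print("L2 error:", np.sqrt(dx * np.sum((u - exact) ** 2)))
    ```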

  2. The Griffiss Institute Summer Faculty Program

    DTIC Science & Technology

    2013-05-01

    can inherit the advantages of the static approach while overcoming its drawbacks. Our solution is centered on the following: (i) application-layer web...inverted pendulum balancing problem. In these challenging environments we show that our algorithm not only allows NEAT to scale to high-dimensional spaces

  3. Facile synthesis of tin dioxide-based high performance anodes for lithium ion batteries assisted by graphene gel

    NASA Astrophysics Data System (ADS)

    Wan, Yuanxin; Sha, Ye; Luo, Shaochuan; Deng, Weijia; Wang, Xiaoliang; Xue, Gi; Zhou, Dongshan

    2015-11-01

    Tin dioxide (SnO2) is an attractive material for anodes in energy storage devices, because it has four times the theoretical capacity of the prevalent anode material (graphite). The main obstacle hampering the practical application of SnO2 is the pulverization problem caused by the drastic volume change (∼300%) during lithium-ion insertion and extraction, which leads to loss of electrical conductivity, unstable solid-electrolyte interphase (SEI) formation, and consequently severe capacity fading during cycling. Here, we anchored SnO2 nanocrystals into a three-dimensional graphene gel network to tackle this problem. Owing to the three-dimensional (3-D) architecture, the huge volume change during cycling was accommodated by the large free space in the 3-D construction, resulting in a high capacity of 1090 mAh g-1 even after 200 cycles. What's more, at a higher current density of 5 A g-1, a reversible capacity of about 491 mAh g-1 was achieved with this electrode.

  4. Probabilistic classifiers with high-dimensional data

    PubMed Central

    Kim, Kyung In; Simon, Richard

    2011-01-01

    For medical classification problems, it is often desirable to have a probability associated with each class. Probabilistic classifiers have received relatively little attention for small n, large p classification problems despite their importance in medical decision making. In this paper, we introduce 2 criteria for the assessment of probabilistic classifiers, well-calibratedness and refinement, and develop corresponding evaluation measures. We evaluated several published high-dimensional probabilistic classifiers and developed 2 extensions of the Bayesian compound covariate classifier. Based on simulation studies and analysis of gene expression microarray data, we found that proper probabilistic classification is more difficult than deterministic classification. It is important to ensure that a probabilistic classifier is well calibrated, or at least not “anticonservative”, using the methods developed here. We provide this evaluation for several probabilistic classifiers and also evaluate their refinement as a function of sample size under weak and strong signal conditions. We also present a cross-validation method for evaluating the calibration and refinement of any probabilistic classifier on any data set. PMID:21087946
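
    A minimal sketch of the calibration half of such an assessment: bin the predicted class probabilities and compare each bin's mean prediction with the observed event rate (the synthetic miscalibrated classifier below is an illustrative assumption, not one of the authors' evaluation measures):

    ```python
    # Reliability check: within each probability bin, a well-calibrated
    # classifier's mean predicted probability matches the observed rate.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5000
    p_true = rng.uniform(0.0, 1.0, n)          # true class-1 probabilities
    y = rng.binomial(1, p_true)                # observed labels
    p_hat = p_true ** 1.3                      # a miscalibrated classifier

    bins = np.linspace(0.0, 1.0, 11)
    idx = np.digitize(p_hat, bins) - 1
    for b in range(10):
        m = idx == b
        if m.any():
            print(f"bin {b}: mean p_hat={p_hat[m].mean():.2f}  "
                  f"observed rate={y[m].mean():.2f}  n={m.sum()}")
    ```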

  5. Global analysis of an impulsive delayed Lotka-Volterra competition system

    NASA Astrophysics Data System (ADS)

    Xia, Yonghui

    2011-03-01

    In this paper, a retarded impulsive n-species Lotka-Volterra competition system with feedback controls is studied. Some sufficient conditions are obtained to guarantee the global exponential stability (GES) and global asymptotic stability (GAS) of a unique equilibrium for such a high-dimensional biological system. The problem considered in this paper is in many aspects more general and incorporates as special cases various problems which have been extensively studied in the literature. Moreover, applying the obtained results to some special cases, I derive some new criteria which generalize and greatly improve some well known results. A method is proposed to investigate biological systems subjected to the effect of both impulses and delays. The method is based on Banach fixed point theory and matrix spectral theory, as well as Lyapunov functions. Moreover, some novel analytic techniques are employed to study GAS and GES. It is believed that the method can be extended to other high-dimensional biological systems and complex neural networks. Finally, two examples show the feasibility of the results.

  6. The Position Control of the Surface Motor with the Poles Distribution of Triangular Lattice

    NASA Astrophysics Data System (ADS)

    Watada, Masaya; Katsuyama, Norikazu; Ebihara, Daiki

    Recently, high performance and high accuracy have been demanded of machine tools and industrial robots. Generally, when drive with many degrees of freedom is required in machine tools or industrial robots, it has been realized by using two or more motors. For example, two-dimensional positioning stages such as X-Y plotters or X-Y stages achieve two-dimensional drive by using one motor for each of the x and y directions. Using plural motors, however, makes the equipment large and the control system complicated. To address these problems, the Surface Motor (SFM), which can drive in two directions with only one motor, has been researched. The authors have proposed an SFM designed for wide-range movement and for application to curved surfaces. In this paper, the characteristics of micro-step drive under open-loop control are shown. Moreover, the introduction of closed-loop control for highly accurate positioning is examined, and the drive characteristics under each control are compared.

  7. Finite element analysis of steady and transiently moving/rolling nonlinear viscoelastic structure. II - Shell and three-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Kennedy, Ronald; Padovan, Joe

    1987-01-01

    In a three-part series of papers, a generalized finite element solution strategy is developed to handle traveling load problems in rolling, moving and rotating structure. The main thrust of this section consists of the development of three-dimensional and shell type moving elements. In conjunction with this work, a compatible three-dimensional contact strategy is also developed. Based on these modeling capabilities, extensive analytical and experimental benchmarking is presented. Such testing includes traveling loads in rotating structure as well as low- and high-speed rolling contact involving standing wave-type response behavior. These point to the excellent modeling capabilities of moving element strategies.

  8. An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.

    PubMed

    Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco

    2017-04-01

    In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
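
    For orientation, one common concave surrogate for the zero-norm (the paper employs a concave approximation of this general kind, though the exact choice is not given in the abstract) is

    \|w\|_0 = \#\{\, i : w_i \neq 0 \,\} \;\approx\; \sum_{i=1}^{n} \left(1 - e^{-\alpha |w_i|}\right), \qquad \alpha > 0,

    which recovers the zero-norm as \alpha \to \infty and can be smoothed further by replacing |w_i| with \sqrt{w_i^2 + \epsilon}.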

  9. Modeling change from large-scale high-dimensional spatio-temporal array data

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Pebesma, Edzer

    2014-05-01

    The massive data that come from Earth observation satellites and other sensors provide significant information for modeling global change. At the same time, the high dimensionality of the data has brought challenges to data acquisition, management, effective querying, and processing. In addition, the output of earth system modeling tends to be data intensive and needs methodologies for storage, validation, analysis, and visualization, e.g. as maps. An important proportion of earth system observations and simulated data can be represented as multi-dimensional array data, which have received increasing attention in big data management and spatio-temporal analysis. Case studies will be developed in the natural sciences, such as climate change, hydrological modeling, and sediment dynamics, for which addressing big data problems is necessary. Multi-dimensional array-based database management and analytics systems such as Rasdaman, SciDB, and R will be applied to these cases. From these studies we hope to learn the strengths and weaknesses of these systems, how they might work together, and how the semantics of array operations differ, through addressing the problems associated with big data. Research questions include:
    • How can we reduce dimensions spatially and temporally, or thematically?
    • How can we extend existing GIS functions to work on multidimensional arrays?
    • How can we combine data sets of different dimensionality or different resolutions?
    • Can map algebra be extended to an intelligible array algebra?
    • What are effective semantics for array programming of dynamic data driven applications?
    • In which sense are space and time special, as dimensions, compared to other properties?
    • How can we make the analysis of multi-spectral, multi-temporal and multi-sensor earth observation data easy?

  10. Thermal History and Mantle Dynamics of Venus

    NASA Technical Reports Server (NTRS)

    Hsui, Albert T.

    1997-01-01

    One objective of this research proposal is to develop a 3-D thermal history model for Venus. The basis of our study is a finite-element computer model to simulate thermal convection of fluids with highly temperature- and pressure-dependent viscosities in a three-dimensional spherical shell. A three-dimensional model for thermal history studies is necessary for the following reasons. To study planetary thermal evolution, one needs to consider the global heat budget of a planet throughout its evolutionary history; hence, three-dimensional models are necessary. This contrasts with studies of local phenomena or local structures, where models of lower dimension may be sufficient. There are different approaches to treating three-dimensional thermal convection problems. Each approach has its own advantages and disadvantages, so the choice among the various approaches is subjective and depends on the problem addressed. In our case, we are interested in the effects of viscosities that are highly temperature dependent and whose magnitudes within the computing domain can vary over many orders of magnitude. In order to resolve the rapid change of viscosities, small grid spacings are often necessary, and to optimize the amount of computing, variable grids become desirable. Thus, the finite-element numerical approach is chosen for its ability to place grid elements of different sizes over the complete computational domain. For this research proposal, we did not start from scratch and develop the finite element codes from the beginning. Instead, we adopted a finite-element model developed by Baumgardner, a collaborator on this research proposal, for three-dimensional thermal convection with constant viscosity. Over the duration supported by this research proposal, significant advances have been accomplished.

  11. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
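
    A typical verification workflow built on such an exact solution compares the numerical and exact profiles in a discrete norm and checks the observed order of accuracy under grid refinement. The sketch below is generic Python, not the ExactPack API; the error model in the demo is contrived purely to illustrate the calculation.

        import numpy as np

        def l1_error(numeric, exact, dx):
            """Discrete L1 error norm commonly used in code-verification studies."""
            return dx * np.abs(numeric - exact).sum()

        def observed_order(err_coarse, err_fine, refinement=2.0):
            """Observed order of accuracy from errors on two successive grids."""
            return np.log(err_coarse / err_fine) / np.log(refinement)

        # Synthetic demo: a hypothetical first-order-accurate "solver" whose error
        # is proportional to dx, compared against a known profile.
        x1 = np.linspace(0.0, 1.0, 101); dx1 = x1[1] - x1[0]
        x2 = np.linspace(0.0, 1.0, 201); dx2 = x2[1] - x2[0]
        exact = np.sin                                   # stand-in exact solution
        e1 = l1_error(np.sin(x1) + 0.01 * dx1, exact(x1), dx1)
        e2 = l1_error(np.sin(x2) + 0.01 * dx2, exact(x2), dx2)
        print(observed_order(e1, e2))                    # ~1 for this error model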

  12. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on validating the new adaptive methodology by computing demonstration problems and comparing the computed stress intensity factors with analytical results.

  13. A CFD analysis of blade row interactions within a high-speed axial compressor

    NASA Astrophysics Data System (ADS)

    Richman, Michael Scott

    Aircraft engine design presents many technical and financial hurdles. In an effort to streamline the design process, save money, and improve reliability and performance, many manufacturers are relying on computational fluid dynamic simulations. An overarching goal of the design process for military aircraft engines is to reduce size and weight while maintaining (or improving) reliability. Designers often turn to the compression system to accomplish this goal. As pressure ratios increase and the number of compression stages decreases, many problems arise; for example, stability and high-cycle fatigue (HCF) become significant as individual stage loading increases. CFD simulations have recently been employed to assist in the understanding of these aeroelastic problems. For accurate multistage blade row HCF prediction, it is imperative that advanced three-dimensional blade row unsteady aerodynamic interaction codes be validated with appropriate benchmark data. This research addresses this required validation process for TURBO, an advanced three-dimensional multi-blade row turbomachinery CFD code. The solution/prediction accuracy is characterized, identifying key flow field parameters driving the inlet guide vane (IGV) and stator response to the rotor-generated forcing functions. The result is a quantified evaluation of the ability of TURBO to predict not only the fundamental flow field characteristics but also the three-dimensional blade loading.

  14. Classification of holter registers by dynamic clustering using multi-dimensional particle swarm optimization.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef

    2010-01-01

    In this paper, we address dynamic clustering in high-dimensional data or feature spaces as an optimization problem in which multi-dimensional particle swarm optimization (MD PSO) is used to determine the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques, we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is quite difficult and cumbersome, if not impossible. Therefore, the proposed system helps professionals to quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database in which the beats of each Holter register had been manually labeled by cardiologists. The selection of the right master key-beats is the key factor in achieving a highly accurate classification, and the proposed systematic approach produced results that were consistent with the manual labels with 99.5% average accuracy, demonstrating the efficiency of the system.
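
    For illustration, a basic particle swarm optimization over cluster-centroid positions can be sketched as follows; unlike MD PSO, this simplified version fixes the number of clusters k in advance and omits FGBF.

        import numpy as np

        def pso_cluster(X, k, n_particles=20, iters=100, w=0.72, c1=1.49, c2=1.49, seed=0):
            """Cluster X (n_samples, n_features) with basic PSO over centroid positions.
            Simplified stand-in for MD PSO: k is fixed and FGBF is omitted."""
            rng = np.random.default_rng(seed)
            lo, hi = X.min(0), X.max(0)
            pos = rng.uniform(lo, hi, size=(n_particles, k, X.shape[1]))
            vel = np.zeros_like(pos)

            def fitness(c):
                # Sum of squared distances of each sample to its nearest centroid
                d2 = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
                return d2.min(1).sum()

            pbest = pos.copy()
            pbest_f = np.array([fitness(c) for c in pos])
            g = pbest[pbest_f.argmin()].copy()          # global best centroids

            for _ in range(iters):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
                pos = pos + vel
                f = np.array([fitness(c) for c in pos])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = pos[improved], f[improved]
                g = pbest[pbest_f.argmin()].copy()

            labels = ((X[:, None, :] - g[None, :, :]) ** 2).sum(-1).argmin(1)
            return g, labels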

  15. CAFE: A New Relativistic MHD Code

    NASA Astrophysics Data System (ADS)

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.

    2015-06-01

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime, since we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula with linear, piecewise parabolic, and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and divergence cleaning methods to control the divergence-free magnetic field constraint.
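
    For reference, the HLLE flux named in this record has a compact closed form. The sketch below implements it for the non-relativistic 1D Euler equations (not the RMHD system CAFE solves), with simple two-wave speed estimates.

        import numpy as np

        GAMMA = 1.4

        def hlle_flux(UL, UR):
            """HLLE numerical flux for the 1D Euler equations, U = (rho, rho*u, E).
            Minimal non-relativistic sketch, not CAFE's RMHD implementation."""
            def primitives(U):
                rho, mom, E = U
                u = mom / rho
                p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
                return rho, u, p

            def flux(U, rho, u, p):
                return np.array([rho * u, rho * u * u + p, (U[2] + p) * u])

            rL, uL, pL = primitives(UL); rR, uR, pR = primitives(UR)
            cL, cR = np.sqrt(GAMMA * pL / rL), np.sqrt(GAMMA * pR / rR)
            SL = min(uL - cL, uR - cR)      # fastest left-going wave estimate
            SR = max(uL + cL, uR + cR)      # fastest right-going wave estimate
            FL, FR = flux(UL, rL, uL, pL), flux(UR, rR, uR, pR)
            if SL >= 0.0:
                return FL
            if SR <= 0.0:
                return FR
            return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)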

  16. Penalized gaussian process regression and classification for high-dimensional nonlinear data.

    PubMed

    Yi, G; Shi, J Q; Choi, T

    2011-12-01

    The model based on a Gaussian process (GP) prior and a kernel covariance function can be used to fit nonlinear data with multidimensional covariates. It has been used as a flexible nonparametric approach for curve fitting, classification, clustering, and other statistical problems, and has been widely applied to complex nonlinear systems in many different areas, particularly in machine learning. However, fitting the model is challenging for large-scale and high-dimensional data, for example, the meat data discussed in this article, which have 100 highly correlated covariates. For such data, the model suffers from large variance in parameter estimation and high predictive error, and the computation is numerically unstable. In this article, a penalized likelihood framework is applied to the GP-based model. Different penalties are investigated, and their suitability for the characteristics of GP models is discussed. The asymptotic properties are also discussed, with the relevant proofs. Several applications to real biomechanical and bioinformatics data sets are reported. © 2011, The International Biometric Society. No claim to original US government works.
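
    A minimal sketch of the penalized-likelihood idea, assuming an ARD squared-exponential kernel and an L2 penalty on the per-dimension inverse length-scales (the article's specific penalties and data are not reproduced here):

        import numpy as np
        from scipy.optimize import minimize

        def gp_fit_predict(X, y, Xs, lam=1.0, noise=1e-2):
            """GP regression with hyperparameters fit by penalized marginal likelihood.
            The penalty shrinks per-dimension kernel weights toward zero, damping the
            estimation variance caused by many correlated covariates."""
            n, d = X.shape

            def kernel(A, B, log_w):
                w = np.exp(log_w)       # per-dimension inverse squared length-scales
                D = ((A[:, None, :] - B[None, :, :]) ** 2 * w).sum(-1)
                return np.exp(-0.5 * D)

            def neg_pen_loglik(log_w):
                K = kernel(X, X, log_w) + noise * np.eye(n)
                L = np.linalg.cholesky(K)
                a = np.linalg.solve(L.T, np.linalg.solve(L, y))
                nll = 0.5 * y @ a + np.log(np.diag(L)).sum()
                return nll + lam * (np.exp(log_w) ** 2).sum()   # L2 penalty on weights

            res = minimize(neg_pen_loglik, np.zeros(d), method="L-BFGS-B")
            K = kernel(X, X, res.x) + noise * np.eye(n)
            Ks = kernel(Xs, X, res.x)
            return Ks @ np.linalg.solve(K, y)   # posterior predictive mean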

  17. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    PubMed

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean, and AUC evaluation criteria; thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
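
    A rough sketch of the asymmetric-bagging idea with random feature subspaces and SVM base classifiers is given below using scikit-learn; the FSS balance-aware subspace construction of the article is replaced by plain random subspaces.

        import numpy as np
        from sklearn.svm import SVC

        def asymmetric_bagging_predict(X, y, X_test, n_bags=15, n_features=50, seed=0):
            """Every bag keeps all minority samples, undersamples the majority class
            to match, and draws a random feature subset; an SVM is trained per bag
            and predictions are combined by majority vote. Assumes labels 0/1 with
            1 the minority class."""
            rng = np.random.default_rng(seed)
            pos = np.where(y == 1)[0]
            neg = np.where(y == 0)[0]
            votes = np.zeros(len(X_test))
            for _ in range(n_bags):
                bag = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
                feats = rng.choice(X.shape[1], size=min(n_features, X.shape[1]),
                                   replace=False)
                clf = SVC(kernel="rbf", gamma="scale").fit(X[np.ix_(bag, feats)], y[bag])
                votes += clf.predict(X_test[:, feats])
            return (votes / n_bags >= 0.5).astype(int)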

  18. Mixing Regimes in a Spatially Confined, Two-Dimensional, Supersonic Shear Layer

    DTIC Science & Technology

    1992-07-31

    [The archived abstract for this record is garbled by extraction; only fragments survive, including table-of-contents entries ("The Model", "The Model Problems", "The One-Dimensional Problem") and text noting that the report solves the time-dependent, two-dimensional, compressible Navier-Stokes equations to study spatially evolving mixing layers, including the effects of numerical diffusion on the spectrum, with reference to earlier studies by Guirguis et al. and Farouk et al.]

  19. Finite-dimensional integrable systems: A collection of research problems

    NASA Astrophysics Data System (ADS)

    Bolsinov, A. V.; Izosimov, A. M.; Tsonev, D. M.

    2017-05-01

    This article suggests a series of problems related to various algebraic and geometric aspects of integrability. They reflect some recent developments in the theory of finite-dimensional integrable systems such as bi-Poisson linear algebra, Jordan-Kronecker invariants of finite dimensional Lie algebras, the interplay between singularities of Lagrangian fibrations and compatible Poisson brackets, and new techniques in projective geometry.

  20. Principles for problem aggregation and assignment in medium scale multiprocessors

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs between workload balance and communication/synchronization costs are studied. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.

  1. The importance of spatial ability and mental models in learning anatomy

    NASA Astrophysics Data System (ADS)

    Chatterjee, Allison K.

    As a foundational course in medical education, gross anatomy serves to orient medical and veterinary students to the complex three-dimensional nature of the structures within the body. Understanding such spatial relationships is both fundamental and crucial for achievement in gross anatomy courses, and is essential for success as a practicing professional. Many things contribute to learning spatial relationships; this project focuses on a few key elements: (1) the type of multimedia resources, particularly computer-aided instructional (CAI) resources, medical students used to study and learn; (2) the influence of spatial ability on medical and veterinary students' gross anatomy grades and their mental models; and (3) how medical and veterinary students think about anatomy and describe the features of their mental models to represent what they know about anatomical structures. The use of computer-aided instruction (CAI) by gross anatomy students at Indiana University School of Medicine (IUSM) was assessed through a questionnaire distributed to the regional centers of the IUSM. Students reported using internet browsing, PowerPoint presentation software, and email on a daily basis to study gross anatomy. This study reveals that first-year medical students at the IUSM make limited use of CAI to study gross anatomy. Such studies emphasize the importance of examining students' use of CAI to study gross anatomy prior to the development and integration of electronic media into the curriculum, and they may be important in future decisions regarding the development of alternative learning resources. In order to determine how students think about anatomical relationships and describe the features of their mental models, personal interviews were conducted with select students based on students' ROT scores. Five typologies of the characteristics of students' mental models were identified and described: spatial thinking, kinesthetic approach, identification of anatomical structures, problem solving strategies, and study methods. Students with different levels of spatial ability visualize and think about anatomy in qualitatively different ways, which is reflected by the features of their mental models. Low spatial ability students thought about and used two-dimensional images from the textbook. They possessed basic two-dimensional models of anatomical structures; they placed emphasis on diagrams and drawings in their studies; and they re-read anatomical problems many times before answering. High spatial ability students thought fully in three dimensions and imagined rotation and movement of the structures; they made use of many types of images and text as they studied and solved problems. They possessed elaborate three-dimensional models of anatomical structures which they were able to manipulate to solve problems; and they integrated diagrams, drawings, and written text in their studies. Middle spatial ability students were a mix between low and high spatial ability students. They imagined two-dimensional images popping out of the flat paper to become more three-dimensional, but still relied on drawings and diagrams. Additionally, high spatial ability students used a higher proportion of anatomical terminology than low or middle spatial ability students. This provides additional support to the premise that high spatial students' mental models are a complex mixture of imagistic and propositional representations that incorporate correct anatomical terminology. Low spatial ability students focused on the function of structures and ways to group information primarily for the purpose of recall. This supports the theory that low spatial students' mental models are characterized more by imagistic representations that are general in nature. (Abstract shortened by UMI.)

  2. Direct solution of the H(1s)-H + long-range interaction problem in momentum space

    NASA Astrophysics Data System (ADS)

    Koga, Toshikatsu

    1985-02-01

    Perturbation equations for the H(1s)-H+ long-range interaction are solved directly in momentum space up to the fourth order with respect to the reciprocal of the internuclear distance. As in the hydrogen atom problem, the Fock transformation is used which projects the momentum vector of an electron from the three-dimensional hyperplane onto the four-dimensional hypersphere. Solutions are given as linear combinations of several four-dimensional spherical harmonics. The present results add an example to the momentum-space solution of the nonspherical potential problem.

  3. REVIEWS OF TOPICAL PROBLEMS: Global phase-stable radiointerferometric systems

    NASA Astrophysics Data System (ADS)

    Dravskikh, A. F.; Korol'kov, Dimitrii V.; Pariĭskiĭ, Yu N.; Stotskiĭ, A. A.; Finkel'steĭn, A. M.; Fridman, P. A.

    1981-12-01

    We discuss from a unified standpoint the possibility of building a phase-stable interferometric system with very long baselines that operates around the clock with real-time data processing. The various problems involved in the realization of this idea are discussed: the methods of suppression of instrumental and tropospheric phase fluctuations, the methods for constructing two-dimensional images and determining the coordinates of radio sources with high angular resolution, and the problem of the optimal structure of the interferometric system. We review in detail the scientific problems from the various branches of natural science (astrophysics, cosmology, geophysics, geodynamics, astrometry, etc.) whose solution requires superhigh angular resolution.

  4. On the theory of oscillating airfoils of finite span in subsonic compressible flow

    NASA Technical Reports Server (NTRS)

    Reissner, Eric

    1950-01-01

    The problem of oscillating lifting surface of finite span in subsonic compressible flow is reduced to an integral equation. The kernel of the integral equation is approximated by a simpler expression, on the basis of the assumption of sufficiently large aspect ratio. With this approximation the double integral occurring in the formulation of the problem is reduced to two single integrals, one of which is taken over the chord and the other over the span of the lifting surface. On the basis of this reduction the three-dimensional problem appears separated into two two-dimensional problems, one of them being effectively the problem of two-dimensional flow and the other being the problem of spanwise circulation distribution. Earlier results concerning the oscillating lifting surface of finite span in incompressible flow are contained in the present more general results.

  5. Extension of modified power method to two-dimensional problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Peng; Lee, Hyunsuk

    2016-09-01

    In this study, the generalized modified power method was extended to two-dimensional problems. A direct application of the method to two-dimensional problems was shown to be unstable when the number of requested eigenmodes is larger than a certain problem dependent number. The root cause of this instability has been identified as the degeneracy of the transfer matrix. In order to resolve this instability, the number of sub-regions for the transfer matrix was increased to be larger than the number of requested eigenmodes; and a new transfer matrix was introduced accordingly which can be calculated by the least square method. The stability of the new method has been successfully demonstrated with a neutron diffusion eigenvalue problem and the 2D C5G7 benchmark problem.
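
    As background, the unaccelerated version of the underlying idea, extracting several dominant eigenmodes of a symmetric operator by power iteration with deflation, can be sketched as follows; the transfer-matrix acceleration and least-squares construction described in the record are not reproduced.

        import numpy as np

        def power_modes(A, n_modes=3, iters=2000, tol=1e-10, seed=0):
            """Dominant eigenpairs of a symmetric matrix A by power iteration with
            deflation -- a plain-vanilla stand-in for the modified power method."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            vals, vecs = [], []
            for _ in range(n_modes):
                x = rng.standard_normal(n)
                for _ in range(iters):
                    for v in vecs:                     # deflate converged modes
                        x -= (v @ x) * v
                    x_new = A @ x
                    x_new /= np.linalg.norm(x_new)
                    if np.linalg.norm(x_new - x) < tol:
                        x = x_new
                        break
                    x = x_new
                vals.append(x @ A @ x)                 # Rayleigh quotient
                vecs.append(x)
            return np.array(vals), np.array(vecs)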

  6. Large-angle slewing maneuvers for flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Chun, Hon M.; Turner, James D.

    1988-01-01

    A new class of closed-form solutions for finite-time linear-quadratic optimal control problems is presented. The solutions involve Potter's solution for the differential matrix Riccati equation, which assumes the form of a steady-state plus transient term. Illustrative examples are presented which show that the new solutions are more computationally efficient than alternative solutions based on the state transition matrix. As an application of the closed-form solutions, the neighboring extremal path problem is presented for a spacecraft retargeting maneuver where a perturbed plant with off-nominal boundary conditions now follows a neighboring optimal trajectory. The perturbation feedback approach is further applied to three-dimensional slewing maneuvers of large flexible spacecraft. For this problem, the nominal solution is the optimal three-dimensional rigid body slew. The perturbation feedback then limits the deviations from this nominal solution due to the flexible body effects. The use of frequency shaping in both the nominal and perturbation feedback formulations reduces the excitation of high-frequency unmodeled modes. A modified Kalman filter is presented for estimating the plant states.

  7. Artificial intelligence in robot control systems

    NASA Astrophysics Data System (ADS)

    Korikov, A.

    2018-05-01

    This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that works intelligently and optimally. The author proposes to use optimization methods for the design of intelligent robot control systems. The article formalizes the problems of robotic control system design as a class of extremum problems with constraints. Solving these problems is rather complicated due to their high dimensionality, polymodality, and a priori uncertainty. Decomposition of the extremum problems according to the method suggested by the author reduces them to a sequence of simpler problems that can be successfully solved by modern computing technology. Several possible approaches to solving such problems are considered in the article.

  8. Time-delayed feedback technique for suppressing instabilities in time-periodic flow

    NASA Astrophysics Data System (ADS)

    Shaabani-Ardali, Léopold; Sipp, Denis; Lesshafft, Lutz

    2017-11-01

    A numerical method is presented that allows the computation of time-periodic flow states, even in the presence of hydrodynamic instabilities. The method is based on filtering nonharmonic components by way of delayed feedback control, as introduced by Pyragas [Phys. Lett. A 170, 421 (1992), 10.1016/0375-9601(92)90745-8]. Its use in flow problems is demonstrated here for the case of a periodically forced laminar jet, subject to a subharmonic instability that gives rise to vortex pairing. The optimal choice of the filter gain, which is a free parameter in the stabilization procedure, is investigated in the context of a low-dimensional model problem, and it is shown that this model predicts the filter performance in the high-dimensional flow system well. Vortex pairing in the jet is efficiently suppressed, so that the unstable periodic flow state in response to harmonic forcing is accurately retrieved. The procedure is straightforward to implement inside any standard flow solver. Memory requirements for the delayed feedback control can be significantly reduced by means of time interpolation between checkpoints. Finally, the method is extended to the treatment of periodic problems where the frequency is not known a priori. This procedure is demonstrated for a three-dimensional cubic lid-driven cavity in supercritical conditions.
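
    The core of the Pyragas filter is a forcing term proportional to x(t-T) - x(t), which vanishes identically on any T-periodic state. A toy sketch on a forced, weakly damped Duffing oscillator (our choice of test system, not the jet or cavity flows of the record):

        import numpy as np

        def pyragas_demo(gain=0.3, steps=60000, dt=0.01, omega=1.0):
            """Time-delayed feedback on a toy oscillator: the force
            gain*(x(t-T) - x(t)) filters nonharmonic (e.g. subharmonic)
            components without altering the T-periodic response."""
            T = 2.0 * np.pi / omega                 # forcing period
            nT = int(round(T / dt))                 # delay expressed in steps
            hist = np.zeros(steps)
            x, v = 0.0, 0.0
            for n in range(steps):
                delayed = hist[n - nT] if n >= nT else x
                force = np.sin(omega * n * dt) + gain * (delayed - x)
                a = -x - 0.05 * v - x**3 + force    # weakly damped Duffing dynamics
                v += dt * a
                x += dt * v
                hist[n] = x
            return hist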

  9. One-dimensional high-order compact method for solving Euler's equations

    NASA Astrophysics Data System (ADS)

    Mohamad, M. A. H.; Basri, S.; Basuno, B.

    2012-06-01

    In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most famous and relevant are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the receding-flow and shock-wave shock tube problems, were not investigated in that work. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting; specifically, the convective flux terms are discretized with a hybrid flux-vector splitting known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting and the robustness of flux-vector splitting. The AUSM scheme built on the third-order compact approximation of the finite difference equations was then analyzed in detail. For the first-order schemes in one dimension, an explicit time integration method is adopted. The developed and modified source code for one-dimensional flow is validated with four test cases: the unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results were also used to verify that the Riemann problem is correctly identified. Further analysis compared the characteristics of the AUSM scheme against experimental results from previous works, as well as against computational results generated by the van Leer, KFVS, and AUSMPW schemes. Furthermore, there is a remarkable improvement with the extension of the AUSM scheme from first-order to third-order accuracy in terms of shocks, contact discontinuities, and rarefaction waves.

  10. External Boundary Conditions for Three-Dimensional Problems of Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1997-01-01

    We consider an unbounded steady-state flow of viscous fluid over a three-dimensional finite body or configuration of bodies. For the purpose of solving this flow problem numerically, we discretize the governing equations (Navier-Stokes) on a finite-difference grid. The grid obviously cannot stretch from the body up to infinity, because the number of discrete variables in that case would not be finite. Therefore, prior to the discretization we truncate the original unbounded flow domain by introducing some artificial computational boundary at a finite distance from the body. Typically, the artificial boundary is introduced in a natural way as the external boundary of the domain covered by the grid. The flow problem formulated only on the finite computational domain rather than on the original infinite domain is clearly subdefinite unless some artificial boundary conditions (ABC's) are specified at the external computational boundary. Similarly, the discretized flow problem is subdefinite (i.e., lacks equations with respect to unknowns) unless a special closing procedure is implemented at this artificial boundary. The closing procedure in the discrete case is called the ABC's as well. In this paper, we present an innovative approach to constructing highly accurate ABC's for three-dimensional flow computations. The approach extends our previous technique developed for the two-dimensional case; it employs the finite-difference counterparts to Calderon's pseudodifferential boundary projections calculated in the framework of the difference potentials method (DPM) by Ryaben'kii. The resulting ABC's appear spatially nonlocal but particularly easy to implement along with the existing solvers. The new boundary conditions have been successfully combined with the NASA-developed production code TLNS3D and used for the analysis of wing-shaped configurations in subsonic (including incompressible limit) and transonic flow regimes. As demonstrated by the computational experiments and comparisons with the standard (local) methods, the DPM-based ABC's allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable increase of the convergence rate of multigrid iterations.

  11. A coupled sharp-interface immersed boundary-finite-element method for flow-structure interaction with application to human phonation.

    PubMed

    Zheng, X; Xue, Q; Mittal, R; Beilamowicz, S

    2010-11-01

    A new flow-structure interaction method is presented, which couples a sharp-interface immersed boundary method flow solver with a finite-element method based solid dynamics solver. The coupled method provides robust and high-fidelity solutions for complex flow-structure interaction (FSI) problems such as those involving three-dimensional flow and viscoelastic solids. The FSI solver is used to simulate flow-induced vibrations of the vocal folds during phonation. Both two- and three-dimensional models have been examined, and qualitative as well as quantitative comparisons have been made with established results in order to validate the solver. The solver is used to study the onset of phonation in a two-dimensional laryngeal model and the dynamics of the glottal jet in a three-dimensional model, and results from these studies are also presented.

  12. Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Harris, James Austin; Hix, William Raphael

    Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.

  13. Interior radiances in optically deep absorbing media. I - Exact solutions for one-dimensional model.

    NASA Technical Reports Server (NTRS)

    Kattawar, G. W.; Plass, G. N.

    1973-01-01

    An exact analytic solution to the one-dimensional scattering problem with arbitrary single scattering albedo and arbitrary surface albedo is presented. Expressions are given for the emergent flux from a homogeneous layer, the internal flux within the layer, and the radiative heating. A comparison of these results with the values calculated from the matrix operator theory indicates an exceedingly high accuracy. A detailed study is made of the error in the matrix operator results and its dependence on the accuracy of the starting value.

  14. Compton imaging tomography technique for NDE of large nonuniform structures

    NASA Astrophysics Data System (ADS)

    Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz

    2011-09-01

    In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.

  15. MCDU-8-A Computer Code for One-Dimensional Blast Wave Problems

    DTIC Science & Technology

    1975-07-01

    [The archived abstract for this record is OCR-garbled report-documentation text; only fragments survive. These indicate that the medium surrounding the explosion is assumed to be air obeying an ideal gas equation of state with a constant specific heat ratio of 1.4, and that sample problems include the sudden release of a highly compressed air sphere and a blast wave resulting from the detonation of a Pentolite sphere.]

  16. Finite element analysis in fluids; Proceedings of the Seventh International Conference on Finite Element Methods in Flow Problems, University of Alabama, Huntsville, Apr. 3-7, 1989

    NASA Technical Reports Server (NTRS)

    Chung, T. J. (Editor); Karr, Gerald R. (Editor)

    1989-01-01

    Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.

  17. Cosmology and the large-mass problem of the five-dimensional Kaluza-Klein theory

    NASA Astrophysics Data System (ADS)

    Lukács, B.; Pacher, T.

    1985-12-01

    It is shown that in five-dimensional Kaluza-Klein theories the large-mass problem leads to a circulus vitiosus: the huge recent value of e²/G produces the large-mass problem, which restricts the ratio e²/Gm² to the order of unity, in contradiction with the present value of order 10⁴⁰ for elementary particles.

  18. Unsteady, one-dimensional gas dynamics computations using a TVD type sequential solver

    NASA Technical Reports Server (NTRS)

    Thakur, Siddharth; Shyy, Wei

    1992-01-01

    The efficacy of high resolution convection schemes to resolve sharp gradients in unsteady, 1D flows is examined using the TVD concept based on a sequential solution algorithm. Two unsteady flow problems are considered: the interaction of the various waves in a shock tube with closed reflecting ends, and the unsteady gas dynamics in a tube with closed ends subject to an initial pressure perturbation. It is concluded that high accuracy convection schemes in a sequential solution framework are capable of resolving discontinuities in unsteady flows involving complex gas dynamics. However, a sufficient amount of dissipation is required to suppress oscillations near discontinuities in the sequential approach, which leads to smearing of the solution profiles.

  19. A fast numerical method for the valuation of American lookback put options

    NASA Astrophysics Data System (ADS)

    Song, Haiming; Zhang, Qi; Zhang, Ran

    2015-10-01

    A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it into a two-dimensional parabolic linear complementarity problem (LCP) on an unbounded domain. The numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. Furthermore, the variational inequality (VI) form corresponding to the one-dimensional bounded LCP is derived. The resulting bounded VI is discretized by a finite element method. Meanwhile, the stability of the semi-discrete solution and the symmetric positive definiteness of the fully discrete matrix are established for the bounded VI. The discretized VI related to options is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.
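
    For orientation, a discretized obstacle problem of this kind can be written as the LCP A x >= b, x >= payoff, (x - payoff)^T (A x - b) = 0. The sketch below solves it with projected SOR, a simpler stand-in for the projection and contraction method used in the paper:

        import numpy as np

        def projected_sor(A, b, payoff, omega=1.2, tol=1e-10, max_iter=10000):
            """Solve the LCP  A x >= b,  x >= payoff,  (x - payoff)^T (A x - b) = 0
            by projected SOR; A is assumed symmetric positive definite (e.g., the
            finite element matrix of the truncated pricing problem)."""
            x = payoff.astype(float).copy()
            n = len(b)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    # Gauss-Seidel residual with the i-th unknown removed
                    r = b[i] - A[i] @ x + A[i, i] * x[i]
                    # Relax, then project onto the constraint x_i >= payoff_i
                    x[i] = max(payoff[i], (1 - omega) * x[i] + omega * r / A[i, i])
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    break
            return x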

  20. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Zachary M.; Wendelberger, Joanne Roth

    The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, being able to navigate parameter-space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the desired parameter set.

  1. Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores

    NASA Astrophysics Data System (ADS)

    Schindler, Jan-Torge; Green, Elizabeth M.; Arnett, W. David

    2017-10-01

    Asteroseismology of non-radial pulsations in Hot B Subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high precision light curves allow for the analysis of pressure and gravity mode pulsations to probe the structure of sdB stars deep into the convective core. As such, asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches to convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and the entrainment at the convective boundary. Unfortunately, no consistent algorithm for convective mixing exists to solve the problem, introducing uncertainties into estimates of stellar ages. Three-dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In the search for a consistent prescription of convection in one-dimensional stellar evolution models, guidance from three-dimensional simulations and asteroseismological results is indispensable.

  2. Wave Phenomena in an Acoustic Resonant Chamber

    ERIC Educational Resources Information Center

    Smith, Mary E.; And Others

    1974-01-01

    Discusses the design and operation of a high Q acoustical resonant chamber which can be used to demonstrate wave phenomena such as three-dimensional normal modes, Q values, densities of states, changes in the speed of sound, Fourier decomposition, damped harmonic oscillations, sound-absorbing properties, and perturbation and scattering problems.…

  3. Stanford automatic photogrammetry research

    NASA Technical Reports Server (NTRS)

    Quam, L. H.; Hannah, M. J.

    1974-01-01

    A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.

  4. A Day in the Life

    ERIC Educational Resources Information Center

    Dunn, Tracie

    2009-01-01

    High-school students often tie the definition of art to a two-dimensional surface, obstructing possible solutions to visual problem-solving and restricting creative thinking. In this article, the author describes a project that inspired students to view arts as a social event: installation art. From a contemporary point of view, installation art…

  5. Big Data Goes Personal: Privacy and Social Challenges

    ERIC Educational Resources Information Center

    Bonomi, Luca

    2015-01-01

    The Big Data phenomenon is posing new challenges in our modern society. In addition to requiring information systems to effectively manage high-dimensional and complex data, the privacy and social implications associated with the data collection, data analytics, and service requirements create new important research problems. First, the high…

  6. (N+1)-dimensional fractional reduced differential transform method for fractional order partial differential equations

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad; Lu, Dianchen; Wang, Jun

    2017-07-01

    In this paper, we extend the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional order partial differential equations (PDEs) can be resolved effectively. The most distinct aspect of this method is that no prescribed assumptions are required; the heavy computational effort is reduced and round-off errors are avoided. We apply the proposed scheme to several initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional order problems occurring in engineering, physics, and related fields. Numerical results are obtained for verification and demonstration purposes using Mathematica software.

  7. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
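
    A minimal sketch of the transform: projecting templates with an i.i.d. Gaussian random matrix approximately preserves pairwise distances (the Johnson-Lindenstrauss property), while changing the seed yields a new, revocable template. Data shapes below are hypothetical.

        import numpy as np

        def random_project(X, k, seed=None):
            """Project rows of X onto k dimensions with a Gaussian random matrix of
            i.i.d. N(0, 1/k) entries, which preserves pairwise distances in
            expectation."""
            rng = np.random.default_rng(seed)
            R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(X.shape[1], k))
            return X @ R

        # Changeability: re-issuing a template amounts to drawing a new seed.
        rng = np.random.default_rng(0)
        faces = rng.random((10, 4096))               # hypothetical 64x64 face vectors
        t_old = random_project(faces, 128, seed=1)   # original enrolled templates
        t_new = random_project(faces, 128, seed=2)   # revoked and re-enrolled templates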

  8. Understanding 3D human torso shape via manifold clustering

    NASA Astrophysics Data System (ADS)

    Li, Sheng; Li, Peng; Fu, Yun

    2013-05-01

    Discovering the variations in human torso shape plays a key role in many design-oriented applications, such as suit designing. With recent advances in 3D surface imaging technologies, people can obtain 3D human torso data that provide more information than traditional measurements. However, how to find different human shapes from 3D torso data is still an open problem. In this paper, we propose to use spectral clustering approach on torso manifold to address this problem. We first represent high-dimensional torso data in a low-dimensional space using manifold learning algorithm. Then the spectral clustering method is performed to get several disjoint clusters. Experimental results show that the clusters discovered by our approach can describe the discrepancies in both genders and human shapes, and our approach achieves better performance than the compared clustering method.
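
    The two-stage pipeline (manifold embedding, then spectral clustering) can be sketched with scikit-learn as follows; a synthetic Swiss-roll point cloud stands in for the torso data, and the specific manifold learner used by the authors may differ.

        import numpy as np
        from sklearn.datasets import make_swiss_roll
        from sklearn.manifold import Isomap
        from sklearn.cluster import SpectralClustering

        # Stand-in data: a synthetic manifold-structured point cloud replaces the
        # record's high-dimensional 3D torso representations.
        X, _ = make_swiss_roll(n_samples=1000, random_state=0)

        # Step 1: learn a low-dimensional embedding of the high-dimensional shapes
        Z = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

        # Step 2: spectral clustering in the embedded space yields disjoint groups
        labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                                    random_state=0).fit_predict(Z)
        print(np.bincount(labels))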

  9. Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Yu, Z.

    The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF that can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of computer generated hologram and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem with potential applications to astronomical imaging and 3D space-debris localization.

  10. Guided eruption of palatally impacted canines through combined use of 3-dimensional computerized tomography scans and the easy cuspid device.

    PubMed

    Caprioglio, Alberto; Siani, Lea; Caprioglio, Claudia

    2007-01-01

    The permanent maxillary canine has a high incidence of impaction. In the clinical treatment of impaction, the first problem is diagnosis and localization. The new diagnostic 3-dimensional systems shown in this article provide valid support in understanding anatomic connections and planning the movements needed for orthodontic correction. Thus, the clinician can reduce the incidence of iatrogenic damage of adjacent structures. This article reviews several biomedical systems for guided eruption of palatally impacted canines and discusses a new device for guided eruption of the surgically disimpacted tooth. This device, called Easy Cuspid, is designed to reduce recognized problems with reaction forces through a simple method. A clinical case of bilateral impaction of the permanent maxillary canines shows the application of the diagnostic method and the biomechanical system, Easy Cuspid.

  11. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  12. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  13. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
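
    As a minimal illustration of bounded-variation regularization for blocky images, the sketch below runs smoothed gradient descent on the ROF-type functional 0.5*||u - f||^2 + lam*TV(u); the cited work instead analyzes the optimality system and uses primal-dual algorithms.

        import numpy as np

        def tv_denoise(f, lam=0.1, tau=0.01, iters=200):
            """Gradient descent on a smoothed total-variation denoising functional.
            f is a noisy 2D float image; eps smooths the nondifferentiable TV term."""
            eps = 1e-6
            u = f.astype(float).copy()
            for _ in range(iters):
                ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
                uy = np.diff(u, axis=0, append=u[-1:, :])
                mag = np.sqrt(ux**2 + uy**2 + eps)
                px, py = ux / mag, uy / mag                 # dual-like variables
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= tau * ((u - f) - lam * div)            # descent step
            return u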

  14. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  15. State-of-charge estimation in lithium-ion batteries: A particle filter approach

    NASA Astrophysics Data System (ADS)

    Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.

    2016-11-01

    The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The pseudo-two-dimensional (P2D) model is one that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. The partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo-two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
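
    A generic bootstrap particle filter, the starting point that the article's tether-particle, coordinate-sweeping scheme improves upon, can be sketched as follows (model functions f, h and noise covariances are user-supplied; nothing battery-specific is assumed):

        import numpy as np

        def bootstrap_pf(y_obs, f, h, x0_sample, Q, R, n_particles=500, seed=0):
            """Bootstrap particle filter for x_{t+1} = f(x_t) + w, y_t = h(x_t) + v,
            with w ~ N(0, Q) and v ~ N(0, R). Returns posterior-mean estimates."""
            rng = np.random.default_rng(seed)
            X = x0_sample(n_particles, rng)              # (N, dim) initial particles
            Rinv = np.linalg.inv(R)
            est = []
            for y in y_obs:
                # Propagate particles through the (vectorized) dynamics plus noise
                X = f(X) + rng.multivariate_normal(np.zeros(Q.shape[0]), Q, n_particles)
                resid = y - h(X)                         # (N, dim_y) innovations
                logw = -0.5 * np.einsum('ni,ij,nj->n', resid, Rinv, resid)
                w = np.exp(logw - logw.max()); w /= w.sum()
                est.append(w @ X)                        # weighted posterior mean
                idx = rng.choice(n_particles, n_particles, p=w)  # resampling
                X = X[idx]
            return np.array(est)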

  16. Energy-efficient spatial-domain-based hybrid multidimensional coded-modulations enabling multi-Tb/s optical transport.

    PubMed

    Djordjevic, Ivan B

    2011-08-15

    In addition to capacity, the future high-speed optical transport networks will also be constrained by energy consumption. In order to solve the capacity and energy constraints simultaneously, in this paper we propose the use of energy-efficient hybrid D-dimensional signaling (D>4) by employing all available degrees of freedom for conveyance of the information over a single carrier including amplitude, phase, polarization and orbital angular momentum (OAM). Given the fact that the OAM eigenstates, associated with the azimuthal phase dependence of the complex electric field, are orthogonal, they can be used as basis functions for multidimensional signaling. Since the information capacity is a linear function of number of dimensions, through D-dimensional signal constellations we can significantly improve the overall optical channel capacity. The energy-efficiency problem is solved, in this paper, by properly designing the D-dimensional signal constellation such that the mutual information is maximized, while taking the energy constraint into account. We demonstrate high-potential of proposed energy-efficient hybrid D-dimensional coded-modulation scheme by Monte Carlo simulations. © 2011 Optical Society of America

  17. Applications of an exponential finite difference technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.; Keith, T.G. Jr.

    1988-07-01

    An exponential finite difference scheme first presented by Bhattacharya for one-dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one-dimensional cylindrical coordinates and was applied to two- and three-dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one- and two-dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available, or to results obtained by other numerical methods.
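    The abstract does not reproduce the scheme itself; as a minimal sketch, the update below uses the multiplicative exponential form in which this scheme is commonly quoted (an assumption here, not a quotation of Bhattacharya's formula), applied to 1-D unsteady heat conduction with fixed-temperature ends. Expanding the exponential to first order recovers the classical explicit (FTCS) scheme.

    ```python
    import numpy as np

    # Assumed exponential update for u_t = alpha * u_xx:
    #   u_i^{n+1} = u_i^n * exp[ r * (u_{i+1} - 2 u_i + u_{i-1}) / u_i ],
    # with r = alpha*dt/dx**2 and a strictly positive field u.
    alpha, nx, dt, n_steps = 1.0, 51, 1.0e-5, 2000
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    r = alpha * dt / dx**2

    u = np.full(nx, 300.0)        # initial temperature (kept positive)
    u[0], u[-1] = 400.0, 350.0    # Dirichlet boundary values

    for _ in range(n_steps):
        lap = u[2:] - 2.0 * u[1:-1] + u[:-2]
        u[1:-1] *= np.exp(r * lap / u[1:-1])

    print(u[::10])   # relaxes toward the linear profile between 400 and 350
    ```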

  18. Classification of motor imagery tasks for BCI with multiresolution analysis and multiobjective feature selection.

    PubMed

    Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés

    2016-07-15

    Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with a relatively small number of training patterns, so that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered to improve the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed, with different classifier structures. They are evaluated by comparison with baseline methods that use a sparse representation of features or no feature selection. A statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the mean Kappa values obtained on the test patterns for each approach, demonstrates some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.

  19. WFIRST: Microlensing Analysis Data Challenge

    NASA Astrophysics Data System (ADS)

    Street, Rachel; WFIRST Microlensing Science Investigation Team

    2018-01-01

    WFIRST will produce thousands of high-cadence, high-photometric-precision lightcurves of microlensing events, from which a wealth of planetary and stellar systems will be discovered. However, the analysis of such lightcurves has historically been very time consuming and expensive in both labor and computing facilities. This poses a potential bottleneck to deriving the full science potential of the WFIRST mission. To address this problem, the WFIRST Microlensing Science Investigation Team is designing a series of data challenges to stimulate research on outstanding problems of microlensing analysis. These range from the classification and modeling of triple-lens events to methods for efficiently yet thoroughly searching a high-dimensional parameter space for the best-fitting models.

  20. High frequency vibration analysis by the complex envelope vectorization.

    PubMed

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure to solve high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, highlighting the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.

  1. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.

  2. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
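    The key TDRW ingredient, sampling the travel time over a fixed distance rather than the position after a fixed time step, can be sketched for 1-D advection-dispersion: the first-passage time over a distance L for drift v and dispersion coefficient D follows an inverse Gaussian law with mean L/v and shape L**2/(2D). This classical drift-diffusion result stands in here for the paper's approximate analytical solutions, and the parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    v, D, L, n_particles = 1.0, 0.05, 10.0, 100_000

    mean = L / v                  # mean arrival time
    shape = L**2 / (2.0 * D)      # inverse Gaussian shape parameter
    travel_times = rng.wald(mean, shape, size=n_particles)

    print("sampled mean arrival time: %.3f (theory %.3f)"
          % (travel_times.mean(), mean))
    print("sampled variance: %.4f (theory %.4f)"
          % (travel_times.var(), mean**3 / shape))
    ```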

  3. A Three-Dimensional Finite-Element Model for Simulating Water Flow in Variably Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.

    1986-12-01

    A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.

  4. Toward Theoretically Cycling-Stable Lithium-Sulfur Battery Using a Foldable and Compositionally Heterogeneous Cathode.

    PubMed

    Zhong, Lei; Yang, Kai; Guan, Ruiteng; Wang, Liangbin; Wang, Shuanjin; Han, Dongmei; Xiao, Min; Meng, Yuezhong

    2017-12-20

    Rechargeable lithium-sulfur (Li-S) batteries are expected to serve as new-generation electrical energy storage owing to their high theoretical energy density, cost effectiveness, and eco-friendliness. However, Li-S batteries still face problems for practical application, such as low sulfur utilization and unsatisfactory capacity retention. Herein, we designed and fabricated a foldable and compositionally heterogeneous three-dimensional sulfur cathode with an integrated sandwich structure. The electrical conductivity of the cathode is provided by carbons of three different dimensionalities, in which short-distance and long-distance pathways for electrons are provided by zero-dimensional Ketjen black (KB), one-dimensional activated carbon fiber (ACF), and two-dimensional graphene (G). The resultant three-dimensional sulfur cathode (T-AKG/KB@S) with an areal sulfur loading of 2 mg cm⁻² exhibits a high initial specific capacity, superior rate performance, and a reversible discharge capacity of up to 726 mAh g⁻¹ at 3.6 mA cm⁻², with an inappreciable capacity fading rate of 0.0044% per cycle after 500 cycles. Moreover, the cathode with a high areal sulfur loading of 8 mg cm⁻² also delivers a reversible discharge capacity of 938 mAh g⁻¹ at 0.71 mA cm⁻², with a capacity fading rate of 0.15% per cycle and a Coulombic efficiency of almost 100% after 50 cycles.

  5. Discovering biclusters in gene expression data based on high-dimensional linear geometries

    PubMed Central

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2008-01-01

    Background In DNA microarray experiments, discovering groups of genes that share similar transcriptional characteristics is instrumental in functional annotation, tissue classification and motif identification. However, in many situations a subset of genes only exhibits consistent pattern over a subset of conditions. Conventional clustering algorithms that deal with the entire row or column in an expression matrix would therefore fail to detect these useful patterns in the data. Recently, biclustering has been proposed to detect a subset of genes exhibiting consistent pattern over a subset of conditions. However, most existing biclustering algorithms are based on searching for sub-matrices within a data matrix by optimizing certain heuristically defined merit functions. Moreover, most of these algorithms can only detect a restricted set of bicluster patterns. Results In this paper, we present a novel geometric perspective for the biclustering problem. The biclustering process is interpreted as the detection of linear geometries in a high dimensional data space. Such a new perspective views biclusters with different patterns as hyperplanes in a high dimensional space, and allows us to handle different types of linear patterns simultaneously by matching a specific set of linear geometries. This geometric viewpoint also inspires us to propose a generic bicluster pattern, i.e., the linear coherent model that unifies the seemingly incompatible additive and multiplicative bicluster models. As a particular realization of our framework, we have implemented a Hough transform-based hyperplane detection algorithm. The experimental results on a human lymphoma gene expression dataset show that our algorithm can find biologically significant subsets of genes. Conclusion We have proposed a novel geometric interpretation of the biclustering problem. We have shown that many common types of bicluster are just different spatial arrangements of hyperplanes in a high dimensional data space. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis. PMID:18433477

  6. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces and an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  7. 2-dimensional implicit hydrodynamics on adaptive grids

    NASA Astrophysics Data System (ADS)

    Stökl, A.; Dorfi, E. A.

    2007-12-01

    We present a numerical scheme for two-dimensional hydrodynamics computations using a 2D adaptive grid together with an implicit discretization. The combination of these techniques has offered favorable numerical properties for a variety of one-dimensional astrophysical problems, which motivated us to generalize this approach to two-dimensional applications. Due to the different topological nature of 2D grids compared to 1D problems, grid adaptivity has to avoid severe grid distortions, which necessitates additional smoothing parameters in the formulation of a 2D adaptive grid. The concept of adaptivity is described in detail, and several test computations demonstrate the effectiveness of the smoothing. The coupled solution of this grid equation together with the equations of hydrodynamics is illustrated by computation of a 2D shock tube problem.

  8. Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2001-01-01

    A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems: a simple hill-climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver, and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
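    A minimal real-number-encoded GA of the kind described (selection, blend crossover, Gaussian mutation, elitism) might look like the sketch below; the sphere function is a hypothetical stand-in for the flow solvers used as objectives in the paper, and all operator settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def objective(pop):                      # sphere function, minimum at 0
        return np.sum(pop**2, axis=1)

    dim, pop_size, n_gen = 5, 40, 200
    lo, hi = -5.0, 5.0
    pop = rng.uniform(lo, hi, (pop_size, dim))

    for _ in range(n_gen):
        order = np.argsort(objective(pop))
        parents = pop[order[:pop_size // 2]]           # truncation selection
        i = rng.integers(0, len(parents), pop_size)    # random parent pairs
        j = rng.integers(0, len(parents), pop_size)
        w = rng.uniform(0.0, 1.0, (pop_size, 1))
        children = w * parents[i] + (1.0 - w) * parents[j]   # blend crossover
        children += rng.normal(0.0, 0.1, children.shape)     # Gaussian mutation
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                            # elitism: keep the best

    print("best objective value:", objective(pop).min())
    ```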

  9. High-Order Methods for Incompressible Fluid Flow

    NASA Astrophysics Data System (ADS)

    Deville, M. O.; Fischer, P. F.; Mund, E. H.

    2002-08-01

    High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.

  10. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, massively reducing the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.

  11. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the other variables are made explicit at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
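    The payoff of direction splitting, replacing one multi-dimensional implicit solve by sequences of one-dimensional tridiagonal solves, can be illustrated with a textbook Peaceman-Rachford ADI step for the 2-D heat equation. This is an analogue, not the paper's Stokes algorithm; the grid sizes and test problem are assumptions.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: a sub-, b main, c super-diagonal."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for k in range(1, n):
            m = b[k] - a[k] * cp[k - 1]
            cp[k] = c[k] / m
            dp[k] = (d[k] - a[k] * dp[k - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for k in range(n - 2, -1, -1):
            x[k] = dp[k] - cp[k] * x[k + 1]
        return x

    # u_t = u_xx + u_yy on the unit square, homogeneous Dirichlet boundaries,
    # interior unknowns only; u[i, j] with i the x index and j the y index.
    n, dt, n_steps = 49, 1.0e-3, 100
    h = 1.0 / (n + 1)
    r = dt / h**2
    xs = np.linspace(h, 1.0 - h, n)
    u = np.outer(np.sin(np.pi * xs), np.sin(np.pi * xs))   # exact eigenmode

    sub = np.full(n, -0.5 * r); sub[0] = 0.0
    sup = np.full(n, -0.5 * r); sup[-1] = 0.0
    diag = np.full(n, 1.0 + r)

    def explicit_half(v):
        """Apply (I + (r/2) * second difference) along axis 0, zero BCs."""
        w = (1.0 - r) * v
        w[1:] += 0.5 * r * v[:-1]
        w[:-1] += 0.5 * r * v[1:]
        return w

    for _ in range(n_steps):
        rhs = explicit_half(u.T).T              # explicit in y ...
        for j in range(n):                      # ... implicit sweeps in x
            u[:, j] = thomas(sub, diag, sup, rhs[:, j])
        rhs = explicit_half(u)                  # explicit in x ...
        for i in range(n):                      # ... implicit sweeps in y
            u[i, :] = thomas(sub, diag, sup, rhs[i, :])

    decay = np.exp(-2.0 * np.pi**2 * dt * n_steps)   # exact solution decay
    err = np.abs(u - decay * np.outer(np.sin(np.pi * xs), np.sin(np.pi * xs)))
    print("max error against the exact decaying mode:", err.max())
    ```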

  12. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    PubMed

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.

  13. Unsupervised universal steganalyzer for high-dimensional steganalytic features

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodan; Zhang, Tao

    2016-11-01

    The research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from varying levels of noise caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on the existing steganalytic features. First, cover images with statistical properties similar to those of a given test image are searched from a retrieval cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training. Thus, it does not suffer from model mismatch. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains the universality but also exhibits superior performance when applied to high-dimensional steganalytic features.

  14. Multiple-try differential evolution adaptive Metropolis for efficient solution of highly parameterized models

    NASA Astrophysics Data System (ADS)

    Eric, L.; Vrugt, J. A.

    2010-12-01

    Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies and 2 real-world examples involving 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
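    The multiple-try Metropolis ingredient (Liu et al., 2000) can be sketched in isolation with a symmetric Gaussian proposal on a toy banana-shaped target; the DREAM chain archive and differential-evolution proposals are deliberately omitted, and the target density and tuning constants are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def log_target(x):   # toy 2-D banana-shaped density
        return -0.5 * (x[0]**2 / 10.0 + (x[1] + 0.1 * x[0]**2 - 1.0)**2)

    def mtm_step(x, k=5, step=1.0):
        ys = x + step * rng.normal(size=(k, 2))           # k trial points
        wy = np.exp([log_target(y) for y in ys])
        if wy.sum() == 0.0:
            return x
        y = ys[rng.choice(k, p=wy / wy.sum())]            # select one trial
        xs = y + step * rng.normal(size=(k - 1, 2))       # reference points
        wx = np.exp([log_target(z) for z in xs] + [log_target(x)])
        if rng.uniform() < min(1.0, wy.sum() / wx.sum()):
            return y                                      # accept
        return x                                          # reject

    chain = np.empty((5000, 2))
    x = np.zeros(2)
    for t in range(len(chain)):
        x = mtm_step(x)
        chain[t] = x

    print("posterior mean estimate:", chain[2500:].mean(axis=0))
    ```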

  15. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system, so that the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on unsaturated flow numerical cases. It is shown that RAPCKF outperforms EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
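    For reference, the baseline the abstract compares against can be condensed to a single EnKF analysis step with perturbed observations; the linear observation operator, noise levels, and ensemble construction below are assumptions for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_state, n_obs, n_ens = 20, 5, 50
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, 4)] = 1.0   # observe every 4th state
    R = 0.1 * np.eye(n_obs)                               # observation error covariance

    truth = rng.normal(size=n_state)
    y_obs = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)
    ens = truth + rng.normal(0.0, 1.0, size=(n_ens, n_state))  # forecast ensemble

    # Kalman gain from the ensemble sample covariance.
    A = ens - ens.mean(axis=0)
    P = A.T @ A / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

    # Update each member against a freshly perturbed copy of the observations.
    for m in range(n_ens):
        y_pert = y_obs + rng.multivariate_normal(np.zeros(n_obs), R)
        ens[m] += K @ (y_pert - H @ ens[m])

    rmse = np.sqrt(np.mean((ens.mean(axis=0) - truth)**2))
    print("analysis RMSE of the ensemble mean: %.3f" % rmse)
    ```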

  16. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
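    The expected improvement criterion that TRIKE maximizes inside its trust region has a standard closed form for minimization, sketched below with scipy.stats.norm; the inputs are a Kriging posterior mean and standard deviation at candidate points (illustrative numbers here).

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """EI for minimization, given GP/Kriging mean mu and std sigma."""
        sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # EI is largest where the predicted mean is low and/or uncertainty is high.
    mu = np.array([0.5, 0.2, 0.9])
    sigma = np.array([0.10, 0.05, 0.40])
    print(expected_improvement(mu, sigma, f_best=0.3))
    ```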

  17. Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Lee, Hyunki; Kim, Min Young; Moon, Jeon Il

    2017-12-01

    Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence, or 2π-ambiguity, problem. Although a sensing method combining the well-known stereo vision and phase measuring profilometry (PMP) techniques has been developed to overcome this problem, it still requires improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information in a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of important hardware and software parameters on measurement performance, and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests were performed with various objects and sensor configurations.

  18. Comparative study of high-resolution shock-capturing schemes for a real gas

    NASA Technical Reports Server (NTRS)

    Montagne, J.-L.; Yee, H. C.; Vinokur, M.

    1987-01-01

    Recently developed second-order explicit shock-capturing methods, in conjunction with generalized flux-vector splittings, and a generalized approximate Riemann solver for a real gas are studied. The comparisons are made on different one-dimensional Riemann (shock-tube) problems for equilibrium air with various ranges of Mach numbers, densities and pressures. Six different Riemann problems are considered. These tests provide a check on the validity of the generalized formulas, since theoretical prediction of their properties appears to be difficult because of the non-analytical form of the state equation. The numerical results in the supersonic and low-hypersonic regimes indicate that these methods provide good shock-capturing capability and that the shock resolution is only slightly affected by the state equation of equilibrium air. The difference in shock resolution between the various methods varies slightly from one Riemann problem to the other, but the overall accuracy is very similar. For the one-dimensional case, the relative efficiency in terms of operation count for the different methods is within 30%. The main difference between the methods lies in their versatility in being extended to multidimensional problems with efficient implicit solution procedures.

  19. Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence

    NASA Astrophysics Data System (ADS)

    Smith, Stephen Andrew

    1997-11-01

    Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally, hyperviscosities are applied to simulations with a fixed exponent that must be chosen arbitrarily. Expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence, where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.

  20. Dynamical behavior for the three-dimensional generalized Hasegawa-Mima equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Ruifeng; Guo Boling; Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088

    2007-01-15

    The long-time behavior of the solution of the three-dimensional generalized Hasegawa-Mima [Phys. Fluids 21, 87 (1978)] equations with a dissipation term is considered. The global attractor problem for the three-dimensional generalized Hasegawa-Mima equations with periodic boundary conditions is studied. Applying the method of uniform a priori estimates, the existence of a global attractor for this problem is proven, and the dimensions of the global attractor are estimated.

  1. Boundary shape identification problems in two-dimensional domains related to thermal testing of materials

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kojima, Fumio

    1988-01-01

    The identification of the geometrical structure of the system boundary for a two-dimensional diffusion system is reported. The domain identification problem treated here is converted into an optimization problem based on a fit-to-data criterion and theoretical convergence results for approximate identification techniques are discussed. Results of numerical experiments to demonstrate the efficacy of the theoretical ideas are reported.

  2. A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.

    2014-01-01

    We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
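    The iteration structure (inverse power method with Gauss-Seidel inner sweeps) can be sketched on the 1-D finite-difference Dirichlet Laplacian; the paper works with finite elements in two dimensions, so the grid, sweep counts, and discretization below are illustrative assumptions only.

    ```python
    import numpy as np

    n = 100
    h = 1.0 / (n + 1)
    main, off = 2.0 / h**2, -1.0 / h**2   # tridiagonal Laplacian stencil

    def gauss_seidel(v, rhs, sweeps=50):
        """Approximately solve A v = rhs by Gauss-Seidel sweeps."""
        for _ in range(sweeps):
            for i in range(n):
                s = rhs[i]
                if i > 0:
                    s -= off * v[i - 1]
                if i < n - 1:
                    s -= off * v[i + 1]
                v[i] = s / main
        return v

    rng = np.random.default_rng(5)
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    for _ in range(30):                   # inverse power iterations
        v = gauss_seidel(v.copy(), v)     # inexact solve of A w = v
        v /= np.linalg.norm(v)

    lam = main * (v @ v) + 2.0 * off * (v[:-1] @ v[1:])   # Rayleigh quotient
    print("lowest eigenvalue: %.4f (continuum value pi^2 = %.4f)"
          % (lam, np.pi**2))
    ```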

  3. Artificial intelligence and robotics in high throughput post-genomics.

    PubMed

    Laghaee, Aroosha; Malcolm, Chris; Hallam, John; Ghazal, Peter

    2005-09-15

    The shift of post-genomics towards a systems approach has offered an ever-increasing role for artificial intelligence (AI) and robotics. Many disciplines (e.g. engineering, robotics, computer science) bear on the problem of automating the different stages involved in post-genomic research with a view to developing quality-assured high-dimensional data. We review some of the latest contributions of AI and robotics to this end and note the limitations arising from the current independent, exploratory way in which specific solutions are being presented for specific problems, without regard to how these could eventually be integrated into one comprehensible integrated intelligent system.

  4. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1988-01-01

    The initial effort was concentrated on developing the quasi-analytical approach for two-dimensional transonic flow. To keep the problem computationally efficient and straightforward, only the two-dimensional flow was considered and the problem was modeled using the transonic small perturbation equation.

  5. Plane Poiseuille flow of a rarefied gas in the presence of strong gravitation.

    PubMed

    Doi, Toshiyuki

    2011-02-01

    Plane Poiseuille flow of a rarefied gas, which flows horizontally in the presence of strong gravitation, is studied based on the Boltzmann equation. Applying the asymptotic analysis for a small variation in the flow direction [Y. Sone, Molecular Gas Dynamics (Birkhäuser, 2007)], the two-dimensional problem is reduced to a one-dimensional problem, as in the case of a Poiseuille flow in the absence of gravitation, and the solution is obtained in a semianalytical form. The reduced one-dimensional problem is solved numerically for a hard sphere molecular gas over a wide range of the gas-rarefaction degree and the gravitational strength. The presence of gravitation reduces the mass flow rate, and the effect of gravitation is significant for large Knudsen numbers. To verify the validity of the asymptotic solution, a two-dimensional problem of a flow through a long channel is directly solved numerically, and the validity of the asymptotic solution is confirmed. ©2011 American Physical Society

  6. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes, which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  7. Using sketch-map coordinates to analyze and bias molecular dynamics simulations

    PubMed Central

    Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele

    2012-01-01

    When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a high-dimensional phase space. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation. PMID:22427357
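    A textbook 1-D metadynamics loop illustrates the history-dependent repulsive bias that this paper generalizes; here the hills are point-centered Gaussians rather than the distribution-valued bias built over sketch-map coordinates, and the potential and all parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def force(x):                        # double well V(x) = (x^2 - 1)^2
        return -4.0 * x * (x**2 - 1.0)

    centers, w, sigma = [], 0.05, 0.2    # deposited Gaussian hills

    def bias_force(x):                   # minus the gradient of the summed hills
        f = 0.0
        for c in centers:
            f += w * (x - c) / sigma**2 * np.exp(-0.5 * (x - c)**2 / sigma**2)
        return f

    x, dt, beta = -1.0, 1.0e-3, 5.0      # overdamped Langevin walker
    traj = np.empty(50_000)
    for step in range(len(traj)):
        noise = np.sqrt(2.0 * dt / beta) * rng.normal()
        x += dt * (force(x) + bias_force(x)) + noise
        if step % 500 == 0:
            centers.append(x)            # deposit a new hill periodically
        traj[step] = x

    print("fraction of time in the right-hand well:", (traj > 0).mean())
    ```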

  8. Measurement of brightness temperature of two-dimensional electron gas in channel of a high electron mobility transistor at ultralow dissipation power

    NASA Astrophysics Data System (ADS)

    Korolev, A. M.; Shulga, V. M.; Turutanov, O. G.; Shnyrkov, V. I.

    2016-07-01

    A technically simple and physically clear method is suggested for direct measurement of the brightness temperature of the two-dimensional electron gas (2DEG) in the channel of a high electron mobility transistor (HEMT). The use of the method was demonstrated with a pseudomorphic HEMT as the specimen. The optimal HEMT dc regime, from the point of view of the "back action" problem, was found to lie in the unsaturated region of the static characteristics, possibly corresponding to the ballistic electron transport mode. The proposed method is believed to be a convenient tool to explore ballistic transport, electron diffusion, 2DEG properties, and other electrophysical processes in heterostructures.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Petersson, N. A.; Rodgers, A.

    Acoustic waveform modeling is a computationally intensive task, and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with sixth order accurate finite difference stencils away from the boundary and a third order summation-by-parts (SBP) closure near the boundary. A non-planar topographic boundary is resolved by formulating the governing equations in curvilinear coordinates following the interface. We verify the implementation of the algorithm by numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.

  10. Multi-Dimensional High Order Essentially Non-Oscillatory Finite Difference Methods in Generalized Coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1998-01-01

    This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.

  11. Exotic Quantum Phases and Phase Transitions of Strongly Interacting Electrons in Low-Dimensional Systems

    NASA Astrophysics Data System (ADS)

    Mishmash, Ryan V.

    Experiments on strongly correlated quasi-two-dimensional electronic materials---for example, the high-temperature cuprate superconductors and the putative quantum spin liquids kappa-(BEDT-TTF)2Cu2(CN)3 and EtMe3Sb[Pd(dmit)2]2---routinely reveal highly mysterious quantum behavior which cannot be explained in terms of weakly interacting degrees of freedom. Theoretical progress thus requires the introduction of completely new concepts and machinery beyond the traditional framework of the band theory of solids and its interacting counterpart, Landau's Fermi liquid theory. In full two dimensions, controlled and reliable analytical approaches to such problems are severely lacking, as are numerical simulations of even the simplest of model Hamiltonians due to the infamous fermionic sign problem. Here, we attempt to circumvent some of these difficulties by studying analogous problems in quasi-one dimension. In this lower dimensional setting, theoretical and numerical tractability are on much stronger footing due to the methods of bosonization and the density matrix renormalization group, respectively. Using these techniques, we attack two problems: (1) the Mott transition between a Fermi liquid metal and a quantum spin liquid as potentially directly relevant to the organic compounds kappa-(BEDT-TTF)2Cu 2(CN)3 and EtMe3Sb[Pd(dmit)2] 2 and (2) non-Fermi liquid metals as strongly motivated by the strange metal phase observed in the cuprates. In both cases, we are able to realize highly exotic quantum phases as ground states of reasonable microscopic models. This lends strong credence to respective underlying slave-particle descriptions of the low-energy physics, which are inherently strongly interacting and also unconventional in comparison to weakly interacting alternatives. Finally, working in two dimensions directly, we propose a new slave-particle theory which explains in a universal way many of the intriguing experimental results of the triangular lattice organic spin liquid candidates kappa-(BEDT-TTF) 2Cu2(CN)3 and EtMe3Sb[Pd(dmit) 2]2. With use of large-scale variational Monte Carlo calculations, we show that this new state has very competitive trial energy in an effective spin model thought to describe the essential features of the real materials.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, the implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.

  13. Identification of the Thermal Conductivity Coefficient for Quasi-Stationary Two-Dimensional Heat Conduction Equations

    NASA Astrophysics Data System (ADS)

    Matsevityi, Yu. M.; Alekhina, S. V.; Borukhov, V. T.; Zayats, G. M.; Kostikov, A. O.

    2017-11-01

    The problem of identifying the time-dependent thermal conductivity coefficient in the initial-boundary-value problem for the quasi-stationary two-dimensional heat conduction equation in a bounded cylinder is considered. It is assumed that the temperature field in the cylinder is independent of the angular coordinate. To solve the given problem, which is related to a class of inverse problems, a mathematical approach based on the method of conjugate gradients in a functional form is being developed.

  14. High-Order Methods for Computational Physics

    DTIC Science & Technology

    1999-03-01

    Fragmentary extract: the computation runs in parallel using the concept of a voxel database (VDB) of geometric positions in the mesh. Connectivity and communications are established by building a VDB of positions; a VDB maps each position to a processor. Highly accurate stability computations help expand the database for the two-dimensional linear benchmark problem.

  15. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…

  16. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  17. Software for Project-Based Learning of Robot Motion Planning

    ERIC Educational Resources Information Center

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-01-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can…

  18. SPReM: Sparse Projection Regression Model For High-dimensional Linear Regression *

    PubMed Central

    Sun, Qiang; Zhu, Hongtu; Liu, Yufeng; Ibrahim, Joseph G.

    2014-01-01

    The aim of this paper is to develop a sparse projection regression modeling (SPReM) framework to perform multivariate regression modeling with a large number of responses and a multivariate covariate of interest. We propose two novel heritability ratios to simultaneously perform dimension reduction, response selection, estimation, and testing, while explicitly accounting for correlations among multivariate responses. Our SPReM is devised to specifically address the low statistical power issue of many standard statistical approaches, such as the Hotelling's T2 test statistic or a mass univariate analysis, for high-dimensional data. We formulate the estimation problem of SPReM as a novel sparse unit rank projection (SURP) problem and propose a fast optimization algorithm for SURP. Furthermore, we extend SURP to the sparse multi-rank projection (SMURP) by adopting a sequential SURP approximation. Theoretically, we have systematically investigated the convergence properties of SURP and the convergence rate of SURP estimates. Our simulation results and real data analysis have shown that SPReM outperforms other state-of-the-art methods. PMID:26527844

  19. OPTOTRAK: at last a system with resolution of 10 μm (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Crouch, David G.; Kehl, L.; Krist, J. R.

    1990-08-01

    Northern Digital's first active-marker point measurement system, the WATSMART, was begun in 1983. Development ended in 1985 with the manufacture of a highly accurate system, which achieved 0.15 to 0.25 mm accuracies in three dimensions within a 0.75-meter cube. Further improvements in accuracy were rendered meaningless by a surplus-light problem, somewhat incorrectly known as "the reflection problem", which also presented a great obstacle to usability. In 1985, development of a new system to overcome "the reflection problem" was begun. The advantages and disadvantages involved in the use of active versus passive markers were considered. The implications of using a CCD device as the imaging element in a precision measurement device were analyzed, as were device characteristics such as dynamic range, peak readout noise, and charge transfer efficiency. A new type of lens was also designed. The end result, in 1988, was the first OPTOTRAK system. This system produces three-dimensional data in real time and is not at all affected by reflections. Accuracies of 30 microns have been achieved in a 1-meter volume. Each two-dimensional camera actually has two separate one-dimensional CCD elements and two separate anamorphic lenses. It can locate a point from 1-8 meters away with a resolution of 1 part in 64,000 and an accuracy of 1 part in 20,000 over the field of view.

  20. Directional Statistics for Polarization Observations of Individual Pulses from Radio Pulsars

    NASA Astrophysics Data System (ADS)

    McKinnon, M. M.

    2010-10-01

    Radio polarimetry is a three-dimensional statistical problem. The three-dimensional aspect of the problem arises from the Stokes parameters Q, U, and V, which completely describe the polarization of electromagnetic radiation and conceptually define the orientation of a polarization vector in the Poincaré sphere. The statistical aspect of the problem arises from the random fluctuations in the source-intrinsic polarization and the instrumental noise. A simple model for the polarization of pulsar radio emission has been used to derive the three-dimensional statistics of radio polarimetry. The model is based upon the proposition that the observed polarization is due to the incoherent superposition of two, highly polarized, orthogonal modes. The directional statistics derived from the model follow the Bingham-Mardia and Fisher family of distributions. The model assumptions are supported by the qualitative agreement between the statistics derived from it and those measured with polarization observations of the individual pulses from pulsars. The orthogonal modes are thought to be the natural modes of radio wave propagation in the pulsar magnetosphere. The intensities of the modes become statistically independent when generalized Faraday rotation (GFR) in the magnetosphere causes the difference in their phases to be large. A stochastic version of GFR occurs when fluctuations in the phase difference are also large, and may be responsible for the more complicated polarization patterns observed in pulsar radio emission.

  1. A dimensionally split Cartesian cut cell method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Gokhale, Nandan; Nikiforakis, Nikos; Klein, Rupert

    2018-07-01

    We present a dimensionally split method for solving hyperbolic conservation laws on Cartesian cut cell meshes. The approach combines local geometric and wave speed information to determine a novel stabilised cut cell flux, and we provide a full description of its three-dimensional implementation in the dimensionally split framework of Klein et al. [1]. The convergence and stability of the method are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. When compared to the cut cell flux of Klein et al., it was found that the new flux alleviates the problem of oscillatory boundary solutions produced by the former at higher Courant numbers, and also enables the computation of more accurate solutions near stagnation points. Being dimensionally split, the method is simple to implement and extends readily to multiple dimensions.

  2. Generalized Centroid Estimators in Bioinformatics

    PubMed Central

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suited to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  3. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
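
    As a hedged illustration of the concordance-index comparison described above, the sketch below fits a Cox model on synthetic survival data whose true log-hazard is nonlinear in the covariate, once with the raw (misspecified) feature and once with the correctly transformed one. It is a toy stand-in for the paper's simulations, written in Python with the lifelines library; all variable names and data-generating choices are illustrative.

      # Toy sketch (not the paper's study): concordance of a linear Cox model
      # vs. one given the true nonlinear feature. Requires numpy, pandas, lifelines.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from lifelines.utils import concordance_index

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.uniform(-2, 2, n)
      log_hazard = x**2                              # nonlinear effect on the hazard
      event_time = rng.exponential(1.0 / np.exp(log_hazard))
      censor_time = rng.exponential(2.0, n)
      df = pd.DataFrame({"T": np.minimum(event_time, censor_time),
                         "E": (event_time <= censor_time).astype(int),
                         "x": x, "x_sq": x**2})

      for cols in (["x"], ["x_sq"]):                 # misspecified vs. well specified
          cph = CoxPHFitter().fit(df[cols + ["T", "E"]],
                                  duration_col="T", event_col="E")
          risk = cph.predict_partial_hazard(df)
          print(cols, concordance_index(df["T"], -risk, df["E"]))

    The misspecified linear predictor yields a concordance index near 0.5, and it is this kind of gap that flexible ML hazard models are meant to close.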

  4. Impact of high-frequency pumping on anomalous finite-size effects in three-dimensional topological insulators

    NASA Astrophysics Data System (ADS)

    Pervishko, Anastasiia A.; Yudin, Dmitry; Shelykh, Ivan A.

    2018-02-01

    Lowering of the thickness of a thin-film three-dimensional topological insulator down to a few nanometers results in the gap opening in the spectrum of topologically protected two-dimensional surface states. This phenomenon, which is referred to as the anomalous finite-size effect, originates from hybridization between the states propagating along the opposite boundaries. In this work, we consider a bismuth-based topological insulator and show how the coupling to an intense high-frequency linearly polarized pumping can further be used to manipulate the value of a gap. We address this effect within recently proposed Brillouin-Wigner perturbation theory that allows us to map a time-dependent problem into a stationary one. Our analysis reveals that both the gap and the components of the group velocity of the surface states can be tuned in a controllable fashion by adjusting the intensity of the driving field within an experimentally accessible range and demonstrate the effect of light-induced band inversion in the spectrum of the surface states for high enough values of the pump.

  5. Effect of Dimensional Salience and Salience of Variability on Problem Solving: A Developmental Study

    ERIC Educational Resources Information Center

    Zelniker, Tamar; And Others

    1975-01-01

    A matching task was presented to 120 subjects from 6 to 20 years of age to investigate the relative influence of dimensional salience and salience of variability on problem solving. The task included four dimensions: form, color, number, and position. (LLK)

  6. Stirling Analysis Comparison of Commercial vs. High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  7. Stirling Analysis Comparison of Commercial Versus High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  8. A hybrid-stress finite element approach for stress and vibration analysis in linear anisotropic elasticity

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.

    1987-01-01

    A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is shown that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.

  9. Aerodynamic interaction between vortical wakes and the viscous flow about a circular cylinder

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1985-01-01

    In the design analysis of conventional aircraft configurations, the prediction of the strong interaction between vortical wakes and the viscous flow field about bodies is of considerable importance. Interactions between vortical wakes and aircraft components are even more common on rotorcraft and configurations with lifting surfaces forward of the wing. An accurate analysis of the vortex-wake interaction with aircraft components is needed for the optimization of the payload and the reduction of vibratory loads. However, the three-dimensional flow field beneath the rotor disk and the interaction of the rotor wake with solid bodies in the flow field are highly complex. The objective of the present paper is to provide a basis for analyzing these interactions by studying a simpler problem: the two-dimensional interaction of external wakes with the viscous flow about a circular cylinder.

  10. Prediction of clinical depression scores and detection of changes in whole-brain using resting-state functional MRI data with partial least squares regression

    PubMed Central

    Shimizu, Yu; Yoshimoto, Junichiro; Takamura, Masahiro; Okada, Go; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    In diagnostic applications of statistical machine learning methods to brain imaging data, common problems include data high-dimensionality and co-linearity, which often cause over-fitting and instability. To overcome these problems, we applied partial least squares (PLS) regression to resting-state functional magnetic resonance imaging (rs-fMRI) data, creating a low-dimensional representation that relates symptoms to brain activity and that predicts clinical measures. Our experimental results, based upon data from clinically depressed patients and healthy controls, demonstrated that PLS and its kernel variants provided significantly better prediction of clinical measures than ordinary linear regression. Subsequent classification using predicted clinical scores distinguished depressed patients from healthy controls with 80% accuracy. Moreover, loading vectors for latent variables enabled us to identify brain regions relevant to depression, including the default mode network, the right superior frontal gyrus, and the superior motor area. PMID:28700672
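
    The core of this approach, projecting many collinear predictors onto a few latent components before regression, can be sketched with scikit-learn's PLSRegression. The snippet below uses synthetic data in place of the rs-fMRI features (which we do not have); the dimensions and names are illustrative.

      # Minimal PLS sketch: many correlated features, few samples.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n, p, k = 60, 500, 3                      # few subjects, many collinear features
      latent = rng.normal(size=(n, k))
      X = latent @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))
      y = latent @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

      pls = PLSRegression(n_components=k)       # low-dimensional latent representation
      print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())
      # pls.fit(X, y).x_loadings_ plays the role of the loading vectors used
      # in the paper to identify brain regions relevant to the outcome.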

  11. A one-dimensional nonlinear problem of thermoelasticity in extended thermodynamics

    NASA Astrophysics Data System (ADS)

    Rawy, E. K.

    2018-06-01

    We solve a nonlinear, one-dimensional initial boundary-value problem of thermoelasticity in generalized thermodynamics. A Cattaneo-type evolution equation for the heat flux is used, which differs from the one used extensively in the literature. The hyperbolic nature of the associated linear system is clarified through a study of the characteristic curves. Progressive wave solutions with two finite speeds are noted. A numerical treatment is presented for the nonlinear system using a three-step, quasi-linearization, iterative finite-difference scheme for which the linear system of equations is the initial step in the iteration. The obtained results are discussed in detail. They clearly show the hyperbolic nature of the system, and may be of interest in investigating thermoelastic materials, not only at low temperatures, but also during high temperature processes involving rapid changes in temperature as in laser treatment of surfaces.

  12. The program FANS-3D (finite analytic numerical simulation 3-dimensional) and its applications

    NASA Technical Reports Server (NTRS)

    Bravo, Ramiro H.; Chen, Ching-Jen

    1992-01-01

    In this study, the program named FANS-3D (Finite Analytic Numerical Simulation-3 Dimensional) is presented. FANS-3D was designed to solve problems of incompressible fluid flow and combined modes of heat transfer. It solves problems with conduction and convection modes of heat transfer in laminar flow, with provisions for radiation and turbulent flows. It can solve singular or conjugate modes of heat transfer. It also solves problems in natural convection, using the Boussinesq approximation. FANS-3D was designed to solve heat transfer problems inside one-, two- and three-dimensional geometries that can be represented by orthogonal planes in a Cartesian coordinate system. It can solve internal and external flows using appropriate boundary conditions, such as symmetric, periodic and user-specified.

  13. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
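
    To make the multigrid idea concrete, the sketch below is a textbook 1-D Poisson V-cycle with a weighted Jacobi smoother, full-weighting restriction and linear prolongation. It illustrates the concept the report implements in Proteus, not the Proteus code itself; the model problem and all names are illustrative.

      # Generic 1-D multigrid V-cycle for -u'' = f with homogeneous Dirichlet BCs.
      import numpy as np

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
          return r

      def jacobi(u, f, h, sweeps, w=2/3):
          for _ in range(sweeps):                  # damped Jacobi smoothing
              u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h**2 * f[1:-1])
          return u

      def vcycle(u, f, h):
          if u.size <= 3:                          # coarsest grid: one unknown
              u[1] = 0.5 * h**2 * f[1]
              return u
          u = jacobi(u, f, h, 3)                   # pre-smooth
          r = residual(u, f, h)
          rc = np.zeros((u.size - 1)//2 + 1)
          rc[1:-1] = 0.25*(r[1:-2:2] + 2*r[2:-1:2] + r[3::2])   # full weighting
          ec = vcycle(np.zeros_like(rc), rc, 2*h)  # coarse-grid correction
          e = np.zeros_like(u)
          e[::2] = ec                              # linear prolongation
          e[1::2] = 0.5*(ec[:-1] + ec[1:])
          return jacobi(u + e, f, h, 3)            # correct, then post-smooth

      n = 128                                      # intervals; n-1 interior unknowns
      h = 1.0 / n
      x = np.linspace(0.0, 1.0, n + 1)
      f = np.pi**2 * np.sin(np.pi * x)             # exact solution u = sin(pi x)
      u = np.zeros_like(x)
      for it in range(10):
          u = vcycle(u, f, h)
          print(it, np.abs(residual(u, f, h)).max())

    The residual should drop by roughly an order of magnitude per cycle, largely independently of the grid size, which is the grid-independent convergence that makes multigrid attractive as an accelerator.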

  14. Global solvability and asymptotic behavior of a free boundary problem for the one-dimensional viscous radiative and reactive gas

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zheng, Songmu

    2012-12-01

    In this paper, we study a Neumann and free boundary problem for the one-dimensional viscous radiative and reactive gas. We prove that under rather general assumptions on the heat conductivity κ, for arbitrarily large smooth initial data, the problem admits a unique global classical solution. Our global existence results improve those of Umehara and Tani ["Global solution to the one-dimensional equations for a self-gravitating viscous radiative and reactive gas," J. Differ. Equations 234(2), 439-463 (2007), 10.1016/j.jde.2006.09.023; "Global solvability of the free-boundary problem for one-dimensional motion of a self-gravitating viscous radiative and reactive gas," Proc. Jpn. Acad., Ser. A: Math. Sci. 84(7), 123-128 (2008), 10.3792/pjaa.84.123] and of Qin, Hu, and Wang ["Global smooth solutions for the compressible viscous and heat-conductive gas," Q. Appl. Math. 69(3), 509-528 (2011), 10.1090/S0033-569X-2011-01218-0]. Moreover, we analyze the asymptotic behavior of the global solutions to our problem, and we prove that the global solution will converge to an equilibrium as time goes to infinity. This is the first time such a result has been obtained for this problem in the literature.

  15. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO outperforms two leading alternative approaches, the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims.
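
    For readers unfamiliar with the baseline algorithm, a minimal global-best PSO over a 34-dimensional search space looks roughly as follows. This is the classical PSO that the paper improves upon, not the hierarchical multi-swarm variant, and the quadratic fitness is a stand-in for the silhouette/edge likelihood.

      # Global-best PSO on a stand-in 34-dimensional fitness function.
      import numpy as np

      def pso(fitness, dim=34, n_particles=50, iters=200, lo=-5.0, hi=5.0, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n_particles, dim))       # positions
          v = np.zeros_like(x)                              # velocities
          pbest = x.copy()
          pbest_val = np.apply_along_axis(fitness, 1, x)
          g = pbest[np.argmin(pbest_val)].copy()            # global best
          w, c1, c2 = 0.72, 1.49, 1.49                      # standard constants
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # velocity update
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(fitness, 1, x)
              better = val < pbest_val
              pbest[better], pbest_val[better] = x[better], val[better]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, pbest_val.min()

      best, best_val = pso(lambda z: float(np.sum(z**2)))   # toy fitness
      print(best_val)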

  16. Phase unwrapping with graph cuts optimization and dual decomposition acceleration for 3D high-resolution MRI data.

    PubMed

    Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi

    2017-03-01

    Existence of low SNR regions and rapid-phase variations pose challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared this proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
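
    For contrast with the global graph-cut formulation, the snippet below shows the classic greedy 1-D unwrap (NumPy's np.unwrap), which simply adds multiples of 2π at apparent jumps. It is fast but fails in low-SNR regions, which is what motivates global energy-minimization methods like the one above; the data here are synthetic.

      # Greedy 1-D unwrapping: fine for clean data, fragile at low SNR.
      import numpy as np

      true_phase = np.linspace(0.0, 6*np.pi, 500)
      wrapped = np.angle(np.exp(1j * true_phase))  # wrapped into (-pi, pi]
      unwrapped = np.unwrap(wrapped)               # add 2*pi at apparent jumps
      print(np.abs(unwrapped - true_phase).max())  # ~0 here, not so with noise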

  17. Spatial model of the gecko foot hair: functional significance of highly specialized non-uniform geometry.

    PubMed

    Filippov, Alexander E; Gorb, Stanislav N

    2015-02-06

    One of the important problems appearing in experimental realizations of artificial adhesives inspired by gecko foot hair is so-called clusterization. If an artificially produced structure is flexible enough to allow efficient contact with natural rough surfaces, then after a few attachment-detachment cycles the fibres of the structure tend to adhere to one another and form clusters. Normally, such clusters are much larger than the original fibres and, because they are less flexible, they form much worse adhesive contacts, especially with rough surfaces. The main problem here is that the forces responsible for clusterization are the same intermolecular forces that attract the fibres to the fractal surface of the substrate. However, arrays of real gecko setae are much less susceptible to this problem. One possible reason is that the ends of the setae have a more sophisticated, non-uniformly distributed three-dimensional structure than that of existing artificial systems. In this paper, we numerically simulated the three-dimensional spatial geometry of the non-uniformly distributed branches of nanofibres of the setal tip, studied its attachment-detachment dynamics and discussed its advantages over a uniformly distributed geometry.

  18. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  19. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
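
    The moment-based quadrature step at the heart of this approach can be sketched as follows: Cholesky-factorize the Hankel matrix of raw moments, read off the three-term recurrence coefficients, and apply Golub-Welsch. This is a generic textbook version of the computation, not the SAMBA code, and raw-moment Cholesky becomes ill-conditioned at high order, which is one reason the method targets only moderately high dimensions.

      # Sketch: Gaussian quadrature from raw moments via the Hankel matrix
      # (Cholesky + Golub-Welsch). Illustrative; ill-conditioned for large n.
      import numpy as np

      def quadrature_from_moments(m):
          """m: raw moments m_0..m_{2n}; returns n nodes and weights."""
          n = (len(m) - 1) // 2
          H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
          R = np.linalg.cholesky(H).T              # upper-triangular factor
          alpha = np.empty(n)
          alpha[0] = R[0, 1] / R[0, 0]
          for k in range(1, n):
              alpha[k] = R[k, k+1]/R[k, k] - R[k-1, k]/R[k-1, k-1]
          beta = np.array([R[k, k]/R[k-1, k-1] for k in range(1, n)])
          J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          nodes, vecs = np.linalg.eigh(J)          # Golub-Welsch step
          return nodes, m[0] * vecs[0, :]**2

      # Moments of the uniform density on [-1, 1]: recovers the 3-point
      # Gauss-Legendre rule (nodes 0, +/-sqrt(3/5)) normalised to m_0 = 1.
      m = np.array([1.0/(k + 1) if k % 2 == 0 else 0.0 for k in range(7)])
      print(quadrature_from_moments(m))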

  20. The point explosion with radiation transport

    NASA Astrophysics Data System (ADS)

    Lin, Zhiwei; Zhang, Lu; Kuang, Longyu; Jiang, Shaoen

    2017-10-01

    In the point explosion problem with radiation transport, an amount of energy released instantaneously at the origin simultaneously generates a spherical radiative heat wave and a spherical shock wave; the competition between these two waves makes the problem a complicated one. The point explosion problem possesses self-similar solutions when only hydrodynamic motion or only heat conduction is considered: the Sedov and Barenblatt solutions, respectively. The point explosion problem in which both hydrodynamic motion and heat conduction are included has been studied by P. Reinicke and A. I. Shestakov. In this talk we numerically investigate the point explosion problem in which both hydrodynamic motion and radiation transport are taken into account. The radiation transport equation in one-dimensional spherical geometry has to be solved for this problem, since the ambient medium is optically thin with respect to the initially extremely high temperature at the origin. The numerical results reveal a high compression of the medium and a bi-peak structure of the density, which are analyzed theoretically at the end.

  1. Flow simulations about steady-complex and unsteady moving configurations using structured-overlapped and unstructured grids

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1995-01-01

    The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamic (CFD) methodologies that are based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated. These are namely the structured-overlapped and the unstructured grid schemes. Both methods use a cell centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles-of-attack for the pitching airfoil. From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU efficient than the unstructured approach.

  2. Manufactured solutions for the three-dimensional Euler equations with relevance to Inertial Confinement Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waltz, J., E-mail: jwaltz@lanl.gov; Canfield, T.R.; Morgan, N.R.

    2014-06-15

    We present a set of manufactured solutions for the three-dimensional (3D) Euler equations. The purpose of these solutions is to allow for code verification against true 3D flows with physical relevance, as opposed to 3D simulations of lower-dimensional problems or manufactured solutions that lack physical relevance. Of particular interest are solutions with relevance to Inertial Confinement Fusion (ICF) capsules. While ICF capsules are designed for spherical symmetry, they are hypothesized to become highly 3D at late time due to phenomena such as Rayleigh–Taylor instability, drive asymmetry, and vortex decay. ICF capsules also involve highly nonlinear coupling between the fluid dynamics and other physics, such as radiation transport and thermonuclear fusion. The manufactured solutions we present are specifically designed to test the terms and couplings in the Euler equations that are relevant to these phenomena. Example numerical results generated with a 3D Finite Element hydrodynamics code are presented, including mesh convergence studies.
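
    The mechanics of a manufactured solution are simple to sketch: postulate an exact solution, push it through the governing operator symbolically, and add the resulting residual as a source term in the code being verified. The snippet below does this for a 1-D heat equation rather than the paper's 3-D Euler system; it illustrates the method, not the authors' particular solutions.

      # Manufactured solution for u_t = k u_xx + S: choose u, derive S.
      import sympy as sp

      x, t, k = sp.symbols("x t k")
      u = sp.sin(sp.pi * x) * sp.exp(-t)           # postulated exact solution
      S = sp.diff(u, t) - k * sp.diff(u, x, 2)     # source that makes u exact
      print(sp.simplify(S))

    Running the solver with this source and comparing against the postulated u at several mesh resolutions then exposes the observed order of convergence.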

  3. Photo-Attachment of Biomolecules for Miniaturization on Wicking Si-Nanowire Platform

    PubMed Central

    Cheng, He; Zheng, Han; Wu, Jia Xin; Xu, Wei; Zhou, Lihan; Leong, Kam Chew; Fitzgerald, Eugene; Rajagopalan, Raj; Too, Heng Phon; Choi, Wee Kiong

    2015-01-01

    We demonstrated the surface functionalization of a highly three-dimensional, superhydrophilic wicking substrate using light to immobilize functional biomolecules for sensor or microarray applications. We showed here that the three-dimensional substrate was compatible with photo-attachment and that the performance of functionalization was greatly improved, owing to both increased surface capacity and reduced substrate reflectivity. In addition, photo-attachment circumvents the problems induced by the wicking effect typically encountered on superhydrophilic three-dimensional substrates, thus reducing the difficulty of producing miniaturized sites on such a substrate. We investigated various aspects of the photo-attachment process on the nanowire substrate, including the role of different buffers, the effect of wavelength, and how changing the probe structure may affect the functionalization process. We demonstrated that substrate fabrication and functionalization can be achieved with processes compatible with microelectronics processing, hence reducing the cost of array fabrication. Such a functionalization method, coupled with the high-capacity surface, makes the substrate an ideal candidate for sensor or microarray applications requiring sensitive detection of target analytes. PMID:25689680

  4. Classification Objects, Ideal Observers & Generative Models

    ERIC Educational Resources Information Center

    Olman, Cheryl; Kersten, Daniel

    2004-01-01

    A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…

  5. Parallel solution of sparse one-dimensional dynamic programming problems

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1989-01-01

    Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.

  6. Multi-Material Closure Model for High-Order Finite Element Lagrangian Hydrodynamics

    DOE PAGES

    Dobrev, V. A.; Kolev, T. V.; Rieben, R. N.; ...

    2016-04-27

    We present a new closure model for single fluid, multi-material Lagrangian hydrodynamics and its application to high-order finite element discretizations of these equations [1]. The model is general with respect to the number of materials, dimension and space and time discretizations. Knowledge about exact material interfaces is not required. Material indicator functions are evolved by a closure computation at each quadrature point of mixed cells, which can be viewed as a high-order variational generalization of the method of Tipton [2]. This computation is defined by the notion of partial non-instantaneous pressure equilibration, while the full pressure equilibration is achieved by both the closure model and the hydrodynamic motion. Exchange of internal energy between materials is derived through entropy considerations, that is, every material produces positive entropy, and the total entropy production is maximized in compression and minimized in expansion. Results are presented for standard one-dimensional two-material problems, followed by two-dimensional and three-dimensional multi-material high-velocity impact arbitrary Lagrangian–Eulerian calculations. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  7. Preparation of wholemount mouse intestine for high-resolution three-dimensional imaging using two-photon microscopy.

    PubMed

    Appleton, P L; Quyn, A J; Swift, S; Näthke, I

    2009-05-01

    Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. Regions of highest interest lie between 50 and 200 μm within this tissue. The quality and usefulness of three-dimensional image data of tissue with such depth is limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at a maximum distance of

  8. Multi-Material Closure Model for High-Order Finite Element Lagrangian Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrev, V. A.; Kolev, T. V.; Rieben, R. N.

    We present a new closure model for single fluid, multi-material Lagrangian hydrodynamics and its application to high-order finite element discretizations of these equations [1]. The model is general with respect to the number of materials, dimension and space and time discretizations. Knowledge about exact material interfaces is not required. Material indicator functions are evolved by a closure computation at each quadrature point of mixed cells, which can be viewed as a high-order variational generalization of the method of Tipton [2]. This computation is defined by the notion of partial non-instantaneous pressure equilibration, while the full pressure equilibration is achieved by both the closure model and the hydrodynamic motion. Exchange of internal energy between materials is derived through entropy considerations, that is, every material produces positive entropy, and the total entropy production is maximized in compression and minimized in expansion. Results are presented for standard one-dimensional two-material problems, followed by two-dimensional and three-dimensional multi-material high-velocity impact arbitrary Lagrangian–Eulerian calculations. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  9. Multi-Dimensional, Inviscid Flux Reconstruction for Simulation of Hypersonic Heating on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2009-01-01

    The quality of simulated hypersonic stagnation region heating on tetrahedral meshes is investigated by using a three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. Two test problems are investigated: hypersonic flow over a three-dimensional cylinder with special attention to the uniformity of the solution in the spanwise direction and hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problem provides a sensitive test for algorithmic effects on heating. This investigation is believed to be unique in its focus on three-dimensional, rotated upwind schemes for the simulation of hypersonic heating on tetrahedral grids. This study attempts to fill the void left by the inability of conventional (quasi-one-dimensional) approaches to accurately simulate heating in a tetrahedral grid system. Results show significant improvement in spanwise uniformity of heating with some penalty of ringing at the captured shock. Issues with accuracy near the peak shear location are identified and require further study.

  10. Does Anxiety Modify the Risk for, or Severity of, Conduct Problems Among Children With Co-Occurring ADHD: Categorical and Dimensional Analyses.

    PubMed

    Danforth, Jeffrey S; Doerfler, Leonard A; Connor, Daniel F

    2017-08-01

    The goal was to examine whether anxiety modifies the risk for, or severity of, conduct problems in children with ADHD. Assessment included both categorical and dimensional measures of ADHD, anxiety, and conduct problems. Analyses compared conduct problems between children with ADHD features alone versus children with co-occurring ADHD and anxiety features. When assessed by dimensional rating scales, results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety are at risk for more intense conduct problems. When assessment included a Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnosis via the Schedule for Affective Disorders and Schizophrenia for School Age Children-Epidemiologic Version (K-SADS), results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety neither had more intense conduct problems nor were they more likely to be diagnosed with oppositional defiant disorder or conduct disorder. Different methodological measures of ADHD, anxiety, and conduct problem features influenced the outcome of the analyses.

  11. The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations

    NASA Astrophysics Data System (ADS)

    Degen, D.; Veroy, K.; Wellmann, F.

    2017-12-01

    Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and limited accessibility, we face high-dimensional problems associated with high uncertainties in the geosciences. Performing the necessary uncertainty quantifications with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. Therefore, we present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the number of degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be performed beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time context. We illustrate the advantages of the RB method for the geosciences through two examples of numerical forward simulations. The first example is a geothermal conduction problem demonstrating the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given. Additionally, the runtimes of the FE and RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution relative to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance there, with particular attention to its scalability, which enables optimal usage on both HPC infrastructures and normal workstations.
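
    A bare-bones version of the offline/online split can be sketched for a 1-D steady conduction problem with a two-zone conductivity, where the affine dependence A(mu) = A0 + mu*A1 lets the reduced operators be precomputed once. This toy stands in for the geothermal example above; the discretization, parameter range and basis size are all illustrative choices.

      # Toy reduced basis workflow for -(k(x) u')' = 1 on (0,1), u(0)=u(1)=0,
      # with k = 1 on the left half and k = mu on the right half.
      import numpy as np

      n = 199                                    # interior nodes; n+1 intervals
      h = 1.0 / (n + 1)
      xmid = (np.arange(n + 1) + 0.5) * h        # interval midpoints

      def assemble(w):
          """Linear-FE stiffness matrix with per-interval conductivity w."""
          A = np.zeros((n, n))
          for e, we in enumerate(w):             # interval e joins nodes e-1, e
              if 1 <= e <= n - 1:
                  A[e-1, e-1] += we/h; A[e, e] += we/h
                  A[e-1, e] -= we/h;   A[e, e-1] -= we/h
              elif e == 0:
                  A[0, 0] += we/h                # touches left boundary
              else:
                  A[n-1, n-1] += we/h            # touches right boundary
          return A

      A0 = assemble(np.where(xmid < 0.5, 1.0, 0.0))   # fixed-zone part
      A1 = assemble(np.where(xmid < 0.5, 0.0, 1.0))   # mu-scaled part
      f = h * np.ones(n)                               # unit source load

      # Offline: snapshots over training parameters, POD basis, reduced operators.
      snaps = np.column_stack([np.linalg.solve(A0 + mu*A1, f)
                               for mu in np.logspace(-2, 2, 20)])
      V = np.linalg.svd(snaps, full_matrices=False)[0][:, :5]
      A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f

      # Online: a 5x5 solve per new parameter value.
      mu = 3.7
      u_rb = V @ np.linalg.solve(A0r + mu*A1r, fr)
      u_fe = np.linalg.solve(A0 + mu*A1, f)
      print(np.linalg.norm(u_rb - u_fe) / np.linalg.norm(u_fe))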

  12. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed that can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architectures has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Which input parameters have the greatest impact on the model's prediction is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem has been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the greatest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables that appear to affect the outcome variable. The purpose of this project is to explore various means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
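
    As a small sketch of the idea, the snippet below projects a 20-parameter synthetic dataset onto its first two principal components and colors the points by the output variable, so the inputs driving the output show up as a visible gradient. PCA is only one of the possible reduction techniques alluded to above, and all data here are synthetic.

      # Project a 20-parameter dataset to 2-D and colour by the output.
      import numpy as np
      import matplotlib.pyplot as plt
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(2)
      X = rng.normal(size=(300, 20))             # 20 input parameters
      X[:, [0, 3]] *= 3.0                        # influential inputs vary most
      y = X[:, 0] + 2.0 * X[:, 3]                # output driven by two inputs
      Z = PCA(n_components=2).fit_transform(X)
      plt.scatter(Z[:, 0], Z[:, 1], c=y)
      plt.xlabel("PC 1"); plt.ylabel("PC 2"); plt.colorbar(label="output")
      plt.show()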

  13. Numerical Solution of Optimal Control Problem under SPDE Constraints

    DTIC Science & Technology

    2011-10-14

    Faure and Sobol sequences are used to evaluate high dimensional integrals, and the errors in the numerical results for over 30 dimensions become quite...sequence; right: 1000 points of dimension 26 and 27 projection for optimal Kronecker sequence. benchmark Faure and Sobol methods. 2.2 High order...J. Goodman and J. O’Rourke, Handbook of discrete and computational geome- try, CRC Press, Inc., (2004). [5] S. Joe and F. Kuo, Constructing Sobol

  14. High-Order Moving Overlapping Grid Methodology in a Spectral Element Method

    NASA Astrophysics Data System (ADS)

    Merrill, Brandon E.

    A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for solution of unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircrafts, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. The methodology employs semi-implicit spectral element discretization of equations in each subdomain and explicit treatment of subdomain interfaces with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. Stationary overlapping mesh methodology was validated to assess the influence of long integration times and inflow-outflow global boundary conditions on the performance. In a turbulent benchmark of fully-developed turbulent pipe flow, the turbulent statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data. Scaling tests, with both methodologies, show near linear strong scaling, even for moderately large processor counts. The moving overlapping mesh methodology is utilized to investigate the effect of an upstream turbulent wake on a three-dimensional oscillating NACA0012 extruded airfoil. A direct numerical simulation (DNS) at Reynolds Number 44,000 is performed for steady inflow incident upon the airfoil oscillating between angle of attack 5.6° and 25° with reduced frequency k=0.16. Results are contrasted with subsequent DNS of the same oscillating airfoil in a turbulent wake generated by a stationary upstream cylinder.

  15. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  16. Universal approximators for multi-objective direct policy search in water reservoir management problems: a comparative analysis

    NASA Astrophysics Data System (ADS)

    Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca

    2014-05-01

    The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, a high-dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and the parameters are then optimized with respect to the objectives of the management problem. The selection of a suitable class of functions for the operating policy is a key step, as it might restrict the search to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, were designed mostly via simulation, and target single-purpose reservoirs. In a multi-objective context, similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators, artificial neural networks (ANNs) and radial basis functions (RBFs), under different problem settings to estimate their scalability and flexibility in dealing with increasingly complex problems. The multi-purpose HoaBinh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Preliminary results show that the RBF policy parametrization is more effective than the ANN one. In particular, the approximate Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while most of the ANN solutions turn out to be Pareto-dominated by the RBF ones.
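
    A sketch of what an RBF policy parameterization looks like in direct policy search: the release decision is a weighted sum of Gaussian basis functions of the normalized system state, and the centers, radii and weights are the decision variables handed to the MOEA. The code below is a generic illustration, not the study's implementation; the state variables and dimensions are assumptions.

      # Illustrative RBF policy: state -> normalised release in [0, 1].
      import numpy as np

      def rbf_policy(state, centers, radii, weights):
          """state: (d,) normalised inputs, e.g. storage and day of year."""
          phi = np.exp(-np.sum(((state - centers) / radii)**2, axis=1))
          return float(np.clip(weights @ phi, 0.0, 1.0))

      rng = np.random.default_rng(3)
      n_rbf, d = 4, 2                            # 4 basis functions, 2-D state
      centers = rng.uniform(0.0, 1.0, (n_rbf, d))
      radii = rng.uniform(0.1, 1.0, (n_rbf, d))
      weights = rng.uniform(-1.0, 1.0, n_rbf)    # the MOEA would tune all three
      print(rbf_policy(np.array([0.6, 0.2]), centers, radii, weights))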

  17. A High-Performance Parallel Implementation of the Certified Reduced Basis Method

    DTIC Science & Technology

    2010-12-15

    point of view of model reduction due to the “curse of dimensionality”. We consider transient thermal conduction in a three– dimensional “ Swiss cheese ... Swiss cheese ” problem (see Figure 7a) there are 54 unique ordered pairs in I. A histogram of 〈δµ〉 values computed for the ntrain = 106 case is given in...our primal-dual RB method yields a very fast and accurate output approxima- tion for the “ Swiss Cheese ” problem. Our goal in this final subsection is

  18. Bayesian linkage and segregation analysis: factoring the problem.

    PubMed

    Matthysse, S

    2000-01-01

    Complex segregation analysis and linkage methods are mathematical techniques for the genetic dissection of complex diseases. They are used to delineate complex modes of familial transmission and to localize putative disease susceptibility loci to specific chromosomal locations. The computational problem of Bayesian linkage and segregation analysis is one of integration in high-dimensional spaces. In this paper, three available techniques for Bayesian linkage and segregation analysis are discussed: Markov Chain Monte Carlo (MCMC), importance sampling, and exact calculation. The contribution of each to the overall integration will be explicitly discussed.
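
    Of the three techniques, importance sampling is the easiest to sketch: draw from a tractable proposal, reweight by the target-to-proposal density ratio, and monitor the effective sample size, which collapses as the dimension grows, illustrating why these integrations are hard. The Gaussian target below is a toy stand-in for a genetic likelihood; all dimensions and parameters are illustrative.

      # Importance sampling of a Gaussian toy "posterior" in d dimensions.
      import numpy as np
      from scipy import stats

      d = 10
      target = stats.multivariate_normal(mean=np.full(d, 0.5))         # unit cov
      proposal = stats.multivariate_normal(mean=np.zeros(d), cov=1.5*np.eye(d))

      x = proposal.rvs(size=200_000, random_state=42)
      w = np.exp(target.logpdf(x) - proposal.logpdf(x))   # importance weights
      post_mean = (w[:, None] * x).sum(axis=0) / w.sum()  # self-normalised
      ess = w.sum()**2 / (w**2).sum()                     # effective sample size
      print(post_mean[0], ess)                            # mean ~0.5; ESS << N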

  19. Direct coupling of tomography and ptychography

    DOE PAGES

    Gürsoy, Doğa

    2017-08-09

    We present a generalization of the ptychographic phase problem for recovering refractive properties of a three-dimensional object in a tomography setting. Our approach, which ignores the lateral overlapping probe requirements in existing ptychography algorithms, can enable the reconstruction of objects using highly flexible acquisition patterns and pave the way for sparse and rapid data collection with lower radiation exposure.

  20. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1992-01-01

    Presented is a collection of papers on research activities carried out during the funding period of October 1991 to March 1992. Topics covered include: blunt body flows in thermochemical equilibrium; thermochemical relaxation in high enthalpy nozzle flow; single expansion ramp nozzle simulations; lunar return aerobraking; line boundary problem for three dimensional grids; and unsteady shock induced combustion.

  1. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangent functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high solution quality and fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is demonstrated.

  2. Research on the Countermeasures for High-end Talent Development in the New Material Industry from the Perspective of Four-dimensional Subject-With Hunan Province as an Example

    NASA Astrophysics Data System (ADS)

    Wen, Qiong

    2018-03-01

    In the context of an increasingly severe international economic situation, the new material industry is one of the seven strategic emerging industries, and its development has become a major strategic decision of China that should be upheld at present and in the future. The implementation of this strategic decision cannot be achieved without talent. Based on the actual situation of Hunan Province, this paper points out four major problems in the high-end talent development of Hunan Province, namely, immaturity of industry development, an unreasonable talent structure, an imperfect training mechanism and unscientific incentive measures, and proposes countermeasures from the perspective of the four-dimensional subject involving government, enterprises, schools and students.

  3. A weighted ℓ1-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
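
    The flavor of the weighting can be sketched with an iterative soft-thresholding (ISTA) solver in which each coefficient carries its own penalty weight, so coefficients expected to be large (here, the leading ones, mimicking a priori decay of PC coefficients) are penalized less. This is a generic weighted-ℓ1 solver on synthetic data, not the authors' algorithm or recovery guarantees.

      # Weighted ISTA for min_x 0.5*||Ax-b||^2 + lam * sum_i w_i |x_i|.
      import numpy as np

      def weighted_ista(A, b, w, lam=0.05, iters=2000):
          x = np.zeros(A.shape[1])
          t = 1.0 / np.linalg.norm(A, 2)**2        # step from Lipschitz bound
          for _ in range(iters):
              z = x - t * A.T @ (A @ x - b)        # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - t*lam*w, 0.0)  # prox
          return x

      rng = np.random.default_rng(5)
      n, p = 60, 200                               # underdetermined system
      A = rng.normal(size=(n, p)) / np.sqrt(n)
      x_true = np.zeros(p)
      x_true[:5] = rng.normal(size=5)              # energy in leading coefficients
      b = A @ x_true
      w = np.log(np.arange(p) + 2.0)               # lighter penalty on low indices
      print(np.linalg.norm(weighted_ista(A, b, w) - x_true))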

  4. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1982-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.

  5. Optimization and uncertainty assessment of strongly nonlinear groundwater models with high parameter dimensionality

    NASA Astrophysics Data System (ADS)

    Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun

    2010-10-01

    Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
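
    As a toy illustration of the null-space Monte Carlo idea (all quantities below are invented stand-ins, not the groundwater model of the study), one can perturb a calibrated parameter set only along directions in the numerical null space of the observation Jacobian, so the fit to the calibration data is preserved to first order.

```python
# A minimal numpy sketch of null-space Monte Carlo (NSMC): generate
# calibration-constrained parameter realizations by perturbing only in
# directions the observations cannot "see".
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 30, 100
J = rng.standard_normal((n_obs, n_par))        # Jacobian d(obs)/d(params), invented
p_cal = rng.standard_normal(n_par)             # calibrated parameter vector

U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = np.sum(s > 1e-8 * s[0])                    # numerical rank
V_null = Vt[k:].T                              # orthonormal basis of the null space

samples = []
for _ in range(200):
    dz = rng.standard_normal(V_null.shape[1])
    samples.append(p_cal + V_null @ dz)        # fit preserved to first order

print("max change in simulated obs:",
      max(np.linalg.norm(J @ (p - p_cal)) for p in samples))
```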

  6. A new procedure for investigating three-dimensional stress fields in a thin plate with a through-the-thickness crack

    NASA Astrophysics Data System (ADS)

    Yi, Dake; Wang, TzuChiang

    2018-06-01

    In the paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure combines a new analytical method with highly accurate finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed to derive the three-dimensional crack-tip fields. Based on this analysis, an equation is first derived which describes the relationship among the three-dimensional J-integral J(z), the stress intensity factor K(z), and the tri-axial stress constraint level T_z(z). In the finite element simulations, a fine mesh comprising 153,360 elements is constructed to compute the stress field near the crack front, J(z), and T_z(z). Numerical results show that in planes very close to the free surface, the K-field solution is still valid for in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.

  7. Simple model of the indirect compression of targets under conditions close to the national ignition facility at an energy of 1.5 MJ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru

    2015-11-15

    The possibility of analyzing and interpreting the reported experiments with the megajoule National Ignition Facility (NIF) laser on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in spherical geometry has been studied. The problem of the energy balance in a target and the determination of the laser energy that should be used in the spherical model of the target has been considered. The results of the action of pulses differing in energy and time profile ("low-foot" and "high-foot" regimes) have been analyzed. The parameters of the compression of targets with a high-density carbon ablator have been obtained. The results of the simulations are in satisfactory agreement with the measurements and correspond to the range of the observed parameters. The set of compared results can be expanded, in particular, for a more detailed determination of the parameters of a target near the maximum compression of the capsule. The physical justification for using the one-dimensional description is that the last stage of the compression of the capsule must be close to a one-dimensional process. One-dimensional simulation of the compression of the capsule can be useful in establishing the boundary beyond which two-dimensional and three-dimensional simulation should be used.

  8. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schrödinger problem and the KPI equation

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Polivanov, M. C.

    1992-11-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. We demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schrödinger equation as an example, we show that all types of solutions of the linear problems, as well as spectral data known in the literature, are given as specific values of this unique function — the resolvent function. A new form of the inverse problem is formulated.

  9. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

    PubMed Central

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    2016-01-01

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
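
    For orientation, the classical linear, non-sparse version of Rayleigh quotient optimization reduces to a generalized eigenproblem; the sketch below (with invented scatter matrices) shows that baseline, while QUADRO's contribution is the sparse, quadratic, convex reformulation on top of it.

```python
# Classical Rayleigh quotient optimization: maximize w'Aw / w'Bw via a
# generalized eigenproblem. Matrices here are synthetic stand-ins.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
d = 10
M = rng.standard_normal((d, d)); A = M @ M.T               # "between" scatter (illustrative)
N = rng.standard_normal((d, d)); B = N @ N.T + d * np.eye(d)  # "within" scatter, positive definite

evals, evecs = eigh(A, B)            # solves A w = lambda B w, eigenvalues ascending
w_star = evecs[:, -1]                # top generalized eigenvector
rq = (w_star @ A @ w_star) / (w_star @ B @ w_star)
print("maximal Rayleigh quotient:", rq, "=", evals[-1])
```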

  10. Developing an NGSS Pedagogy for Climate Literacy and Energy Awareness Using the CLEAN Collection

    NASA Astrophysics Data System (ADS)

    Manning, C. L. B.; Taylor, J.; Oonk, D.; Sullivan, S. M.; Kirk, K.; Niepold, F., III

    2017-12-01

    The Next Generation Science Standards and A Framework for K-12 Science Education have introduced us to 3-dimensional science instruction. Together, these provide infinite opportunities to generate interesting problems inspiring instruction and motivating student learning. Finding good resources to support 3-dimensional learning is challenging. The Climate Literacy and Energy Awareness Network (CLEAN) serves as a comprehensive source of high-quality, NGSS-aligned resources that can be quickly and easily searched. Furthermore, teachers new to NGSS are asked to do the following: synthesize high-quality, scientifically vetted resources to engage students in relevant phenomena, problems and projects; develop place-awareness for where students live and learn; encourage data analysis, modeling, and argumentation skills; and energize students to participate in finding possible solutions to the problems we face. These challenges are intensified when teaching climate science and energy technology, some of the most rapidly changing science and engineering fields. Educators can turn to CLEAN to find scientifically and pedagogically vetted resources to integrate into their lessons. In this presentation, we will introduce the newly developed Harmonics Planning Template, Guidance Videos and Flowchart that guide the development of instructionally sound, NGSS-style units using the CLEAN collection of resources. To illustrate the process, three example units will be presented: Phenology - a place-based investigation, Debating the Grid - a deliberation on optimal energy grid solutions, and History of Earth's Atmosphere and Oceans - a data-rich collaborative investigation.

  11. Using femtosecond laser to fabricate highly precise interior three-dimensional microstructures in polymeric flow chip

    PubMed Central

    Lee, Chia-Yu; Chang, Ting-Chou; Wang, Shau-Chun; Chien, Chih-Wei; Cheng, Chung-Wei

    2010-01-01

    This paper reports the use of a femtosecond laser marker to fabricate three-dimensional interior microstructures in a closed flow channel of a plastic substrate. Strip-like slots with dimensions of 800 μm×400 μm×65 μm were ablated with a pulsed Ti:sapphire laser at 800 nm (pulse duration of ∼120 fs with 1 kHz repetition rate) on an acrylic slide. After ablation, defocused beams were used to finish the surface of the microstructures. After final polishing with sonication, the laser-fabricated structures are highly precise, with arithmetic roughnesses of 1.5 and 4.5 nm. Fabricating such highly precise microstructures cannot be accomplished with nanosecond laser marking or other mechanical drilling methods. In addition, since laser ablation can directly engrave interior microstructures in a closed chip, glue-smearing problems, which can damage molded microstructures during chip-sealing procedures, are also avoided. PMID:21079695

  12. Using femtosecond laser to fabricate highly precise interior three-dimensional microstructures in polymeric flow chip.

    PubMed

    Lee, Chia-Yu; Chang, Ting-Chou; Wang, Shau-Chun; Chien, Chih-Wei; Cheng, Chung-Wei

    2010-10-18

    This paper reports the use of a femtosecond laser marker to fabricate three-dimensional interior microstructures in a closed flow channel of a plastic substrate. Strip-like slots with dimensions of 800 μm×400 μm×65 μm were ablated with a pulsed Ti:sapphire laser at 800 nm (pulse duration of ∼120 fs with 1 kHz repetition rate) on an acrylic slide. After ablation, defocused beams were used to finish the surface of the microstructures. After final polishing with sonication, the laser-fabricated structures are highly precise, with arithmetic roughnesses of 1.5 and 4.5 nm. Fabricating such highly precise microstructures cannot be accomplished with nanosecond laser marking or other mechanical drilling methods. In addition, since laser ablation can directly engrave interior microstructures in a closed chip, glue-smearing problems, which can damage molded microstructures during chip-sealing procedures, are also avoided.

  13. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike the existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been primarily designed with three-dimensional problems in mind, making it extendible to tetrahedral, viscous grid generation.

  14. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (the current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H^3-regular problem, and an anisotropic problem, are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
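
    A small SciPy sketch of one building block, the Jacobi-preconditioned conjugate gradient solve with a relative-residual stopping rule. A 1-D Poisson matrix stands in for the 3-D finite element system, and a crude x0 stands in for the extrapolated initial guess.

```python
# Jacobi-preconditioned CG (JCG) with a relative residual tolerance.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100_000                                   # 1-D Poisson stand-in for the 3-D FE system
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

M = spla.LinearOperator((n, n), matvec=lambda r: r / A.diagonal())  # Jacobi preconditioner
x0 = np.zeros(n)                              # stand-in for the extrapolated initial guess

# "rtol" is the keyword in SciPy >= 1.12; older versions call it "tol".
x, info = spla.cg(A, b, x0=x0, M=M, rtol=1e-8)
print("converged" if info == 0 else f"info={info}",
      "relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```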

  15. Three-Dimensional (3-D) Printing: A Cost-Effective Solution for Improving Global Accessibility to Prostheses.

    PubMed

    Silva, Kyle; Rand, Stephanie; Cancel, David; Chen, Yuxi; Kathirithamby, Rani; Stern, Michelle

    2015-12-01

    The lack of access to prostheses is a global problem, partially caused by the high cost associated with the current manufacturing process. Three-dimensional printing is gaining use in the medical field, and one such area is prosthetics. In addition to using cost-effective materials, this technology allows for rapid prototyping, making it an efficient solution for the development of affordable prostheses. If the rehabilitation medicine community embraces this novel technology, we can help alleviate the global disparity of access to prostheses. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  16. Protein sequence comparison based on K-string dictionary.

    PubMed

    Yu, Chenglong; He, Rong L; Yau, Stephen S-T

    2013-10-25

    The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees. © 2013.
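
    A toy sketch of the K-string dictionary idea (invented sequences): the dictionary contains only the K-strings that actually occur, which keeps the vector dimension far below 20^K, and an SVD then yields a denoised low-rank representation.

```python
# K-string dictionary frequency vectors plus SVD, on toy sequences.
import numpy as np

K = 2
seqs = ["MKVLAA", "MKVLGA", "AAGGMK", "GGAAKV"]     # toy protein sequences

# Dictionary of K-strings that actually occur (keeps the dimension low).
dictionary = sorted({s[i:i + K] for s in seqs for i in range(len(s) - K + 1)})
index = {kmer: j for j, kmer in enumerate(dictionary)}

X = np.zeros((len(seqs), len(dictionary)))
for i, s in enumerate(seqs):
    for j in range(len(s) - K + 1):
        X[i, index[s[j:j + K]]] += 1
X /= X.sum(axis=1, keepdims=True)                   # frequency vectors

U, sv, Vt = np.linalg.svd(X, full_matrices=False)
r = 2
X_denoised = U[:, :r] * sv[:r] @ Vt[:r]             # rank-r representation
print("dictionary size:", len(dictionary), "vs full 20**K =", 20**K)
print("rank-%d reconstruction error:" % r, np.linalg.norm(X - X_denoised))
```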

  17. Batch-mode Reinforcement Learning for improved hydro-environmental systems management

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.

    2010-12-01

    Despite the great progress made in the last decades, the optimal management of hydro-environmental systems still remains a very active and challenging research area. The combination of multiple, often conflicting interests, high non-linearities of the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto) optimal management policies preserving the original problem complexity. However, it suffers from a dual curse, which, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with the state and control dimension (Bellman's curse of dimensionality), so that SDP cannot be used with water systems where the state vector includes more than a few (2-3) units. (ii) An explicit model of each system's component is required (curse of modelling) to anticipate the effects of the system transitions, i.e. any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with the associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear distributed process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
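
    Batch-mode RL of this kind is often realized as fitted Q-iteration; the sketch below (a toy 1-D storage problem with invented dynamics, not the paper's water system) regresses the Q-function on a fixed batch of transitions, so no explicit system model is needed.

```python
# Fitted Q-iteration on a fixed batch of (s, a, r, s') transitions.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(3)
actions = np.array([0.0, 0.5, 1.0])                       # release fractions (toy)
S = rng.uniform(0, 1, 2000)                               # storage levels
A = rng.choice(actions, 2000)
S2 = np.clip(S - 0.3 * A + 0.1 * rng.standard_normal(2000), 0, 1)
R = -np.abs(S2 - 0.5)                                     # reward: keep storage near target

gamma, Q = 0.95, None
for _ in range(30):                                       # fitted Q-iteration loop
    if Q is None:
        target = R
    else:
        q_next = np.column_stack(
            [Q.predict(np.column_stack([S2, np.full_like(S2, a)])) for a in actions])
        target = R + gamma * q_next.max(axis=1)           # Bellman backup on the batch
    Q = ExtraTreesRegressor(n_estimators=50, random_state=0)
    Q.fit(np.column_stack([S, A]), target)

print("greedy action at s=0.9:",
      actions[np.argmax([Q.predict([[0.9, a]])[0] for a in actions])])
```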

  18. A collective phase in resource competition in a highly diverse ecosystem

    NASA Astrophysics Data System (ADS)

    Tikhonov, Mikhail; Monasson, Remi

    Recent technological advances uncovered that most habitats, including the human body, harbor hundreds of coexisting microbial "species". The problem of understanding such complex communities is currently at the forefront of medical and environmental sciences. A particularly intriguing question is whether the high-diversity regime (large number of species N) gives rise to qualitatively novel phenomena that could not be intuited from analysis of low-dimensional models (with few species). However, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to show that the large-N limit of a classic ecological model of resource competition introduced by MacArthur in 1969 can be solved analytically. Our results provide a tractable model where the implications of large dimensionality of eco-evolutionary problems can be investigated. In particular, we show that at high diversity, the MacArthur model exhibits a phase transition into a curious regime where the environment constructed by the community becomes a collective property, insensitive to external conditions such as the total resource influx supplied to the community. Supported by Harvard Center of Mathematical Sciences and Applications, and the Simons Foundation. This work was completed at the Aspen Center for Physics, supported by National Science Foundation Grant PHY-1066293.

  19. Towards effective interactive three-dimensional colour postprocessing

    NASA Technical Reports Server (NTRS)

    Bailey, B. C.; Hajjar, J. F.; Abel, J. F.

    1986-01-01

    Recommendations for the development of effective three-dimensional, graphical color postprocessing are made. First, the evaluation of large, complex numerical models demands that a postprocessor be highly interactive. A menu of available functions should be provided and these operations should be performed quickly so that a sense of continuity and spontaneity exists during the post-processing session. Second, an agenda for three-dimensional color postprocessing is proposed. A postprocessor must be versatile with respect to application and basic algorithms must be designed so that they are flexible. A complete selection of tools is necessary to allow arbitrary specification of views, extraction of qualitative information, and access to detailed quantitative and problem information. Finally, full use of advanced display hardware is necessary if interactivity is to be maximized and effective postprocessing of today's numerical simulations is to be achieved.

  20. Novel Driving Method for Two-Dimensional and Three-Dimensional Switchable Active Matrix Organic Light-Emitting Diode Displays for Emission and Programming Time Extension

    NASA Astrophysics Data System (ADS)

    In, Hai-Jung; Kwon, Oh-Kyong

    2012-03-01

    A novel driving method for two-dimensional (2D) and three-dimensional (3D) switchable active matrix organic light-emitting diode (AMOLED) displays is proposed to extend emission time and data programming time during 3D display operation. The proposed pixel consists of six thin-film transistors (TFTs) and two capacitors, and the aperture ratio of the pixel is 45.8% under 40-in. full-high-definition television condition. By increasing emission time and programming time, the flicker problem can be reduced and the lifetime of AMOLED displays can be extended owing to the decrease in emission current density. Simulation results show that the emission current error range from -0.4 to 1.6% is achieved when the threshold voltage variation of driving TFTs is in the range from -1.0 to 1.0 V, and the emission current error is 1.0% when the power line IR-drop is 2.0 V.

  1. An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification

    PubMed Central

    Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos

    2015-01-01

    This paper studies the multi-label classification problem in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015

  2. Exact Analytical Solutions for Elastodynamic Impact

    DTIC Science & Technology

    2015-11-30

    The exact analytical solutions are corroborated by derivation of exact discrete solutions from recursive equations for the impact problems. Subject terms: one-dimensional impact; elastic wave propagation; Laplace transform; floor function; discrete solutions.

  3. A boundary element alternating method for two-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Krishnamurthy, T.

    1992-01-01

    A boundary element alternating method, denoted herein as BEAM, is presented for two dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.

  4. Identification of the heat transfer coefficient in the two-dimensional model of binary alloy solidification

    NASA Astrophysics Data System (ADS)

    Hetmaniok, Edyta; Hristov, Jordan; Słota, Damian; Zielonka, Adam

    2017-05-01

    The paper presents a procedure for solving the inverse problem of binary alloy solidification in a two-dimensional space. This is a continuation of previous works of the authors investigating a similar problem in a one-dimensional domain. The goal of the problem is to identify the heat transfer coefficient on the boundary of the region and to reconstruct the temperature distribution inside the considered region in the case when temperature measurements at selected points of the alloy are known. The mathematical model of the problem is based on the heat conduction equation with a substitute thermal capacity and with the liquidus and solidus temperatures varying in dependence on the concentration of the alloy component. For describing this concentration the Scheil model is used. The investigated procedure also involves a parallelized Ant Colony Optimization algorithm applied for minimizing a functional expressing the error of the approximate solution.

  5. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
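
    A 1-D toy of the two ingredients, SVD filtering and Tikhonov regularization, applied to a first-kind Fredholm problem with an exponential, relaxation-like kernel; sizes and parameters are illustrative, not I2DUPEN itself.

```python
# SVD-filtered Tikhonov inversion of an ill-posed first-kind Fredholm problem.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.01, 3, 80)[:, None]          # measurement times
T = np.linspace(0.01, 2, 60)[None, :]          # relaxation-time grid
K = np.exp(-t / T)                             # Fredholm kernel K(t, T)
f_true = np.exp(-0.5 * ((T.ravel() - 1.0) / 0.1) ** 2)
g = K @ f_true + 1e-3 * rng.standard_normal(80)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = np.sum(s > 1e-2 * s[0])                    # SVD filter: drop noise-dominated modes
lam = 1e-3                                     # Tikhonov regularization parameter
filt = s[:k] / (s[:k] ** 2 + lam ** 2)         # filtered, regularized spectral inverse
f_hat = Vt[:k].T @ (filt * (U[:, :k].T @ g))
print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```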

  6. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed and preconditioners for gradient-based methods are derived from these approximate Hessians.

  7. Renormalization Group Studies and Monte Carlo Simulation for Quantum Spin Systems.

    NASA Astrophysics Data System (ADS)

    Pan, Ching-Yan

    We have discussed the extended application of various real-space renormalization group methods to quantum spin systems. At finite temperature, we extended both the reliability and the range of application of the decimation renormalization group (DRG) method for calculating the thermal and magnetic properties of low-dimensional quantum spin chains, for which we have proposed general models of the three-state Potts model and the general Heisenberg model. Some interesting finite-temperature behavior of the models has been obtained. We also proposed a general formula for the critical properties of the n-dimensional q-state Potts model by using a modified Migdal-Kadanoff approach, which is in very good agreement with all available results for general q and d. For high-spin systems, we have investigated Haldane's famous prediction by using a modified block renormalization group approach in the spin-1/2, spin-1 and spin-3/2 cases. Our result supports Haldane's prediction, and a novel property of the spin-1 Heisenberg antiferromagnet has been predicted. A modified quantum Monte Carlo simulation approach has been developed in this study, which we use to treat quantum interacting problems (we only work on quantum spin systems in this study) without the "negative sign problem". We also obtain with the Monte Carlo approach the numerical derivative directly. Furthermore, using this approach we have obtained the energy spectrum and the thermodynamic properties of the antiferromagnetic q-state Potts model, and have studied the q-color problem, with results which support Mattis' recent conjecture on the entropy of the n-dimensional q-state Potts antiferromagnet. We also find a general solution for the q-color problem in d dimensions.

  8. On the Measure and the Structure of the Free Boundary of the Lower Dimensional Obstacle Problem

    NASA Astrophysics Data System (ADS)

    Focardi, Matteo; Spadaro, Emanuele

    2018-04-01

    We provide a thorough description of the free boundary for the lower dimensional obstacle problem in R^{n+1} up to sets of null H^{n-1} measure. In particular, we prove (i) local finiteness of the (n-1)-dimensional Hausdorff measure of the free boundary, (ii) H^{n-1}-rectifiability of the free boundary, (iii) classification of the frequencies up to a set of Hausdorff dimension at most (n-2) and classification of the blow-ups at H^{n-1} almost every free boundary point.

  9. Inverse problems in the modeling of vibrations of flexible beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Powers, R. K.; Rosen, I. G.

    1987-01-01

    The formulation and solution of inverse problems for the estimation of parameters which describe damping and other dynamic properties in distributed models for the vibration of flexible structures is considered. Motivated by a slewing beam experiment, the identification of a nonlinear velocity dependent term which models air drag damping in the Euler-Bernoulli equation is investigated. Galerkin techniques are used to generate finite dimensional approximations. Convergence estimates and numerical results are given. The modeling of, and related inverse problems for the dynamics of a high pressure hose line feeding a gas thruster actuator at the tip of a cantilevered beam are then considered. Approximation and convergence are discussed and numerical results involving experimental data are presented.

  10. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.

  11. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  12. Microgravity isolation system design: A modern control synthesis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. A general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.

  13. Sparsity enabled cluster reduced-order models for control

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
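
    The first two CROM steps can be sketched in a few lines (synthetic snapshots, invented cluster count): k-means clustering of the snapshots followed by a row-stochastic transition matrix estimated from consecutive cluster labels, i.e., a data-driven Perron-Frobenius discretization.

```python
# Cluster-based reduced-order modeling, steps 1-2: cluster, then count transitions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
tt = np.linspace(0, 40, 2000)
X = np.column_stack([np.sin(tt), np.cos(tt), 0.3 * np.sin(2 * tt)])  # toy trajectory
X += 0.02 * rng.standard_normal(X.shape)

n_clusters = 8
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

# Row-stochastic transition matrix from consecutive snapshot labels.
P = np.zeros((n_clusters, n_clusters))
for i, j in zip(labels[:-1], labels[1:]):
    P[i, j] += 1
P /= P.sum(axis=1, keepdims=True)
print("transition matrix row sums:", P.sum(axis=1))   # all 1.0
```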

  14. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

    A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one dimensional problems and two dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.
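
    The pseudospectral machinery such solvers build on can be illustrated with a Fourier spectral derivative, computed in O(N log N) via the FFT and spectrally accurate for smooth periodic functions.

```python
# Fourier pseudospectral differentiation of a smooth periodic function.
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))                       # smooth periodic test function

ik = 1j * np.fft.fftfreq(N, d=1.0 / N)      # i * wavenumber for each Fourier mode
du = np.real(np.fft.ifft(ik * np.fft.fft(u)))

du_exact = np.cos(x) * np.exp(np.sin(x))
print("max error:", np.abs(du - du_exact).max())   # near machine precision for N = 64
```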

  15. Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)

    NASA Astrophysics Data System (ADS)

    Vrettas, M. D.; Cornford, D.; Opper, M.

    2013-12-01

    Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly they can be categorized in three main approaches: (a) sequential algorithms, (b) sampling methods and (c) variational algorithms which transform the density estimation problem to an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. Whilst the observations were informative enough to keep the posterior smoothing density close to Gaussian, the algorithm proved very effective on low dimensional systems (e.g. O(10)D). However for higher dimensional systems, the high computational demands make the algorithm prohibitively expensive. To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters which also allows for inference of (hyper-) parameters such as observational errors, parameters in the dynamical model and model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches our Bayesian variational method has the benefit of being posed as an optimisation problem; therefore its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.
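
    For reference, the finite-dimensional analogue of the divergence being minimized, the Kullback-Leibler divergence between two d-dimensional Gaussians, has the closed form

```latex
\mathrm{KL}\left(\mathcal{N}(\mu_0,\Sigma_0)\,\|\,\mathcal{N}(\mu_1,\Sigma_1)\right)
  = \frac{1}{2}\left[\operatorname{tr}\left(\Sigma_1^{-1}\Sigma_0\right)
  + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
  - d + \ln\frac{\det\Sigma_1}{\det\Sigma_0}\right].
```

    The path-space objective described in the abstract generalizes this expression to distributions over trajectories of the underlying stochastic differential equation.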

  16. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.

  17. Difference equation state approximations for nonlinear hereditary control problems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1984-01-01

    Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589

  18. GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA

    PubMed Central

    Zheng, Qi; Peng, Limin; He, Xuming

    2015-01-01

    Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
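
    A hedged scikit-learn sketch of L1-penalized quantile regression at several quantile levels; the paper's uniform tuning-parameter selector is only mimicked here by sharing one alpha across levels, and the data and sizes are invented.

```python
# L1-penalized quantile regression (pinball loss + L1) at several quantile levels.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(6)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 1.0]                     # sparse truth
y = X @ beta + rng.standard_normal(n) * (1 + 0.5 * np.abs(X[:, 0]))  # heteroscedastic noise

alpha = 0.05                                    # one penalty shared by all quantile levels
for tau in (0.25, 0.5, 0.75):
    model = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs")
    model.fit(X, y)
    print(f"tau={tau}: nonzero coefficients = {np.sum(np.abs(model.coef_) > 1e-8)}")
```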

  19. The problem of dimensional instability in airfoil models for cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Wigley, D. A.

    1982-01-01

    The problem of dimensional instability in airfoil models for cryogenic wind tunnels is discussed in terms of the various mechanisms that can be responsible. The interrelationship between metallurgical structure and possible dimensional instability in cryogenic usage is discussed for those steel alloys of most interest for wind tunnel model construction at this time. Other basic mechanisms responsible for setting up residual stress systems are discussed, together with ways in which their magnitude may be reduced by various elevated or low temperature thermal cycles. A standard specimen configuration is proposed for use in experimental investigations into the effects of machining, heat treatment, and other variables that influence the dimensional stability of the materials of interest. A brief classification of various materials in terms of their metallurgical structure and susceptability to dimensional instability is presented.

  20. Proceedings of the workshop on high resolution computed microtomography (CMT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, for example, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R and D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spilling and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  1. A discontinuous Galerkin method for nonlinear parabolic equations and gradient flow problems with interaction potentials

    NASA Astrophysics Data System (ADS)

    Sun, Zheng; Carrillo, José A.; Shu, Chi-Wang

    2018-01-01

    We consider a class of time-dependent second order partial differential equations governed by a decaying entropy. The solution usually corresponds to a density distribution, hence positivity (non-negativity) is expected. This class of problems covers important cases such as Fokker-Planck type equations and aggregation models, which have been studied intensively in the past decades. In this paper, we design a high order discontinuous Galerkin method for such problems. If the interaction potential is not involved, or the interaction is defined by a smooth kernel, our semi-discrete scheme admits an entropy inequality on the discrete level. Furthermore, by applying the positivity-preserving limiter, our fully discretized scheme produces non-negative solutions for all cases under a time step constraint. Our method also applies to two dimensional problems on Cartesian meshes. Numerical examples are given to confirm the high order accuracy for smooth test cases and to demonstrate the effectiveness for preserving long time asymptotics.
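
    The positivity-preserving step can be sketched independently of the DG machinery: a limiter of the Zhang-Shu type commonly used in such schemes linearly squeezes each cell's nodal values toward the (assumed nonnegative) cell average until no node is negative, without changing the average. A minimal numpy version, with the mean over nodes standing in for the quadrature cell average:

```python
# A simplified positivity-preserving (Zhang-Shu type) limiter sketch.
import numpy as np

def limit_positivity(node_vals):
    """node_vals: (n_cells, n_nodes) DG solution values at cell nodes."""
    cell_avg = node_vals.mean(axis=1, keepdims=True)   # assumed nonnegative
    m = node_vals.min(axis=1, keepdims=True)
    # theta = 1 where already nonnegative; otherwise shrink toward the average.
    with np.errstate(divide="ignore", invalid="ignore"):
        theta = np.where(m < 0, cell_avg / (cell_avg - m), 1.0)
    return cell_avg + theta * (node_vals - cell_avg)   # cell average is preserved

u = np.array([[0.2, -0.05, 0.4],      # one slightly negative node value
              [0.3,  0.10, 0.2]])
print(limit_positivity(u))            # nonnegative values, same cell averages
```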

  2. Rare events modeling with support vector machine: Application to forecasting large-amplitude geomagnetic substorms and extreme events in financial markets.

    NASA Astrophysics Data System (ADS)

    Gavrishchaka, V. V.; Ganguli, S. B.

    2001-12-01

    Reliable forecasting of rare events in a complex dynamical system is a challenging problem that is important for many practical applications. Due to the nature of rare events, the data sets available for construction of statistical and/or machine learning models are often very limited and incomplete. Therefore many widely used approaches, including such robust algorithms as neural networks, can easily become inadequate for rare event prediction. Moreover, in many practical cases models with high-dimensional inputs are required. This limits applications of the existing rare event modeling techniques (e.g., extreme value theory) that focus on univariate cases. These approaches are not easily extended to multivariate cases. Support vector machine (SVM) is a machine learning system that can provide an optimal generalization using very limited and incomplete training data sets and can efficiently handle high-dimensional data. These features may allow SVM to be used to model rare events in some applications. We have applied an SVM-based system to the problem of large-amplitude substorm prediction and extreme event forecasting in stock and currency exchange markets. Encouraging preliminary results will be presented and other possible applications of the system will be discussed.

  3. Dynamic Shape Reconstruction of Three-Dimensional Frame Structures Using the Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Gherlone, Marco; Cerracchio, Priscilla; Mattone, Massimiliano; Di Sciuva, Marco; Tessler, Alexander

    2011-01-01

    A robust and efficient computational method for reconstructing the three-dimensional displacement field of truss, beam, and frame structures, using measured surface-strain data, is presented. Known as shape sensing, this inverse problem has important implications for real-time actuation and control of smart structures, and for monitoring of structural integrity. The present formulation, based on the inverse Finite Element Method (iFEM), uses a least-squares variational principle involving strain measures of Timoshenko theory for stretching, torsion, bending, and transverse shear. Two inverse-frame finite elements are derived using interdependent interpolations whose interior degrees-of-freedom are condensed out at the element level. In addition, relationships between the order of kinematic-element interpolations and the number of required strain gauges are established. As an example problem, a thin-walled, circular cross-section cantilevered beam subjected to harmonic excitations in the presence of structural damping is modeled using iFEM, where, to simulate strain-gauge values and to provide reference displacements, a high-fidelity MSC/NASTRAN shell finite element model is used. Examples of low- and high-frequency dynamic motion are analyzed and the solution accuracy examined with respect to various levels of discretization and the number of strain gauges.

  4. Network-constrained group lasso for high-dimensional multinomial classification with application to cancer subtype prediction.

    PubMed

    Tian, Xinyu; Wang, Xuefeng; Chen, Jun

    2014-01-01

    The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to few predictors and does not take into account the relationships among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important for improving classification performance as well as biological interpretability. We proposed a multinomial logit model that is capable of addressing both the high dimensionality of the predictors and the underlying network information. Group lasso was used to induce model sparsity, and a network constraint was imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
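
    The Python sketch below shows the structure of such a proximal gradient iteration for a binary logistic simplification of the model (the paper treats the multinomial case): the logistic loss plus the quadratic network penalty beta'L beta form the smooth part, and the non-smooth group lasso term is handled by its proximal operator, group-wise soft-thresholding. The function names, fixed step size, and toy data are assumptions for illustration, not the authors' code.

        import numpy as np

        def group_soft_threshold(beta, groups, t):
            # proximal operator of t * sum_g ||beta_g||_2 (group lasso)
            out = beta.copy()
            for g in groups:
                idx = np.asarray(g)
                norm = np.linalg.norm(beta[idx])
                out[idx] = 0.0 if norm <= t else (1.0 - t / norm) * beta[idx]
            return out

        def prox_grad(X, y, L, groups, lam1=0.05, lam2=0.05, step=0.1, n_iter=1000):
            # smooth part: logistic loss + lam2 * beta' L beta (network penalty);
            # non-smooth part: lam1 * group lasso, handled by its prox.
            # fixed step size for brevity; 1/Lipschitz or line search is safer
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(n_iter):
                prob = 1.0 / (1.0 + np.exp(-X @ beta))
                grad = X.T @ (prob - y) / n + 2.0 * lam2 * (L @ beta)
                beta = group_soft_threshold(beta - step * grad, groups, step * lam1)
            return beta

        # toy usage: 6 features in 3 groups of 2, on a chain-graph network
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 6))
        y = (X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(200) > 0).astype(float)
        A = np.diag(np.ones(5), 1); A = A + A.T        # chain adjacency (6 nodes)
        L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
        beta = prox_grad(X, y, L, groups=[[0, 1], [2, 3], [4, 5]])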

  5. PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization

    PubMed Central

    Chen, Shuangqing; Wei, Lixin; Guan, Bing

    2018-01-01

    Particle swarm optimization (PSO) and the fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima owing to its lack of powerful global exploration capability, and the fireworks algorithm can fail to converge in some cases because of the relatively low local exploitation efficiency of its noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which modified operators of FWA are embedded into the solving process of PSO. In the iteration process, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation abilities of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast-converging optimization method for solving global optimization problems. PMID:29675036
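
    For reference, a bare-bones global-best PSO loop is sketched below in Python; PS-FW, as described above, augments such a loop with fireworks-style explosion and mutation operators and an abandonment/supplement mechanism, none of which are shown here. All parameter values are illustrative.

        import numpy as np

        def pso(f, dim, n_particles=40, iters=200, bounds=(-5.0, 5.0),
                w=0.7, c1=1.5, c2=1.5, seed=0):
            # plain global-best PSO; PS-FW layers fireworks operators on top
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.apply_along_axis(f, 1, x)
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # usage: 30-dimensional sphere function
        best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=30)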

  6. The Ensemble Kalman filter: a signal processing perspective

    NASA Astrophysics Data System (ADS)

    Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik

    2017-12-01

    The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions on the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever-increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge needed to get started with the EnKF. The algorithm is derived in a KF framework, without the often-encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are provided, as well as relations to sigma-point KFs and particle filters. The relevant EnKF literature is summarized in an extensive survey, and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and for high-dimensional nonlinear and non-Gaussian filtering in general.
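
    The analysis step of the stochastic ("perturbed observations") EnKF variant is compact enough to sketch in a few lines of Python. Note that the full state covariance is never formed; only ensemble anomalies and observation-space covariances appear, which is what makes the method scale. This textbook form is illustrative only; shapes and names are assumptions.

        import numpy as np

        def enkf_update(X, y, H, R, rng):
            # X: (n, N) state ensemble, y: (m,) observation,
            # H: (m, n) observation operator, R: (m, m) observation covariance
            n, N = X.shape
            A = X - X.mean(axis=1, keepdims=True)        # state anomalies
            Y = H @ X                                    # predicted observations
            B = Y - Y.mean(axis=1, keepdims=True)        # observation anomalies
            K = (A @ B.T) @ np.linalg.inv(B @ B.T + (N - 1) * R)   # Kalman gain
            Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
            return X + K @ (Yp - Y)                      # analysis ensemble

        # usage: 100-dimensional state observed at 5 points, 50 members
        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 50))
        Xa = enkf_update(X, np.ones(5), np.eye(5, 100), 0.1 * np.eye(5), rng)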

  7. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information are available at www.clawpack.org/geoclaw.

  8. On regularization and error estimates for the backward heat conduction problem with time-dependent thermal diffusivity factor

    NASA Astrophysics Data System (ADS)

    Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba

    2018-10-01

    This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed because high-frequency components of the data are amplified without bound. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
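
    The mechanism can be illustrated with a plain spectral-cutoff regularization in place of the Meyer wavelet construction (the wavelet projection acts, roughly, as a smooth frequency truncation). The Python sketch below inverts the constant-coefficient periodic backward heat problem by amplifying Fourier modes up to a cutoff that serves as the regularization parameter; all names and values are illustrative, and the simplification to constant diffusivity is an assumption.

        import numpy as np

        def backward_heat_cutoff(g, a, T, k_max):
            # recover u(x, 0) from g(x) = u(x, T) for u_t = a*u_xx, periodic
            # on [0, 1); modes with |k| > k_max are discarded, the cutoff
            # being the regularization parameter taming the exp(a*k^2*T) growth
            n = len(g)
            k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
            amp = np.exp(a * k**2 * T)
            amp[np.abs(k) > k_max] = 0.0
            return np.real(np.fft.ifft(amp * np.fft.fft(g)))

        # usage: smooth a Gaussian forward in time, then march it back
        n, a, T = 256, 1e-3, 0.1
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        u0 = np.exp(-200 * (x - 0.5)**2)
        k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
        g = np.real(np.fft.ifft(np.exp(-a * k**2 * T) * np.fft.fft(u0)))
        u0_rec = backward_heat_cutoff(g, a, T, k_max=150)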

  9. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  10. Estimation of Magnetic Field Growth and Construction of Adaptive Mesh in Corner Domain for the Magnetostatic Problem in Three-Dimensional Space

    NASA Astrophysics Data System (ADS)

    Perepelkin, Eugene; Tarelkin, Aleksandr

    2018-02-01

    A magnetostatics problem arises when searching for the distribution of the magnetic field generated by the magnet systems of many physics research facilities, e.g., accelerators. The domain in which the boundary-value problem is solved often has a piecewise smooth boundary. In this case, numerical calculations require consideration of the solution behavior in the corner domain. In this work we obtain an upper estimate of the magnetic field growth using an integral formulation of the magnetostatic problem, and, based on this estimate, we propose a method for condensing the differential mesh near the corner domain of the vacuum in three-dimensional space.

  11. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic conformation Tensor Model

    NASA Astrophysics Data System (ADS)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. The high Weissenberg number problem (HWNP) usually produces a lack of convergence in numerical algorithms. Even though the question of whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation is a crucial step, although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg numbers. The numerical results are compared with the numerical solution of the standard constitutive equation.

  12. Heat transfer in aeropropulsion systems

    NASA Astrophysics Data System (ADS)

    Simoneau, R. J.

    1985-07-01

    Aeropropulsion heat transfer is reviewed. A research methodology based on a growing synergism between computations and experiments is examined. The aeropropulsion heat transfer arena is identified as high Reynolds number forced convection in a highly disturbed environment subject to strong gradients, body forces, abrupt geometry changes and high three dimensionality - all in an unsteady flow field. Numerous examples based on heat transfer to the aircraft gas turbine blade are presented to illustrate the types of heat transfer problems which are generic to aeropropulsion systems. The research focus of the near future in aeropropulsion heat transfer is projected.

  13. Heat transfer in aeropropulsion systems

    NASA Technical Reports Server (NTRS)

    Simoneau, R. J.

    1985-01-01

    Aeropropulsion heat transfer is reviewed. A research methodology based on a growing synergism between computations and experiments is examined. The aeropropulsion heat transfer arena is identified as high Reynolds number forced convection in a highly disturbed environment subject to strong gradients, body forces, abrupt geometry changes and high three dimensionality - all in an unsteady flow field. Numerous examples based on heat transfer to the aircraft gas turbine blade are presented to illustrate the types of heat transfer problems which are generic to aeropropulsion systems. The research focus of the near future in aeropropulsion heat transfer is projected.

  14. High-order ENO schemes applied to two- and three-dimensional compressible flow

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley

    1991-01-01

    High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.

  15. Squeezing the Efimov effect

    NASA Astrophysics Data System (ADS)

    Sandoval, J. H.; Bellotti, F. F.; Yamashita, M. T.; Frederico, T.; Fedorov, D. V.; Jensen, A. S.; Zinner, N. T.

    2018-03-01

    The quantum mechanical three-body problem is a source of continuing interest due to its complexity and, not least, due to the presence of fascinating solvable cases. The prime example is the Efimov effect, where infinitely many bound states of identical bosons can arise at the threshold where the two-body problem has zero binding energy. An important aspect of the Efimov effect is the role of spatial dimensionality: it has been observed in three-dimensional systems, yet it is believed to be impossible in two dimensions. Using modern experimental techniques, it is possible to engineer trap geometry and thus address the intricate nature of quantum few-body physics as a function of dimensionality. Here we present a framework for studying the three-body problem as one (continuously) changes the dimensionality of the system all the way from three, through two, and down to a single dimension. This is done by considering the Efimov-favorable case of a mass-imbalanced system with an external confinement provided by a typical experimental setup with a (deformed) harmonic trap.

  16. Assessment of numerical techniques for unsteady flow calculations

    NASA Technical Reports Server (NTRS)

    Hsieh, Kwang-Chung

    1989-01-01

    The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems have become feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods screened by the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagation, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the numerical approaches tested in this paper, the best-performing combination is the Runge-Kutta method for time integration with sixth-order central differencing for spatial discretization.
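
    The core of such a Fourier (von Neumann) error analysis is a few lines of algebra: apply each difference stencil to exp(ikx) and read off the complex "modified wavenumber" it actually propagates; the real part governs dispersion (phase) error and the imaginary part governs dissipation. A Python sketch for the two schemes compared above (second-order upwind and sixth-order central); the probe point kh = 1 is an arbitrary illustrative choice.

        import numpy as np

        th = np.linspace(1e-6, np.pi, 400)          # scaled wavenumber k*h

        # 2nd-order upwind, (3u_j - 4u_{j-1} + u_{j-2}) / (2h), applied to
        # exp(i*k*x): complex modified wavenumber, hence dissipative
        kup = -1j * (3 - 4 * np.exp(-1j * th) + np.exp(-2j * th)) / 2
        # 6th-order central difference: purely real, i.e., non-dissipative
        kc6 = (45 * np.sin(th) - 9 * np.sin(2 * th) + np.sin(3 * th)) / 30

        print("upwind   max dissipation:", np.abs(kup.imag).max())   # ~4 at kh=pi
        print("upwind   dispersion err at kh=1:",
              abs(np.interp(1.0, th, kup.real) - 1.0))
        print("central6 dispersion err at kh=1:",
              abs(np.interp(1.0, th, kc6) - 1.0))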

  17. A Two-Dimensional Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2002-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics. The LBS has previously been extended to treat lossy materials for one-dimensional problems. It is a classical leapfrog algorithm, but combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to include the Perfectly Matched Layer boundary condition with no added storage or complexity. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, and it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems. This paper extends the lossy treatment of the LBS to the two-dimensional case. Results are presented for point source radiation problems, and the FDTD algorithm is chosen as a convenient reference for comparison.

  18. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    This research program deals with the application of high-performance computing methods to the analysis of complete jet engines. We initiated this program by applying the two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from the structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on stability, accuracy, and MPP computational efficiency are reported.

  19. Two Dimensional Finite Element Analysis for the Effect of a Pressure Wave in the Human Brain

    NASA Astrophysics Data System (ADS)

    Ponce L., Ernesto; Ponce S., Daniel

    2008-11-01

    Brain injury in people of all ages is a serious, worldwide health problem, with consequences as varied as attention or memory deficits, difficulties in problem-solving, aggressive social behavior, and neurodegenerative diseases such as Alzheimer's and Parkinson's. Brain injuries can be the result of a direct impact, but also of pressure waves and direct impulses. The aim of this work is to develop a predictive method to calculate the stress generated in the human brain by pressure waves such as high-power sounds. The finite element method is used, combined with elastic wave theory. The predicted stress levels are compared with the resistance of the arterioles that pervade the brain. The problem is focused on Chilean mining, where accidents are caused by detonations and high sound levels. There has been no formal medical investigation, but such pressure waves could produce human brain damage.

  20. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  1. Solution of Radiation and Convection Heat-Transfer Problems

    NASA Technical Reports Server (NTRS)

    Oneill, R. F.

    1986-01-01

    Computer program P5399B was developed to accommodate a variety of fin-type heat conduction applications involving radiative or convective boundary conditions with an additionally imposed local heat flux. The program also accommodates a significant variety of one-dimensional heat-transfer problems not corresponding specifically to fin-type applications. It easily accommodates all but a few specialized one-dimensional heat-transfer analyses, as well as many two-dimensional analyses.

  2. 2D and 3D Traveling Salesman Problem

    ERIC Educational Resources Information Center

    Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt

    2011-01-01

    When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…

  3. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data.

    PubMed

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; van de Kamp, Thomas; dos Santos Rolo, Tomy; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer

    2015-03-09

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.

  4. Gender approaches to evolutionary multi-objective optimization using pre-selection of criteria

    NASA Astrophysics Data System (ADS)

    Kowalczuk, Zdzisław; Białaszewski, Tomasz

    2018-01-01

    A novel idea for performing evolutionary computations (ECs) to solve high-dimensional multi-objective optimization (MOO) problems is proposed. Following the general idea of evolution, it is proposed that information about gender be used to distinguish between various groups of objectives and to identify the (aggregate) nature of optimality of individuals (solutions). This identification is drawn from the fitness of individuals and applied during parental crossover in the process of evolutionary multi-objective optimization (EMOO). The article introduces the principles of the genetic-gender approach (GGA) and the virtual gender approach (VGA), which are not just evolutionary techniques, but constitute a completely new rule (philosophy) for use in solving MOO tasks. The proposed approaches are validated against principal representatives of the state-of-the-art EMOO algorithms on benchmark problems in the light of recognized EC performance criteria. The research shows the superiority of the gender approach in terms of effectiveness, reliability, transparency, intelligibility, and MOO problem simplification, resulting in the great usefulness and practicability of GGA and VGA. Moreover, an important feature of GGA and VGA is that they alleviate the 'curse' of dimensionality typical of many engineering designs.

  5. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data

    DOE PAGES

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; ...

    2015-01-01

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.

  6. Wavenumber-extended high-order oscillation control finite volume schemes for multi-dimensional aeroacoustic computations

    NASA Astrophysics Data System (ADS)

    Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong

    2008-04-01

    A new numerical method toward accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the developed scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting the physical distribution of flow variables more accurately in multiple space dimensions. The wavenumber-extended optimization procedure for the finite volume approach, based on the conservative requirement, is newly proposed for accuracy enhancement, which is required to capture the acoustic portion of the solution in the smooth region. Furthermore, a new distinguishing mechanism between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in the continuous region by restricting the application of MLP according to the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations is executed: spherical wave propagation, nonlinear wave propagation, the shock tube problem, and a vortex preservation test problem. Also, through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparison with previous numerical and experimental results.

  7. High precise measurement of tiny angle dimensional holes for the unit-holes of the LAMOST Focal Plane Plate

    NASA Astrophysics Data System (ADS)

    Zhou, Zengxiang; Jin, Yi; Zhai, Chao; Xing, Xiaozheng

    2008-07-01

    In the LAMOST project, the unit-holes on the Focal Plane Plate are the final installation locations of the optical fiber positioning system, and their precision influences the observation efficiency of LAMOST. Owing to the unique requirements, the unit-holes consist of a series of small-angle tapered holes whose cone angles lie between 16' and 2.5°, and the measurement of these angles must be accurate to better than 3'. All the unit-holes point to the virtual sphere center of the Focal Plane Plate, so the angular deviation of a unit-hole axis is converted to the distance from the virtual sphere center to that axis; this is a better way to evaluate the dimensional-angle tolerance. In the measuring process, common CMM (coordinate measuring machine) methods are not suitable for such small-angle holes. An alternative way to solve this problem is to insert a measuring stick carrying a target ball into a unit-hole, measure the ball center at a low position, partially withdraw the stick, and measure the center again at a high position; the two center points then define the unit-hole axis, from which the angular deviation is calculated. On the other hand, this method introduces extra errors from the measuring stick and the target ball. To analyze this question, a series of experiments is reported in this paper, which testify that the influence of the measuring implement is small and that increasing the distance between the low and high measurement positions enhances the accuracy of the dimensional-angle measurement.

  8. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  9. Superalgebraically convergent smoothly windowed lattice sums for doubly periodic Green functions in three-dimensional space

    PubMed Central

    Bruno, Oscar P.; Turc, Catalin; Venakides, Stephanos

    2016-01-01

    This work, part I in a two-part series, presents: (i) a simple and highly efficient algorithm for evaluation of quasi-periodic Green functions, as well as (ii) an associated boundary-integral equation method for the numerical solution of problems of scattering of waves by doubly periodic arrays of scatterers in three-dimensional space. Except for certain ‘Wood frequencies’ at which the quasi-periodic Green function ceases to exist, the proposed approach, which is based on smooth windowing functions, gives rise to tapered lattice sums which converge superalgebraically fast to the Green function—that is, faster than any power of the number of terms used. This is in sharp contrast to the extremely slow convergence exhibited by the lattice sums in the absence of smooth windowing. (The Wood-frequency problem is treated in part II.) This paper establishes rigorously the superalgebraic convergence of the windowed lattice sums. A variety of numerical results demonstrate the practical efficiency of the proposed approach. PMID:27493573

  10. BBC users manual. [In LRLTRAN for CDC 7600 and STAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ltterst, R. F.; Sutcliffe, W. G.; Warshaw, S. I.

    1977-11-01

    BBC is a two-dimensional, multifluid Eulerian hydro-radiation code based on KRAKEN and some subsequent ideas. It was developed in the explosion group in T-Division as a basic two-dimensional code to which various types of physics can be added. For this reason BBC is a FORTRAN (LRLTRAN) code. In order to gain the 2-to-1 to 4-to-1 speed advantage of the STACKLIB software on the 7600's and to be able to execute at high speed on the STAR, the vector extensions of LRLTRAN (STARTRAN) are used throughout the code. Either cylindrical- or slab-type problems can be run on BBC. The grid is bounded by a rectangular band of boundary zones. The interfaces between the regular and boundary zones can be selected to be either rigid or nonrigid. The setup for BBC problems is described in the KEG Manual and LEG Manual. The difference equations are described in BBC Hydrodynamics. Basic input and output for BBC are described.

  11. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a platform widely adopted by the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.
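
    Motion planning lends itself to compact illustrations. Below is a minimal 2D rapidly-exploring random tree (RRT), one of the textbook sampling-based planners such software typically covers; this sketch is unrelated to the actual course code, and all names and parameter values are made up for illustration.

        import numpy as np

        def rrt(start, goal, is_free, step=0.05, iters=5000, goal_tol=0.05, seed=0):
            # minimal RRT on the unit square; is_free(p) -> bool reports
            # whether configuration p is collision-free
            rng = np.random.default_rng(seed)
            nodes, parent = [np.asarray(start, float)], [-1]
            for _ in range(iters):
                q = rng.random(2)                       # random free-space sample
                i = min(range(len(nodes)),
                        key=lambda j: np.linalg.norm(nodes[j] - q))
                d = q - nodes[i]
                new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
                if not is_free(new):
                    continue
                nodes.append(new); parent.append(i)
                if np.linalg.norm(new - goal) < goal_tol:   # goal region reached
                    path, j = [], len(nodes) - 1
                    while j != -1:
                        path.append(nodes[j]); j = parent[j]
                    return path[::-1]
            return None

        # usage: avoid a disc obstacle centred at (0.5, 0.5)
        free = lambda p: np.linalg.norm(p - np.array([0.5, 0.5])) > 0.2
        path = rrt(np.array([0.1, 0.1]), np.array([0.9, 0.9]), free)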

  12. Multi-dimensional high order essentially non-oscillatory finite difference methods in generalized coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1992-01-01

    The nonlinear stability of compact schemes for shock calculations is investigated. In recent years, compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problems of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation stable (1D) or maximum norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to the SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, in preparation for a full combustion simulation; and direct numerical simulation of compressible sheared turbulence.

  13. Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects.

    PubMed

    Sidorenko, Pavel; Kfir, Ofer; Shechtman, Yoav; Fleischer, Avner; Eldar, Yonina C; Segev, Mordechai; Cohen, Oren

    2015-09-08

    Phase-retrieval problems of one-dimensional (1D) signals are known to suffer from ambiguity that hampers their recovery from measurements of their Fourier magnitude, even when their support (a region that confines the signal) is known. Here we demonstrate sparsity-based coherent diffraction imaging of 1D objects using extreme-ultraviolet radiation produced from high harmonic generation. Using sparsity as prior information removes the ambiguity in many cases and enhances the resolution beyond the physical limit of the microscope. Our approach may be used in a variety of problems, such as diagnostics of defects in microelectronic chips. Importantly, this is the first demonstration of sparsity-based 1D phase retrieval from actual experiments, hence it paves the way for greatly improving the performance of Fourier-based measurement systems where 1D signals are inherent, such as diagnostics of ultrashort laser pulses, deciphering the complex time-dependent response functions (for example, time-dependent permittivity and permeability) from spectral measurements and vice versa.

  14. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.
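
    Correlation analysis of this kind is typically built on metrics like the modal assurance criterion (MAC). The Python sketch below computes a MAC matrix between two sets of mode shapes; note that the paper's metrics additionally handle spatial eigenvector aliasing and coalescent modes, which this plain MAC does not, so the function is a generic illustration only.

        import numpy as np

        def mac(phi_a, phi_b):
            # Modal Assurance Criterion between mode-shape sets
            # phi_a: (n_dof, m_a), phi_b: (n_dof, m_b); returns an
            # (m_a, m_b) matrix of values in [0, 1], ~1 for matched pairs
            num = np.abs(phi_a.conj().T @ phi_b) ** 2
            den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                           np.sum(np.abs(phi_b) ** 2, axis=0))
            return num / den

        # identical shapes correlate perfectly, orthogonal ones do not
        phi = np.linalg.qr(np.random.default_rng(0).standard_normal((10, 3)))[0]
        print(np.round(mac(phi, phi), 3))   # ~identity matrix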

  15. Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...

    2016-08-04

    This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.

  16. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously reducing the feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing intra-class compactness and inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.

  17. Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Guan, Weipeng; Li, Simin; Wu, Yuxiang

    2018-04-01

    To improve the precision of indoor positioning and realize three-dimensional positioning, a reversed indoor positioning system based on visible light communication (VLC) using a genetic algorithm (GA) is proposed. In order to solve the problem of interference between signal sources, CDMA modulation is used: each light-emitting diode (LED) in the system broadcasts a unique identity (ID) code. The receiver picks up the mixed signal from all LED reference points and, by the orthogonality of the spreading codes in CDMA modulation, recovers the ID and intensity-attenuation information of each LED. According to the positioning principle of received signal strength (RSS), the coordinates of the receiver can then be determined. Due to system noise and imperfections of the devices used in the system, the distances between the receiver and the transmitters deviate from their real values, resulting in positioning error. By introducing error-correction factors into the global parallel search of the genetic algorithm, the coordinates of the receiver in three-dimensional space can be determined precisely. Both simulation and experimental results show that, in practical application scenarios, the proposed positioning system can realize a high-precision positioning service.

  18. Some problems of the calculation of three-dimensional boundary layer flows on general configurations

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.

    1973-01-01

    An accurate solution of the three-dimensional boundary layer equations over general configurations, such as those encountered in aircraft and space shuttle design, requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method, together with the turbulence models for the Reynolds stresses, are examined. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.

  19. Progress with multigrid schemes for hypersonic flow problems

    NASA Technical Reports Server (NTRS)

    Radespiel, R.; Swanson, R. C.

    1991-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 x 10^6 and Mach numbers up to 25.

  20. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
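
    As a hedged illustration of the problem setting, using the widely available graphical lasso rather than the paper's Sparse Column-wise Inverse Operator, scikit-learn can recover a sparse precision matrix from samples; the data and all thresholds below are illustrative.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV
        from sklearn.datasets import make_sparse_spd_matrix

        # ground-truth sparse precision matrix and Gaussian samples from it
        rng = np.random.default_rng(0)
        prec = make_sparse_spd_matrix(20, alpha=0.9, random_state=0)
        X = rng.multivariate_normal(np.zeros(20), np.linalg.inv(prec), size=200)

        # graphical lasso stand-in; penalty chosen by cross validation
        model = GraphicalLassoCV().fit(X)
        est_prec = model.precision_
        print("recovered nonzeros:", np.sum(np.abs(est_prec) > 1e-3))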

  1. Robust L1-norm two-dimensional linear discriminant analysis.

    PubMed

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

    In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transferred to a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA.

  2. Implementation of pattern generation algorithm in forming Gilmore and Gomory model for two dimensional cutting stock problem

    NASA Astrophysics Data System (ADS)

    Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.

    2018-01-01

    The two-dimensional cutting stock problem (CSP) is the problem of determining cutting patterns for a set of stock sheets of standard length and width so as to fulfill the demand for items. The cutting patterns are determined so as to minimize stock usage. This research implements a pattern generation algorithm to formulate the Gilmore and Gomory model of the two-dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage are used in the second stage. The Branch and Cut method is used to obtain the optimal solution. The results yield many pattern combinations when the optimal first-stage cutting patterns are combined with the second stage.
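
    To make the pattern generation step concrete, the Python sketch below enumerates all maximal one-dimensional cutting patterns for a single stock length; such patterns form the columns of Gilmore and Gomory-type formulations, and the two-stage 2D model applies the same idea first to strips (widths) and then within each strip (lengths). The function name and the toy data are illustrative, not the paper's algorithm.

        from itertools import product

        def generate_patterns(stock_len, item_lens):
            # maximal patterns: count vectors a with sum(a[i]*item_lens[i])
            # <= stock_len such that no further item fits in the leftover
            caps = [stock_len // l for l in item_lens]
            patterns = []
            for a in product(*(range(c + 1) for c in caps)):
                used = sum(x * l for x, l in zip(a, item_lens))
                leftover = stock_len - used
                if 0 <= leftover < min(item_lens):   # feasible and maximal
                    patterns.append(a)
            return patterns

        # usage: stock of length 10, items of length 3 and 4
        print(generate_patterns(10, [3, 4]))   # [(0, 2), (2, 1), (3, 0)]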

  3. User's manual for three dimensional FDTD version B code for scattering from frequency-dependent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file, a discussion of radar cross section computations, a discussion of some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.

  4. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations on examples with exact solutions, as well as with an additional condition contaminated by random errors, are presented. They show the high efficiency of the conjugate gradient method for the numerical solution of the inverse Cauchy problem.
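
    For reference, the conjugate gradient iteration used in such algorithms takes only a few lines for a symmetric positive definite system; in the iterative-regularization setting, the iteration count itself acts as the regularization parameter and is chosen from the noise level (e.g., by a discrepancy principle). A generic numpy sketch, not the authors' code:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            # solve A x = b for symmetric positive definite A; stopping
            # early regularizes when b is noisy
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # usage: small SPD test system
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        print(conjugate_gradient(A, np.array([1.0, 2.0])))  # ~[0.0909, 0.6364]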

  5. A decentralized square root information filter/smoother

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. The developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The SRIF orthogonal transformation methodology employed here has been described by Bierman (1977).

  6. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimension higher than 5. The main purpose of this paper is to identify the bottlenecks and to improve upon this poor scalability. To do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly distributed simplex sampling.

  7. An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.

    PubMed

    Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart

    2010-01-01

    This paper presents a three dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is entirely based on Ampere's law. The proposed algorithm has been detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.

  8. Using Three-Dimensional Printing to Fabricate a Tubing Connector for Dilation and Evacuation.

    PubMed

    Stitely, Michael L; Paterson, Helen

    2016-02-01

    This is a proof-of-concept study showing that simple instrumentation problems encountered in surgery can be solved by fabricating devices with a three-dimensional printer. The device used in the study is a simple connector fashioned to join two segments of suction tubing in a surgical procedure for which no commercial product was available through our usual suppliers in New Zealand. A cylindrical tubing connector was designed using three-dimensional design software and fabricated on the Makerbot Replicator 2X three-dimensional printer. The connector was used in 15 second-trimester dilation and evacuation procedures, with data forms completed by the primary operating surgeon. Descriptive statistics were used, with the expectation that the device would function as intended in all cases. The three-dimensional printed tubing connector functioned as intended in all 15 instances. Commercially available three-dimensional printing technology can be used to overcome simple instrumentation problems encountered during gynecologic surgical procedures.

  9. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    PubMed

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems in which decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the decision-making results.

  10. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators

    PubMed Central

    Yin, Kedong; Yang, Benshuo

    2018-01-01

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems in which decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the decision-making results. PMID:29364849

  11. Detection of Epistasis for Flowering Time Using Bayesian Multilocus Estimation in a Barley MAGIC Population

    PubMed Central

    Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J.

    2018-01-01

    Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping, it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and requires the application of dimensionality reduction techniques. Flowering Time (FT) is a complex trait known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using a Bayesian multilocus model. The MAGIC barley population was generated by intercrossing eight parental lines and thus offers greater genetic diversity for detecting higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach for detecting high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with candidate genes already reported in barley and closely related species for the FT trait. PMID:29254994
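
    As a rough illustration of the screening step named above, the sketch below ranks markers by their absolute marginal correlation with the phenotype and keeps the top n/log(n) of them, one common SIS cutoff. The toy data, function name, and threshold are illustrative assumptions, not the paper's pipeline.

        # Minimal Sure Independence Screening (SIS) sketch: keep the markers
        # most correlated with the phenotype before any multilocus modelling.
        import numpy as np

        def sis_screen(X, y, d=None):
            """X: (n, p) genotype matrix, y: (n,) phenotype. Returns indices
            of the top-d markers by absolute marginal correlation."""
            n, p = X.shape
            if d is None:
                d = int(n / np.log(n))          # a common SIS model size
            Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)
            yc = (y - y.mean()) / (y.std() + 1e-12)
            corr = np.abs(Xc.T @ yc) / n        # marginal correlations
            return np.sort(np.argsort(corr)[::-1][:d])

        # Toy usage: 100 lines, 5000 markers, signal in the first 3 columns.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 5000))
        y = X[:, 0] - 2 * X[:, 1] + X[:, 2] + rng.normal(size=100)
        print(sis_screen(X, y)[:10])

    In the two-way case, products of the retained columns would then enter the multilocus model as candidate epistatic terms.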

  12. The Use of Signal Dimensionality for Automatic QC of Seismic Array Data

    NASA Astrophysics Data System (ADS)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.; Draganov, D.; Maceira, M.; Gomez, M.

    2014-12-01

    A significant problem in seismic array analysis is the inclusion of bad sensor channels in the beam-forming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis; instead, the dimensionality of the node traffic is monitored for anomalies that could represent malware, cyber-attacks, or other problems. The technique relies upon the subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. We examine the signal dimension in a similar way to the method addressing node traffic anomalies in large computer systems. We explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for identifying bad array elements. We show preliminary results applied to arrays in Kazakhstan (Makanchi) and Argentina (Malargue).
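
    The abstract describes monitoring subspace dimensionality rather than individual channels; the hypothetical sketch below captures that flavour by measuring how much of each channel's energy lies in the dominant principal-component subspace of the whole array, so a dead or noise-only channel stands out. It illustrates the general idea, not the authors' algorithm.

        # Flag a malfunctioning array channel by its poor fit to the
        # dominant principal-component subspace of the full array.
        import numpy as np

        def channel_subspace_fit(data, n_components=3):
            """data: (n_channels, n_samples). Returns, per channel, the
            fraction of its energy captured by the leading components."""
            X = data - data.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(X, full_matrices=False)
            basis = U[:, :n_components]          # dominant channel-space basis
            proj = basis @ (basis.T @ X)         # projection onto subspace
            return np.sum(proj**2, axis=1) / (np.sum(X**2, axis=1) + 1e-12)

        # Toy array: 9 coherent channels plus one pure-noise channel.
        rng = np.random.default_rng(1)
        common = rng.normal(size=(1, 2000))
        good = common * rng.uniform(0.5, 1.5, (9, 1)) + 0.1 * rng.normal(size=(9, 2000))
        bad = rng.normal(size=(1, 2000))
        fit = channel_subspace_fit(np.vstack([good, bad]))
        print(np.argmin(fit))                    # index 9: the bad channel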

  13. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of Stokes' theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form that reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented within an AMR framework, which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  14. Gene selection for microarray data classification via subspace learning and manifold regularization.

    PubMed

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, large amounts of genomic data have been generated. Classification of these microarray data is a challenging task because gene expression data typically contain thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold-regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.
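
    To make the formulation concrete, here is a much-simplified sketch under stated assumptions: W projects samples onto a low-dimensional target (taken here as leading principal components), a kNN-graph Laplacian preserves local structure, and an l2,1 penalty (handled by iteratively reweighted least squares) zeroes whole rows of W so that row norms rank the genes. The target, graph construction, and parameter values are illustrative choices, not the paper's exact objective or update rules.

        # Manifold-regularized subspace learning for gene selection (sketch).
        import numpy as np

        def knn_laplacian(X, k=5):
            """Unnormalized Laplacian of a symmetrized kNN graph on samples."""
            d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
            S = np.zeros_like(d2)
            for i in range(X.shape[0]):
                nbrs = np.argsort(d2[i])[1:k + 1]
                S[i, nbrs] = np.exp(-d2[i, nbrs] / d2[i, nbrs].mean())
            S = np.maximum(S, S.T)
            return np.diag(S.sum(1)) - S

        def select_genes(X, dim=2, alpha=1.0, beta=0.1, iters=30):
            Xc = X - X.mean(0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            Y = Xc @ Vt[:dim].T                  # low-dimensional target
            A = Xc.T @ Xc + alpha * Xc.T @ knn_laplacian(Xc) @ Xc
            p = X.shape[1]
            W = np.linalg.solve(A + beta * np.eye(p), Xc.T @ Y)
            for _ in range(iters):               # IRLS for the l2,1 penalty
                D = np.diag(1.0 / (2 * np.linalg.norm(W, axis=1) + 1e-8))
                W = np.linalg.solve(A + beta * D, Xc.T @ Y)
            return np.argsort(np.linalg.norm(W, axis=1))[::-1]

        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 200))
        X[:, :5] += np.outer(np.linspace(-2, 2, 40), np.ones(5))  # informative
        print(select_genes(X)[:8])               # top-ranked genes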

  15. A discontinuous Galerkin method for two-dimensional PDE models of Asian options

    NASA Astrophysics Data System (ADS)

    Hozman, J.; Tichý, T.; Cvejnová, D.

    2016-06-01

    In our previous research we focused on the problem of plain vanilla option valuation using a discontinuous Galerkin method for the numerical PDE solution. Here we extend a simple one-dimensional problem into a two-dimensional one and design a scheme for the valuation of Asian options, i.e., options whose payoff depends on the average of prices collected over a prespecified horizon. The algorithm combines the advantages of finite element methods with piecewise polynomial, generally discontinuous, approximations. Finally, an illustrative example using DAX option market data is provided.
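
    The paper values the option through a PDE solved by a discontinuous Galerkin method; as a quick, independent illustration of the average-dependent payoff itself, the sketch below prices an arithmetic-average Asian call by plain Monte Carlo under Black-Scholes dynamics. All parameter values are made up for the example.

        # Monte Carlo price of an arithmetic-average Asian call.
        import numpy as np

        def asian_call_mc(S0, K, r, sigma, T, n_steps=250, n_paths=100_000, seed=0):
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            z = rng.standard_normal((n_paths, n_steps))
            # Exact log-Euler steps of geometric Brownian motion.
            logs = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
            S = S0 * np.exp(logs)
            payoff = np.maximum(S.mean(axis=1) - K, 0.0)   # average-price payoff
            return np.exp(-r * T) * payoff.mean()

        print(asian_call_mc(S0=100.0, K=100.0, r=0.03, sigma=0.2, T=1.0))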

  16. Pressure distribution under flexible polishing tools. II - Cylindrical (conical) optics

    NASA Astrophysics Data System (ADS)

    Mehta, Pravin K.

    1990-10-01

    A previously developed eigenvalue model is extended to determine the polishing pressure distribution exerted by rectangular tools with unequal stiffness in two directions on cylindrical optics. Tool misfit is divided into two simplified one-dimensional problems and one simplified two-dimensional problem. Tools with nonuniform cross sections are treated with a new one-dimensional eigenvalue algorithm, permitting evaluation of tool designs whose edge is more flexible than the interior; this keeps edge pressure variations within acceptable limits. Finite element modeling is employed to establish upper bounds on the pressure changes in the two-dimensional misfit element. Paraboloids and hyperboloids from the NASA AXAF system are treated with the AXAFPOD software for this method, and the results are verified with NASTRAN finite element analyses. The maximum deviation from the one-dimensional azimuthal pressure variation is predicted to be 10 percent for paraboloids and 20 percent for hyperboloids.

  17. Preparation of a Three-Dimensional Full Thickness Skin Equivalent.

    PubMed

    Reuter, Christian; Walles, Heike; Groeber, Florian

    2017-01-01

    In vitro test systems are a promising alternative to animal models. Because they use human cells in a three-dimensional arrangement that allows cell-cell and cell-matrix interactions, these models may be more predictive of the human situation than animal models or two-dimensional cell culture systems. Especially in dermatological research, skin models such as epidermal or full-thickness skin equivalents (FTSE) are used for different applications. Although epidermal models provide highly standardized conditions for risk assessment, FTSE facilitate cellular crosstalk between the dermal and epidermal layers and thus can be used as more complex models for the investigation of processes such as wound healing, skin development, or infectious diseases. In this chapter, we describe the generation and culture of an FTSE based on a collagen type I matrix and provide troubleshooting tips for commonly encountered technical problems.

  18. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  19. The quantum n-body problem in dimension d ⩾ n – 1: ground state

    NASA Astrophysics Data System (ADS)

    Miller, Willard, Jr.; Turbiner, Alexander V.; Escobar-Ruiz, M. A.

    2018-05-01

    We employ generalized Euler coordinates for the n-body system in d-dimensional space, consisting of the centre-of-mass vector, the relative (mutual) mass-independent distances r_ij, and angles as the remaining coordinates. We prove that the kinetic energy of the quantum n-body problem for d ⩾ n − 1 can be written as the sum of three terms: (i) the kinetic energy of the centre of mass, (ii) a second-order differential operator that depends on the relative distances alone, and (iii) a differential operator that annihilates any angle-independent function. The second operator has a large reflection symmetry group and, in suitable variables, is an algebraic operator that can be written in terms of generators of a hidden algebra; it can thus be interpreted as the Hamiltonian of a quantum Euler–Arnold top in a constant magnetic field. It is conjectured that for any n the similarity-transformed operator is the Laplace–Beltrami operator plus an (effective) potential, so that it describes a quantum particle in a curved space whose dimension equals the number of relative distances. This has been verified in particular cases. After de-quantization, the similarity-transformed operator becomes the Hamiltonian of a classical top with a variable tensor of inertia in an external potential. This approach reduces the dn-dimensional spectral problem to a spectral problem in the relative distances alone whenever the eigenfunctions depend only on those distances. We prove that the ground-state function of the n-body problem depends on the relative distances alone.

  20. A 3D finite-difference BiCG iterative solver with the Fourier-Jacobi preconditioner for the anisotropic EIT/EEG forward problem.

    PubMed

    Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D

    2014-01-01

    The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media such as the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques that seem well suited to solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned Conjugate Gradient- (CG-) type iterative method for the treatment of the discrete model. The numerical scheme involves only the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and well suited to efficient parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
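
    The sketch below shows the generic pattern only — a sparse nonsymmetric solve with a CG-family Krylov method and a user-supplied preconditioner — using a plain Jacobi (diagonal) preconditioner rather than the authors' Fourier-Jacobi construction. The toy operator is an assumption for illustration.

        # BiCG with a Jacobi preconditioner supplied as a LinearOperator.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, bicg

        n = 200
        main = 2.0 + np.linspace(0.5, 1.5, n)    # toy variable-coefficient stencil
        A = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1], format="csr")
        b = np.ones(n)

        inv_diag = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)  # Jacobi

        x, info = bicg(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged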

  1. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly and simultaneously. It is usually very expensive to obtain enough measurements; with limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor owing to its powerful graphics capabilities; (2) the forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator; (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for a pseudo-sine absorption problem, a two-cube problem, and a two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
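
    Orthogonal matching pursuit, the sparse-recovery step named above, is short enough to spell out; the greedy loop below picks the column most correlated with the residual and refits by least squares on the growing support. The underdetermined test system is made up for the example.

        # Minimal orthogonal matching pursuit (OMP).
        import numpy as np

        def omp(A, y, k):
            """Recover a k-sparse x with y ~ A @ x; columns of A normalized."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # best new column
                support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef         # refit residual
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(3)
        m, n, k = 50, 200, 5
        A = rng.normal(size=(m, n))
        A /= np.linalg.norm(A, axis=0)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        print(np.max(np.abs(omp(A, A @ x_true, k) - x_true)))  # ~0 on success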

  2. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite-dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite-dimensional dynamical systems. A finite-dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
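
    As a toy illustration of the episode-identification idea only (not the authors' scheme or data), the sketch below applies statsmodels' Hodrick-Prescott filter to a synthetic TAC-like trace and thresholds the smooth trend to mark an episode; the series, smoothing weight, and threshold are all assumptions.

        # HP-filter a synthetic TAC trace and flag the drinking episode.
        import numpy as np
        from statsmodels.tsa.filters.hp_filter import hpfilter

        t = np.linspace(0, 24, 24 * 12)                 # 24 h at 5-min samples
        episode = np.exp(-0.5 * ((t - 8) / 1.5) ** 2)   # one episode, peak at 8 h
        tac = episode + 0.05 * np.random.default_rng(4).normal(size=t.size)

        cycle, trend = hpfilter(tac, lamb=1600)         # returns (cycle, trend)
        mask = trend > 0.1 * trend.max()                # crude episode mask
        print(f"episode spans t = {t[mask].min():.1f} to {t[mask].max():.1f} h")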

  3. Action-minimizing solutions of the one-dimensional N-body problem

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Zhang, Shiqing

    2018-05-01

    We supplement the following result of C. Marchal on the Newtonian N-body problem: A path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d≥2. The focus of this paper is on the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths joining two given configurations and having all the time the same order is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution and (iii) when the particles at two endpoints have different order, although there must be collisions for any path, we can prove that there are at most N! - 1 collisions for any action-minimizing path.

  4. Multicategory Composite Least Squares Classifiers

    PubMed Central

    Park, Seo Young; Liu, Yufeng; Liu, Dacheng; Scholl, Paul

    2010-01-01

    Classification is a very useful statistical tool for information extraction. In particular, multicategory classification is commonly seen in various applications. Although binary classification problems are heavily studied, extensions to the multicategory case are much less so. In view of the increased complexity and volume of modern statistical problems, it is desirable to have multicategory classifiers that can handle problems with high dimensions and a large number of classes. Moreover, it is necessary for multicategory classifiers to have sound theoretical properties. In the literature, there exist several different versions of simultaneous multicategory Support Vector Machines (SVMs). However, the computation of the SVM can be difficult for large-scale problems, especially for problems with a large number of classes. Furthermore, the SVM cannot produce class probability estimates directly. In this article, we propose a novel, efficient multicategory composite least squares (CLS) classifier, which utilizes a new composite squared loss function. The proposed CLS classifier has several important merits: efficient computation for problems with a large number of classes, asymptotic consistency, the ability to handle high-dimensional data, and simple conditional class probability estimation. Our simulated and real examples demonstrate the competitive performance of the proposed approach. PMID:21218128
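
    The composite squared loss itself is specific to the paper; as a generic point of comparison, the sketch below fits the simplest least-squares multicategory baseline — one ridge-regularized linear score per class against one-hot labels, with scores crudely renormalized into class-probability estimates. Names and data are illustrative.

        # One-vs-all regularized least squares with rough probabilities.
        import numpy as np

        def fit_ls_classifier(X, y, n_classes, lam=1e-2):
            Y = np.eye(n_classes)[y]                       # one-hot targets
            Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
            p = Xb.shape[1]
            return np.linalg.solve(Xb.T @ Xb + lam * np.eye(p), Xb.T @ Y)

        def predict_proba(W, X):
            Xb = np.hstack([X, np.ones((X.shape[0], 1))])
            s = np.clip(Xb @ W, 1e-9, None)                # clip, then renormalize
            return s / s.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(c, 0.7, size=(50, 2)) for c in (-2, 0, 2)])
        y = np.repeat([0, 1, 2], 50)
        W = fit_ls_classifier(X, y, 3)
        print((predict_proba(W, X).argmax(1) == y).mean())  # training accuracy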

  5. Mutual proximity graphs for improved reachability in music recommendation.

    PubMed

    Flexer, Arthur; Stevens, Jeff

    2018-01-01

    This paper is concerned with the impact of hubness, a general problem of machine learning in high-dimensional spaces, on a real-world music recommendation system based on visualisation of a k-nearest neighbour (knn) graph. Due to a problem of measuring distances in high dimensions, hub objects are recommended over and over again while anti-hubs are nonexistent in recommendation lists, resulting in poor reachability of the music catalogue. We present mutual proximity graphs, which are an alternative to knn and mutual knn graphs, and are able to avoid hub vertices having abnormally high connectivity. We show that mutual proximity graphs yield much better graph connectivity resulting in improved reachability compared to knn graphs, mutual knn graphs and mutual knn graphs enhanced with minimum spanning trees, while simultaneously reducing the negative effects of hubness.

  6. Mutual proximity graphs for improved reachability in music recommendation

    PubMed Central

    Flexer, Arthur; Stevens, Jeff

    2018-01-01

    This paper is concerned with the impact of hubness, a general problem of machine learning in high-dimensional spaces, on a real-world music recommendation system based on visualisation of a k-nearest neighbour (knn) graph. Due to a problem of measuring distances in high dimensions, hub objects are recommended over and over again while anti-hubs are nonexistent in recommendation lists, resulting in poor reachability of the music catalogue. We present mutual proximity graphs, which are an alternative to knn and mutual knn graphs, and are able to avoid hub vertices having abnormally high connectivity. We show that mutual proximity graphs yield much better graph connectivity resulting in improved reachability compared to knn graphs, mutual knn graphs and mutual knn graphs enhanced with minimum spanning trees, while simultaneously reducing the negative effects of hubness. PMID:29348779
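
    The rescaling behind these graphs can be stated compactly: each distance d(x, y) is replaced by the probability that a random third object lies farther from both x and y, under a per-object normal model of the distance distribution. The sketch below implements one standard variant of that idea on made-up high-dimensional points; the details (the normal model, the independence assumption) are illustrative.

        # Mutual proximity: turn raw distances into "farther from both" odds.
        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.stats import norm

        def mutual_proximity(D):
            """D: symmetric distance matrix -> MP similarity matrix."""
            mu = D.mean(axis=1, keepdims=True)
            sd = D.std(axis=1, keepdims=True) + 1e-12
            p = norm.sf(D, loc=mu, scale=sd)   # P(d(x, Z) > d(x, y)) per row
            return p * p.T                     # joint probability (independence)

        rng = np.random.default_rng(6)
        X = rng.normal(size=(100, 50))         # high-dimensional points
        MP = mutual_proximity(cdist(X, X))
        k = 5                                  # build the kNN graph on MP, not raw
        knn = np.argsort(-MP, axis=1)[:, 1:k + 1]  # distances, to curb hub vertices
        print(knn.shape)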

  7. Attention-Based Recurrent Temporal Restricted Boltzmann Machine for Radar High Resolution Range Profile Sequence Recognition.

    PubMed

    Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang

    2018-05-16

    High Resolution Range Profile (HRRP) recognition has attracted considerable attention in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods fail to model high-dimensional sequential data efficiently and are not robust to noise. To deal with these problems, a novel stochastic neural network model named the Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. The RTRBM is utilized to extract discriminative features, and the attention mechanism is adopted to select the major features. The RTRBM models high-dimensional HRRP sequences efficiently because it can extract the temporal and spatial correlation between adjacent HRRPs. Attention mechanisms are used in sequential recognition tasks, including machine translation and relation classification, to make a model concentrate on the features that matter most for recognition. The combination of the RTRBM and the attention mechanism therefore allows our model to extract more internally related features and to choose the important parts of the extracted features. Additionally, the model performs well on noise-corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that the proposed model outperforms other traditional methods, indicating that the ARTRBM extracts, selects, and utilizes the correlation information between adjacent HRRPs effectively and is suitable for high-dimensional or noise-corrupted data.
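
    The RTRBM itself is involved, but the attention step is generic; the sketch below shows scaled dot-product attention pooling over a sequence of per-HRRP feature vectors, re-weighting time steps by relevance before classification. Shapes, the scoring vector, and the random "features" are assumptions for illustration.

        # Attention pooling over sequence features.
        import numpy as np

        def softmax(z):
            z = z - z.max()
            e = np.exp(z)
            return e / e.sum()

        def attention_pool(H, w):
            """H: (T, d) sequence features; w: (d,) scoring vector.
            Returns the attention-weighted summary and the weights."""
            scores = H @ w / np.sqrt(H.shape[1])   # relevance per time step
            alpha = softmax(scores)                # weights sum to 1
            return alpha @ H, alpha

        rng = np.random.default_rng(7)
        H = rng.normal(size=(32, 64))              # e.g. 32 HRRPs, 64-dim features
        w = rng.normal(size=64)
        pooled, alpha = attention_pool(H, w)
        print(pooled.shape, alpha.argmax())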

  8. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. The NASCAP/LEO, a three dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  9. Aerodynamic Design of Axial-flow Compressors. Volume III

    NASA Technical Reports Server (NTRS)

    Johnson, Irving A; Bullock, Robert O; Graham, Robert W; Costilow, Eleanor L; Huppert, Merle C; Benser, William A; Herzig, Howard Z; Hansen, Arthur G; Jackson, Robert J; Yohner, Peggy L

    1956-01-01

    Chapters XI to XIII concern the unsteady compressor operation arising when compressor blade elements stall. The fields of compressor stall and surge are reviewed in Chapters XI and XII, respectively. The part-speed operating problem in high-pressure-ratio multistage axial-flow compressors is analyzed in Chapter XIII. Chapter XIV summarizes design methods and theories that extend beyond the simplified two-dimensional approach used previously in the report. Chapter XV extends this three-dimensional treatment by summarizing the literature on secondary flows and boundary-layer effects. Charts for determining the effects of errors in design parameters and experimental measurements on compressor performance are given in Chapter XVI. Chapter XVII reviews existing literature on compressor and turbine matching techniques.

  10. BI-sparsity pursuit for robust subspace recovery

    DOE PAGES

    Bian, Xiao; Krim, Hamid

    2015-09-01

    Here, the success of sparse models in computer vision and machine learning in many real-world applications may be attributed, in large part, to the fact that many high-dimensional data are distributed in a union of low-dimensional subspaces. The underlying structure may, however, be adversely affected by sparse errors, inducing additional complexity in recovering it. In this paper, we propose a bi-sparse model as a framework to investigate and analyze this problem and, as a result, provide a novel algorithm to recover the union of subspaces in the presence of sparse corruptions. We additionally demonstrate the effectiveness of our method by experiments on real-world vision data.

  11. Conformal mapping technique for two-dimensional porous media and jet impingement heat transfer

    NASA Technical Reports Server (NTRS)

    Siegel, R.

    1974-01-01

    Transpiration cooling and liquid metals both provide highly effective heat transfer. Using Darcy's law in porous media and the inviscid approximation for liquid metals, the local fluid velocity in these flows equals the gradient of a potential. The energy equation and flow region are simplified when transformed into potential plane coordinates. In these coordinates, the present problems are reduced to heat conduction solutions which are mapped into the physical geometry. Results are obtained for a porous region with simultaneously prescribed surface temperature and heat flux, heat transfer in a two-dimensional porous bed, and heat transfer for two liquid metal slot jets impinging on a heated plate.

  12. Conformal mapping technique for two-dimensional porous media and jet impingement heat transfer

    NASA Technical Reports Server (NTRS)

    Siegel, R.

    1973-01-01

    Transpiration cooling and liquid metals both provide highly effective heat transfer. Using Darcy's law in porous media, and the inviscid approximation for liquid metals, the local fluid velocity in these flows equals the gradient of a potential. The energy equation and flow region are simplified when transformed into potential plane coordinates. In these coordinates the present problems are reduced to heat conduction solutions which are mapped into the physical geometry. Results are obtained for a porous region with simultaneously prescribed surface temperature and heat flux, heat transfer in a two-dimensional porous bed, and heat transfer for two liquid metal slot jets impinging on a heated plate.

  13. Thomas-Fermi model for a bulk self-gravitating stellar object in two dimensions

    NASA Astrophysics Data System (ADS)

    De, Sanchari; Chakrabarty, Somenath

    2015-09-01

    In this article we solve a hypothetical problem related to the stability and gross properties of two-dimensional self-gravitating stellar objects using the Thomas-Fermi model. The formalism presented here is an extension of the standard three-dimensional problem discussed in Statistical Physics, Part I, by Landau and Lifshitz. Further, the formalism presented in this article may serve as a class problem for postgraduate students of physics or be assigned as part of a dissertation project.

  14. Global and blowup solutions of a mixed problem with nonlinear boundary conditions for a one-dimensional semilinear wave equation

    NASA Astrophysics Data System (ADS)

    Kharibegashvili, S. S.; Jokhadze, O. M.

    2014-04-01

    A mixed problem for a one-dimensional semilinear wave equation with nonlinear boundary conditions is considered. Conditions of this type occur, for example, in the description of the longitudinal oscillations of a spring fastened elastically at one end, but not in accordance with Hooke's linear law. Uniqueness and existence questions are investigated for global and blowup solutions to this problem, in particular how they depend on the nature of the nonlinearities involved in the equation and the boundary conditions. Bibliography: 14 titles.

  15. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper presents the algorithms implemented in the CHAR code for use in the reconstruction of EFT-1 flight data and in future testing activities. Implementation nuances are discussed, alternative hybrid methods permitted by the implementation are described, and results are presented for a number of one-dimensional and multi-dimensional problems.
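
    The generic setting (not CHAR's algorithms) is easy to demonstrate: build the linear map from a surface heat-flux history to an interior temperature trace by running a simple explicit finite-difference model once per unit flux pulse, then invert it with Tikhonov regularization to tame the ill-posedness. Grid sizes, material constants, and the regularization weight below are all illustrative assumptions.

        # 1D inverse heat conduction via a sensitivity matrix + Tikhonov.
        import numpy as np

        nx, nt, alpha, dx, dt = 20, 200, 1e-4, 5e-3, 0.1
        r = alpha * dt / dx**2                      # stability number (< 0.5)
        sensor = 5                                  # interior sensor node

        def forward(q):
            """Explicit conduction: flux q(t) at x=0, insulated far end.
            Returns the temperature history at the sensor node."""
            T, out = np.zeros(nx), np.zeros(nt)
            for k in range(nt):
                Tn = T.copy()
                Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
                Tn[0] = T[0] + 2 * r * (T[1] - T[0]) + 2 * r * dx * q[k]
                Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1])
                T = Tn
                out[k] = T[sensor]
            return out

        # Column-by-column sensitivity matrix from unit flux pulses.
        A = np.column_stack([forward(np.eye(nt)[k]) for k in range(nt)])
        q_true = np.exp(-0.5 * ((np.arange(nt) - 80) / 20.0) ** 2)
        y = forward(q_true) + 1e-4 * np.random.default_rng(8).normal(size=nt)

        lam = 1e-6                                  # Tikhonov weight
        q_est = np.linalg.solve(A.T @ A + lam * np.eye(nt), A.T @ y)
        print(float(np.abs(q_est - q_true).max()))  # reconstruction error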

  16. Modelling of Heat and Moisture Loss Through NBC Ensembles

    DTIC Science & Technology

    1991-11-01

    the heat and moisture transport through various NBC clothing ensembles. The analysis involves simplifying the three-dimensional physical problem of clothing on a person to that of a one-dimensional problem of flow through parallel layers of clothing and air. Body temperatures are calculated based on prescribed work rates, ambient conditions and clothing properties. Sweat response and respiration rates are estimated based on empirical data.

  17. Learner Attrition in an Advanced Vocational Online Training: The Role of Computer Attitude, Computer Anxiety, and Online Learning Experience

    ERIC Educational Resources Information Center

    Stiller, Klaus D.; Köster, Annamaria

    2016-01-01

    Online learning has gained importance in education over the last 20 years, but the well-known problem of high dropout rates still persists. According to the multi-dimensional learning tasks model, the cognitive (over)load of learners is essential to attrition when dealing with five challenges (e.g. technology, user interface) of an online training…

  18. A Two-Dimensional Model of Teacher Retention and Mobility: Classroom Teachers and Their University Partners Take a Closer Look at a Vexing Problem

    ERIC Educational Resources Information Center

    Swars, Susan L.; Meyers, Barbara; Mays, Lydia C.; Lack, Brian

    2009-01-01

    This mixed-methods study is a teacher-initiated, collaborative inquiry involving a professional development school (PDS) and a university. The investigation focused on teachers' perceptions of teacher retention and mobility at their PDS. Participants were 134 teachers at a high-needs elementary school with data sources including surveys,…

  19. THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1984-07-01

    The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas-cooled reactor core. The supplemental treatment covers one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as that used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis, with the capability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to increase pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.

  20. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. Computational results on benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
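
    The column-generation core of such algorithms fits in a few lines; the sketch below solves the classic single-stock-length variant (the 1DMSSCSP in the paper generalizes this to several stock sizes): a restricted master LP prices the current patterns, and a knapsack pricing problem proposes a new pattern while one with negative reduced cost exists. Data and tolerances are illustrative.

        # Gilmore-Gomory column generation for 1D cutting stock (single length).
        import numpy as np
        from scipy.optimize import linprog

        stock = 100
        sizes = np.array([45, 36, 31, 14])
        demand = np.array([97, 610, 395, 211])

        def knapsack_pattern(values, sizes, capacity):
            """Unbounded knapsack by DP; returns best pattern and its value."""
            best = np.zeros(capacity + 1)
            choice = -np.ones(capacity + 1, dtype=int)
            for c in range(1, capacity + 1):
                for i, (v, s) in enumerate(zip(values, sizes)):
                    if s <= c and best[c - s] + v > best[c]:
                        best[c], choice[c] = best[c - s] + v, i
            pattern, c = np.zeros(len(sizes), dtype=int), capacity
            while c > 0 and choice[c] >= 0:
                pattern[choice[c]] += 1
                c -= sizes[choice[c]]
            return pattern, best[capacity]

        patterns = [np.eye(len(sizes), dtype=int)[i] * (stock // s)
                    for i, s in enumerate(sizes)]   # trivial starting columns
        while True:
            A = np.array(patterns).T
            res = linprog(np.ones(len(patterns)), A_ub=-A, b_ub=-demand,
                          method="highs")
            duals = -res.ineqlin.marginals          # shadow prices of demand
            pattern, value = knapsack_pattern(duals, sizes, stock)
            if value <= 1 + 1e-9:                   # no negative reduced cost
                break
            patterns.append(pattern)
        print(f"LP bound: {res.fun:.2f} rolls, {len(patterns)} patterns")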
