Paterson, Gillian; Power, Kevin; Yellowlees, Alex; Park, Katy; Taylor, Louise
2007-01-01
Research examining the cognitive and behavioural determinants of anorexia is currently lacking. This has implications for the success of treatment programmes for anorexics, particularly given the high reported dropout rates. This study examines two-dimensional self-esteem (comprising self-competence and self-liking) and social problem-solving in an anorexic population, and predicts that self-esteem will mediate the relationship between problem-solving and eating pathology by facilitating or inhibiting the use of effective or faulty strategies. Twenty-seven anorexic inpatients and 62 controls completed measures of social problem-solving and two-dimensional self-esteem. Anorexics scored significantly higher than the non-clinical group on measures of eating pathology, negative problem orientation, impulsivity/carelessness and avoidance, and significantly lower on positive problem orientation and both self-esteem components. In the clinical sample, disordered eating correlated significantly with self-competence, negative problem orientation and avoidance. Associations between disordered eating and problem-solving lost significance when self-esteem was controlled, in the clinical group only. Self-competence was the main predictor of eating pathology in the clinical sample, while self-liking, impulsivity and negative and positive problem orientation were the main predictors in the non-clinical sample. The findings support the two-dimensional self-esteem theory, with only self-competence being relevant to the anorexic population, and support the hypothesis that self-esteem mediates the relationship between disordered eating and problem-solving ability in an anorexic sample. Treatment implications include support for programmes that emphasise increasing self-appraisal and self-efficacy. Copyright © 2006 John Wiley & Sons, Ltd and Eating Disorders Association.
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, and for similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (and therefore the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution of these small-scale structures, the numerical viscosity inherent in the scheme must be small enough that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU-time efficient at reaching the same resolution, for both the one-dimensional and two-dimensional test problems.
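As an illustration of the reconstruction step at the heart of such schemes, the following minimal Python sketch implements the classic fifth-order WENO reconstruction of Jiang and Shu for a scalar quantity on a uniform grid. It covers only the interface-value reconstruction (no flux splitting, time integration, or Navier-Stokes terms), and all names and parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO reconstruction (Jiang & Shu) of the left-biased
    interface value v_{i+1/2} from five cell averages ordered
    [v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}]."""
    v0, v1, v2, v3, v4 = v

    # Candidate third-order reconstructions on the three substencils.
    p0 = (2*v0 - 7*v1 + 11*v2) / 6.0
    p1 = (-v1 + 5*v2 + 2*v3) / 6.0
    p2 = (2*v2 + 5*v3 - v4) / 6.0

    # Smoothness indicators: large on substencils crossing a discontinuity.
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2

    # Nonlinear weights built from the linear (optimal) weights 1/10, 6/10, 3/10.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# On smooth data the scheme is fifth-order accurate; near a discontinuity
# the weights shift toward the smooth substencils, avoiding oscillations.
x = np.linspace(0.0, 1.0, 5)
print(weno5_reconstruct(np.sin(2*np.pi*x)))
```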
Joint principal trend analysis for longitudinal high-dimensional data.
Zhang, Yuping; Ouyang, Zhengqing
2018-06-01
We consider a research scenario motivated by integrating multiple sources of information for better knowledge discovery in diverse dynamic biological processes. Given two longitudinal high-dimensional datasets for a group of subjects, we want to extract shared latent trends and identify relevant features. To solve this problem, we present a new statistical method named joint principal trend analysis (JPTA). We demonstrate the utility of JPTA through simulations and applications to gene expression data of the mammalian cell cycle and longitudinal transcriptional profiling data in response to influenza viral infections. © 2017, The International Biometric Society.
High-Fidelity Real-Time Simulation on Deployed Platforms
2010-08-26
We illustrate our approach with three examples: a two-dimensional Helmholtz acoustics "horn" problem; a three-dimensional transient heat conduction "Swiss Cheese" problem; and a three-dimensional unsteady incompressible Navier-Stokes low-Reynolds-number problem.
NASA Astrophysics Data System (ADS)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, classic Monte Carlo (MC) often remains the preferred method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
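The inverse regression ingredient can be illustrated with sliced inverse regression (SIR), the standard estimator of the SDR subspace. The sketch below is a minimal Python implementation under simplifying assumptions (Gaussian inputs, a single latent direction, equal-size slices); it is not the authors' IRUQ code.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Estimate SDR directions by sliced inverse regression (Li, 1991)."""
    n, p = X.shape
    mu = X.mean(axis=0)
    # Whiten X with the inverse square root of its sample covariance.
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals**-0.5) @ evecs.T
    Z = (X - mu) @ W
    # Slice observations by the response and average Z within each slice.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original coordinates.
    _, vecs = np.linalg.eigh(M)
    return W @ vecs[:, -n_dirs:]

# Toy QoI that varies only along one latent direction of a 20-dim input.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
beta = np.zeros(20); beta[0] = 1.0
y = np.sin(X @ beta) + 0.01 * rng.normal(size=2000)
print(sir_directions(X, y).ravel()[:3])  # aligns with e_1 up to sign and scale
```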
Renormalization Group Studies and Monte Carlo Simulation for Quantum Spin Systems.
NASA Astrophysics Data System (ADS)
Pan, Ching-Yan
We have discussed the extended application of various real-space renormalization group methods to quantum spin systems. At finite temperature, we extended both the reliability and the range of application of the decimation renormalization group (DRG) method for calculating the thermal and magnetic properties of low-dimensional quantum spin chains, for which we have proposed general models of the three-state Potts model and the general Heisenberg model. Some interesting finite-temperature behavior of these models has been obtained. We also propose a general formula for the critical properties of the n-dimensional q-state Potts model using a modified Migdal-Kadanoff approach, which is in very good agreement with all available results for general q and d. For high-spin systems, we have investigated Haldane's famous prediction using a modified block renormalization group approach in the spin-1/2, spin-1, and spin-3/2 cases. Our results support Haldane's prediction, and a novel property of the spin-1 Heisenberg antiferromagnet has been predicted. A modified quantum Monte Carlo simulation approach has been developed in this study, which we use to treat quantum interacting problems (here restricted to quantum spin systems) without the "negative sign problem". The Monte Carlo approach also yields numerical derivatives directly. Furthermore, using this approach we have obtained the energy spectrum and the thermodynamic properties of the antiferromagnetic q-state Potts model, and have studied the q-color problem, with results that support Mattis' recent conjecture for the entropy of the n-dimensional q-state Potts antiferromagnet. We also find a general solution for the q-color problem in d dimensions.
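For context, the flavor of such simulations can be conveyed by a classical Metropolis sampler for the two-dimensional ferromagnetic q-state Potts model. This is only a plain classical illustration; it does not reproduce the thesis's modified quantum Monte Carlo method or its treatment of the sign problem, and all parameter values are invented for the example.

```python
import numpy as np

def metropolis_potts(L=16, q=3, beta=1.2, sweeps=200, seed=0):
    """Metropolis sampling of the 2D ferromagnetic q-state Potts model
    H = -J * sum_<ij> delta(s_i, s_j), with J = 1 and periodic boundaries."""
    rng = np.random.default_rng(seed)
    s = rng.integers(q, size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        nbrs = [s[(i+1) % L, j], s[(i-1) % L, j],
                s[i, (j+1) % L], s[i, (j-1) % L]]
        # Energy change: bonds satisfied before minus bonds satisfied after.
        dE = sum(n == s[i, j] for n in nbrs) - sum(n == new for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new
    return s

spins = metropolis_potts()
print("fraction in majority state:", np.bincount(spins.ravel()).max() / spins.size)
```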
The quantum-field renormalization group in the problem of a growing phase boundary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonov, N.V.; Vasil`ev, A.N.
1995-09-01
Within the quantum-field renormalization-group approach we examine the stochastic equation discussed by S.I. Pavlik in describing a randomly growing phase boundary. We show that, in contrast to Pavlik's assertion, the model is not multiplicatively renormalizable and that its consistent renormalization-group analysis requires introducing an infinite number of counterterms and the respective coupling constants ("charges"). An explicit calculation in the one-loop approximation shows that a two-dimensional surface of renormalization-group points exists in the infinite-dimensional charge space. If the surface contains an infrared stability region, the problem allows for scaling with nonuniversal critical dimensionalities of the height of the phase boundary and of time, Δ_h and Δ_t, which satisfy the exact relationship 2Δ_h = Δ_t + d, where d is the dimensionality of the phase boundary.
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: sparsity addresses the small-sample-size problem through dimensionality reduction, while grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of fused lasso and image texture in biomass estimation of tropical forests.
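A minimal sketch of the fused lasso idea, written here with the generic convex solver cvxpy rather than the authors' implementation: the l1 term selects bands, and the penalty on differences of adjacent coefficients encourages spectrally contiguous groups. The data, penalty weights, and dimensions are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def fused_lasso(X, y, lam1=0.1, lam2=0.1):
    """Fused lasso regression (Tibshirani et al., 2005): sparsity in the
    coefficients plus smoothness between spectrally adjacent bands."""
    n, p = X.shape
    w = cp.Variable(p)
    obj = (cp.sum_squares(y - X @ w) / (2 * n)
           + lam1 * cp.norm1(w)              # sparsity: band selection
           + lam2 * cp.norm1(cp.diff(w)))    # fusion: grouping of adjacent bands
    cp.Problem(cp.Minimize(obj)).solve()
    return w.value

# Toy "spectra": 100 samples, 50 bands, signal carried by bands 10-14.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))
w_true = np.zeros(50); w_true[10:15] = 1.0
y = X @ w_true + 0.1 * rng.normal(size=100)
print(np.round(fused_lasso(X, y)[8:17], 2))  # a contiguous block of nonzeros
```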
NASA Astrophysics Data System (ADS)
Kuzmiak, Vladimir; Maradudin, Alexei A.
1998-09-01
We study the distribution of the electromagnetic field of the eigenmodes and corresponding group velocities associated with the photonic band structures of two-dimensional periodic systems consisting of an array of infinitely long parallel metallic rods whose intersections with a perpendicular plane form a simple square lattice. We consider both nondissipative and lossy metallic components characterized by a complex frequency-dependent dielectric function. Our analysis is based on the calculation of the complex photonic band structure obtained by using a modified plane-wave method that transforms the problem of solving Maxwell's equations into the problem of diagonalizing an equivalent non-Hermitian matrix. In order to investigate the nature and the symmetry properties of the eigenvectors, which significantly affect the optical properties of the photonic lattices, we evaluate the associated field distribution at the high symmetry points and along high symmetry directions in the two-dimensional first Brillouin zone of the periodic system. By considering both lossless and lossy metallic rods we study the effect of damping on the spatial distribution of the eigenvectors. Then we use the Hellmann-Feynman theorem and the eigenvectors and eigenfrequencies obtained from a photonic band-structure calculation based on a standard plane-wave approach applied to the nondissipative system to calculate the components of the group velocities associated with individual bands as functions of the wave vector in the first Brillouin zone. From the group velocity of each eigenmode the flow of energy is examined. The results obtained indicate a strong directional dependence of the group velocity, and confirm the experimental observation that a photonic crystal is a potentially efficient tool in controlling photon propagation.
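The Hellmann-Feynman step admits a compact illustration: for a Hermitian matrix H(k) with eigenpair (E_n, u_n), the group velocity follows from dE_n/dk = <u_n| dH/dk |u_n>, without differentiating the eigenvalues numerically. The sketch below uses a toy two-band Hamiltonian invented for the example, not the plane-wave photonic matrix of the paper.

```python
import numpy as np

def H(k, m=0.5):
    """Toy Hermitian Bloch-type Hamiltonian of a 1D two-band model."""
    return np.array([[m + np.cos(k), np.sin(k) - 0.3j],
                     [np.sin(k) + 0.3j, -(m + np.cos(k))]])

def dH_dk(k):
    """Analytic derivative of H with respect to k."""
    return np.array([[-np.sin(k), np.cos(k)],
                     [np.cos(k), np.sin(k)]], dtype=complex)

k = 0.7
E, U = np.linalg.eigh(H(k))
# Hellmann-Feynman: dE_n/dk = <u_n| dH/dk |u_n>, i.e. diag(U^dag dH U).
v_hf = np.real(np.einsum('in,ij,jn->n', U.conj(), dH_dk(k), U))
# Finite-difference check on the eigenvalues.
h = 1e-6
v_fd = (np.linalg.eigh(H(k + h))[0] - np.linalg.eigh(H(k - h))[0]) / (2 * h)
print(v_hf, v_fd)  # the two estimates agree
```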
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
A Selective Review of Group Selection in High-Dimensional Models
Huang, Jian; Breheny, Patrick; Ma, Shuangge
2013-01-01
Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome-wide association studies. We also highlight some issues that require further study. PMID:24174707
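A concrete building block shared by many of the reviewed algorithms is the proximal operator of the group lasso penalty, which thresholds whole groups at once. A minimal sketch with illustrative data, not taken from any cited paper:

```python
import numpy as np

def prox_group_lasso(beta, groups, t):
    """Proximal operator of the group lasso penalty t * sum_g ||beta_g||_2:
    blockwise soft-thresholding, the basic step inside proximal-gradient
    algorithms for group selection."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > t:
            out[g] = (1 - t / norm) * beta[g]  # shrink the whole group
        # else: the whole group is set to zero (group dropped)
    return out

beta = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
print(prox_group_lasso(beta, groups, t=1.0))
# group 1 (norm 5) is shrunk to norm 4; group 2 (norm ~0.14) is zeroed out
```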
Yin, Kedong; Yang, Benshuo; Li, Xuemei
2018-01-24
In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in the trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyse the impact of different parameters on the results of decision making.
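To convey the aggregation idea separately from the fuzzy linguistic machinery, here is a minimal sketch of the partitioned Bonferroni mean for plain crisp numbers: the Bonferroni mean is computed within each partition class, and the per-class results are averaged. The partition, scores, and parameters p and q are illustrative; the TF2DLPBM/TF2DLWPBM operators additionally handle trapezoidal fuzzy two-dimensional linguistic values, which this sketch does not.

```python
from itertools import permutations
import numpy as np

def partitioned_bonferroni_mean(a, partition, p=1.0, q=1.0):
    """Partitioned Bonferroni mean of crisp values in [0, 1]: the BM is
    computed inside each partition class (attributes interrelated within a
    class), and the per-class results are arithmetically averaged.
    Each class must contain at least two attributes."""
    parts = []
    for block in partition:
        vals = [a[i]**p * a[j]**q for i, j in permutations(block, 2)]
        bm = (sum(vals) / (len(block) * (len(block) - 1))) ** (1.0 / (p + q))
        parts.append(bm)
    return sum(parts) / len(parts)

scores = np.array([0.6, 0.8, 0.4, 0.9])
print(partitioned_bonferroni_mean(scores, partition=[[0, 1], [2, 3]]))
```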
Tian, Xinyu; Wang, Xuefeng; Chen, Jun
2014-01-01
The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to few predictors and does not take into account the relationships among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important to improve classification performance as well as biological interpretability. We propose a multinomial logit model that is capable of addressing both the high dimensionality of predictors and the underlying network information. Group lasso is used to induce model sparsity, and a network constraint is imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
NASA Astrophysics Data System (ADS)
Wang, Wei; Yang, Jiong
With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges in mining data of high dimensionality, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We discuss how these methods deal with the challenges of high dimensionality.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting-both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
Uniform high order spectral methods for one and two dimensional Euler equations
NASA Technical Reports Server (NTRS)
Cai, Wei; Shu, Chi-Wang
1991-01-01
Uniform high order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of the Essentially Non-Oscillatory (ENO) polynomial interpolations into the spectral methods. The authors present numerical results for the inviscid Burgers' equation, and for the one dimensional Euler equations including the interactions between a shock wave and density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two dimensional shock wave and a rotating vortex is simulated.
Öllinger, Michael; Jones, Gary; Faber, Amory H; Knoblich, Günther
2013-05-01
The 8-coin insight problem requires the problem solver to move 2 coins so that each coin touches exactly 3 others. Ormerod, MacGregor, and Chronicle (2002) explained differences in task performance across different versions of the 8-coin problem using the availability of particular moves in a 2-dimensional search space. We explored 2 further explanations by developing 6 new versions of the 8-coin problem in order to investigate the influence of grouping and self-imposed constraints on solutions. The results identified 2 sources of problem difficulty: first, the necessity to overcome the constraint that a solution can be found in 2-dimensional space and, second, the necessity to decompose perceptual groupings. A detailed move analysis suggested that the selection of moves was driven by the established representation rather than the application of the appropriate heuristics. Both results support the assumptions of representational change theory (Ohlsson, 1992).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Jakeman, John; Gittelson, Claude
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems, and it can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
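One ingredient, the non-intrusive construction of a polynomial chaos response surface in a (local) low-dimensional random space, can be sketched in a few lines. The example below fits a one-dimensional Hermite expansion by least squares; it illustrates only that ingredient, not the domain-decomposition coupling itself, and the toy model is invented for the example.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(model, order=6, n_samples=2000, seed=0):
    """Non-intrusive polynomial chaos: least-squares fit of probabilists'
    Hermite polynomials He_0..He_order in a standard normal input xi."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n_samples)
    V = hermevander(xi, order)          # design matrix of He_k(xi) values
    coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)
    return coef

model = lambda xi: np.exp(0.5 * xi)    # toy stochastic "solution"
coef = fit_pce(model)
# By orthogonality, the mean of the QoI is the He_0 coefficient:
# E[exp(0.5*xi)] = exp(0.125).
print(coef[0], np.exp(0.125))
```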
NASA Astrophysics Data System (ADS)
Roverso, Davide
2003-08-01
Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e., a high-dimensional input space), the many-class problem (i.e., a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.
Restoration of dimensional reduction in the random-field Ising model at five dimensions
NASA Astrophysics Data System (ADS)
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
The relationship between three-dimensional imaging and group decision making: an exploratory study.
Litynski, D M; Grabowski, M; Wallace, W A
1997-07-01
This paper describes an empirical investigation of the effect of three-dimensional (3-D) imaging on group performance in a tactical planning task. The objective of the study is to examine the role that stereoscopic imaging can play in supporting face-to-face group problem solving and decision making; in particular, the alternative generation and evaluation processes in teams. It was hypothesized that with the stereoscopic display, group members would better visualize the information concerning the task environment, producing open communication and information exchanges. The experimental setting was a tactical command and control task, and the quality of the decisions and the nature of the group decision process were investigated under three treatments: 1) noncomputerized, i.e., topographic maps with depth cues; 2) two-dimensional (2-D) imaging; and 3) stereoscopic imaging. The results on group performance were mixed. However, the groups with stereoscopic displays generated more alternatives and spent less time on evaluation. In addition, the stereoscopic decision aid did not interfere with the group problem-solving and decision-making processes. The paper concludes with a discussion of potential benefits and the need to resolve demonstrated weaknesses of the technology.
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
Learning Relative Motion Concepts in Immersive and Non-immersive Virtual Environments
NASA Astrophysics Data System (ADS)
Kozhevnikov, Michael; Gurlitt, Johannes; Kozhevnikov, Maria
2013-12-01
The focus of the current study is to understand which unique features of an immersive virtual reality environment have the potential to improve learning relative motion concepts. Thirty-seven undergraduate students learned relative motion concepts using computer simulation either in immersive virtual environment (IVE) or non-immersive desktop virtual environment (DVE) conditions. Our results show that after the simulation activities, both IVE and DVE groups exhibited a significant shift toward a scientific understanding in their conceptual models and epistemological beliefs about the nature of relative motion, and also a significant improvement on relative motion problem-solving tests. In addition, we analyzed students' performance on one-dimensional and two-dimensional questions in the relative motion problem-solving test separately and found that after training in the simulation, the IVE group performed significantly better than the DVE group on solving two-dimensional relative motion problems. We suggest that egocentric encoding of the scene in IVE (where the learner constitutes a part of a scene they are immersed in), as compared to allocentric encoding on a computer screen in DVE (where the learner is looking at the scene from "outside"), is more beneficial than DVE for studying more complex (two-dimensional) relative motion problems. Overall, our findings suggest that such aspects of virtual realities as immersivity, first-hand experience, and the possibility of changing different frames of reference can facilitate understanding abstract scientific phenomena and help in displacing intuitive misconceptions with more accurate mental models.
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two-dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two-dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, the flux contribution at each point being calculated in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues considered in this two-dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
Two-and three-dimensional unsteady lift problems in high-speed flight
NASA Technical Reports Server (NTRS)
Lomax, Harvard; Heaslet, Max A; Fuller, Franklyn B; Sluder, Loma
1952-01-01
The problem of transient lift on two- and three-dimensional wings flying at high speeds is discussed as a boundary-value problem for the classical wave equation. Kirchhoff's formula is applied so that the analysis is reduced, just as in the steady state, to an investigation of sources and doublets. The applications include the evaluation of indicial lift and pitching-moment curves for two-dimensional sinking and pitching wings flying at Mach numbers equal to 0, 0.8, 1.0, 1.2, and 2.0. Results for the sinking case are also given for a Mach number of 0.5. In addition, the indicial functions for supersonic-edged triangular wings in both forward and reverse flow are presented and compared with the two-dimensional values.
Narayan, Angela J; Allen, Timothy A; Cullen, Kathryn R; Klimes-Dougan, Bonnie
2013-01-01
Objectives: This comprehensive review examined the prevalence and progression of disturbances in reality testing (DRT), defined as psychotic symptoms, cognitive disruptions, and thought problems, in offspring of parents with bipolar disorder (O-BD). Our approach was grounded in a developmental psychopathology perspective and considered a broader phenotype of risk within the bipolar-schizophrenia spectrum as measured by categorical and dimensional assessments of DRT in high-risk youth. Methods: Relevant studies were identified from numerous sources (e.g., PubMed, reference sections, and colleagues). Inclusion criteria were: (i) family risk studies published between 1975 and 2012 in which O-BD were contrasted with a comparison group (e.g., offspring of parents who had other psychiatric disorders or were healthy) on DRT outcomes and (ii) results reported for categorical or dimensional assessments of DRT (e.g., schizophrenia, psychotic symptoms, cluster A personality traits, or thought problems), yielding a total of 23 studies. Results: Three key findings emerged: (i) categorical approaches to DRT in O-BD produced low incidence base rates and almost no evidence of significant differences in DRT between O-BD and comparison groups, whereas (ii) many studies using dimensional assessments of DRT yielded significant group differences in DRT. Furthermore, (iii) preliminary evidence from dimensional measures suggested that the developmental progression of DRT in O-BD might represent a prodrome of severe psychological impairment. Conclusions: Preliminary but promising evidence suggests that DRT is a probable marker of risk for future impairment in O-BD. Methodological strengths and weaknesses, the psychometric properties of primary DRT constructs, and future directions for developmental and longitudinal research with O-BD are discussed. PMID:24034419
Computational unsteady aerodynamics for lifting surfaces
NASA Technical Reports Server (NTRS)
Edwards, John W.
1988-01-01
Two-dimensional problems are solved using numerical techniques. The Navier-Stokes equations are studied both in the vorticity-stream function formulation, which appears to be the optimal choice for two-dimensional problems using a storage approach, and in the velocity-pressure formulation, which minimizes the number of unknowns in three-dimensional problems. Analysis shows that compact centered conservative second-order schemes for the vorticity equation are the most robust for high-Reynolds-number flows. Serious difficulties remain in the choice of turbulence models, to keep reasonable CPU efficiency.
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
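The least-squares building block can be sketched with standard sparse tools. The example below solves a damped sparse least-squares system with SciPy's LSQR on synthetic data, illustrating the kind of subproblem obtained after decomposition; it is not the authors' graph-decomposition code, and the matrix sizes and damping value are invented for the example.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# A sparse tomography-style system G m = d: each row couples a ray
# to the few model cells it traverses.
rng = np.random.default_rng(0)
G = sprandom(500, 200, density=0.02, random_state=0, format='csr')
m_true = rng.normal(size=200)
d = G @ m_true + 0.01 * rng.normal(size=500)

# Damped least squares: minimize ||G m - d||^2 + damp^2 ||m||^2.
result = lsqr(G, d, damp=0.1)
m_est, istop, itn = result[0], result[1], result[2]
print("stopped with code", istop, "after", itn, "iterations")
```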
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates of the answer, and extrapolation techniques, based upon the previous behavior of the iterants, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
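In modern terms, the idea of accelerating eigenvalue iterants by extrapolation can be sketched as power iteration with Aitken delta-squared extrapolation applied to the eigenvalue estimates. This toy Python version illustrates the principle only; it is not the report's Gauss-Seidel scheme with its theoretically bounded extrapolation.

```python
import numpy as np

def power_iteration_aitken(A, tol=1e-10, max_iter=500):
    """Power iteration for the dominant characteristic value of A, with
    Aitken delta-squared extrapolation applied to the eigenvalue iterants."""
    x = np.ones(A.shape[0])
    lams = []
    for _ in range(max_iter):
        y = A @ x
        lam = x @ y / (x @ x)            # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        lams.append(lam)
        if len(lams) >= 3:
            l0, l1, l2 = lams[-3:]
            denom = l2 - 2*l1 + l0
            if abs(denom) > 1e-30:
                accel = l2 - (l2 - l1)**2 / denom  # Aitken extrapolation
                if abs(accel - lam) < tol:
                    return accel, x
    return lams[-1], x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, vec = power_iteration_aitken(A)
print(lam)  # dominant eigenvalue: 5
```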
High-dimensional vector semantics
NASA Astrophysics Data System (ADS)
Andrecut, M.
In this paper we explore the “vector semantics” problem from the perspective of the “almost orthogonal” property of high-dimensional random vectors. We show that this intriguing property can be used to “memorize” random vectors by simply adding them together, and we provide an efficient probabilistic solution to the set membership problem. We also discuss several applications to word context vector embeddings, document sentence similarity, and spam filtering.
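A minimal sketch of the memorization-by-addition idea: with random unit vectors in high dimension, the dot product of the bundled sum with a stored member concentrates near 1, while for a non-member it concentrates near 0, giving a probabilistic set-membership test. The dimension and the stored items below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # high dimension: independent random vectors are almost orthogonal

def random_vector():
    """Random unit-norm bipolar vector; pairwise dot products are O(1/sqrt(d))."""
    return rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)

# "Memorize" a set by simply adding its members.
words = {w: random_vector() for w in ["alpha", "beta", "gamma", "delta", "probe"]}
memory = words["alpha"] + words["beta"] + words["gamma"]

# Membership test: the dot product with a member is ~1, with a non-member ~0.
for w in ["alpha", "delta", "probe"]:
    print(w, round(float(memory @ words[w]), 2))
```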
Extension of the BGL Broad Group Cross Section Library
NASA Astrophysics Data System (ADS)
Kirilova, Desislava; Belousov, Sergey; Ilieva, Krassimira
2009-08-01
The broad-group cross-section libraries BUGLE and BGL are applied in reactor shielding calculations using the DOORS package, which is based on the discrete ordinates method and a multigroup approximation of the neutron cross-sections. The BUGLE and BGL libraries are problem-oriented for PWR and VVER reactor types, respectively. They were generated by collapsing the problem-independent fine-group library VITAMIN-B6, applying one-dimensional PWR and VVER radial models of the reactor middle plane using the SCALE software package. The surveillance assemblies (SA) of the VVER-1000/320 are located on the baffle above the reactor core upper edge, in a region where the geometry and materials differ from those of the middle plane and the neutron field gradient is very high, which results in a different neutron spectrum. For this reason, applying the aforementioned libraries to neutron fluence calculations in the SA region could introduce additional inaccuracy. This was the main reason to study the necessity of extending the BGL library with cross-sections appropriate for the SA region. A comparative analysis of the neutron spectra of the SA region calculated with the VITAMIN-B6 and BGL libraries using the two-dimensional code DORT has been performed in order to evaluate the applicability of BGL for SA calculations.
CELFE/NASTRAN Code for the Analysis of Structures Subjected to High Velocity Impact
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1978-01-01
The CELFE (Coupled Eulerian Lagrangian Finite Element)/NASTRAN code is a three-dimensional finite element code with the capability to analyze structures subjected to high velocity impact. The local response is predicted by CELFE and, for large problems, the far-field impact response is predicted by NASTRAN. The coupling of the CELFE code with NASTRAN (the CELFE/NASTRAN code) and the application of the code to selected three-dimensional high velocity impact problems are described.
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula, which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, under suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with discrete fast sine/cosine transforms can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one- and two-dimensional cases, and the schemes lend themselves equally well to the higher-dimensional case. The numerical simulation is implemented, and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
NASA Astrophysics Data System (ADS)
Pervishko, Anastasiia A.; Yudin, Dmitry; Shelykh, Ivan A.
2018-02-01
Lowering of the thickness of a thin-film three-dimensional topological insulator down to a few nanometers results in the gap opening in the spectrum of topologically protected two-dimensional surface states. This phenomenon, which is referred to as the anomalous finite-size effect, originates from hybridization between the states propagating along the opposite boundaries. In this work, we consider a bismuth-based topological insulator and show how the coupling to an intense high-frequency linearly polarized pumping can further be used to manipulate the value of a gap. We address this effect within recently proposed Brillouin-Wigner perturbation theory that allows us to map a time-dependent problem into a stationary one. Our analysis reveals that both the gap and the components of the group velocity of the surface states can be tuned in a controllable fashion by adjusting the intensity of the driving field within an experimentally accessible range and demonstrate the effect of light-induced band inversion in the spectrum of the surface states for high enough values of the pump.
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
2015-01-01
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.
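The free energy being approximated has a closed form for a binary restricted Boltzmann machine, which the following minimal sketch evaluates directly; the spiking-network approximation itself is not reproduced, and all sizes and parameters are illustrative.

```python
import numpy as np

def rbm_free_energy(v, b, c, W):
    """Free energy of a binary restricted Boltzmann machine:
    F(v) = -b.v - sum_j log(1 + exp(c_j + v.W[:, j])),
    the quantity approximated by the spiking network to evaluate
    state-action values under partial observability."""
    return -v @ b - np.sum(np.logaddexp(0.0, c + v @ W))

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 4
b = rng.normal(size=n_visible)           # visible biases
c = rng.normal(size=n_hidden)            # hidden biases
W = 0.1 * rng.normal(size=(n_visible, n_hidden))
v = rng.integers(0, 2, size=n_visible).astype(float)
print(rbm_free_energy(v, b, c, W))
```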
Quantum states and optical responses of low-dimensional electron hole systems
NASA Astrophysics Data System (ADS)
Ogawa, Tetsuo
2004-09-01
Quantum states and their optical responses of low-dimensional electron-hole systems in photoexcited semiconductors and/or metals are reviewed from a theoretical viewpoint, stressing the electron-hole Coulomb interaction, the excitonic effects, the Fermi-surface effects and the dimensionality. Recent progress of theoretical studies is stressed and important problems to be solved are introduced. We cover not only single-exciton problems but also few-exciton and many-exciton problems, including electron-hole plasma situations. Dimensionality of the Wannier exciton is clarified in terms of its linear and nonlinear responses. We also discuss a biexciton system, exciton bosonization technique, high-density degenerate electron-hole systems, gas-liquid phase separation in an excited state and the Fermi-edge singularity due to a Mahan exciton in a low-dimensional metal.
An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods exist for training neural-network-based DRL approaches, but most are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high-dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) is relatively high, each sensor has a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data; for example, DRL methods have been applied to image processing for autonomous driving, where a 256x256 RGB image has 196,608 input values (256 x 256 x 3 = 196,608) and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to solve, for the first time, the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
Stabilizing l1-norm prediction models by supervised feature grouping.
Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha
2016-02-01
Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since a lot of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. But, in the presence of correlated features, these methods select features that change considerably with small changes in data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
Modified Cheeger and Ratio Cut Methods Using the Ginzburg-Landau Functional for Classification of High-Dimensional Data
Merkurjev, Ekaterina; Bertozzi, Andrea; ...
2016-02-01
Recent advances in clustering have included continuous relaxations of the Cheeger cut ... the fully nonlinear Cheeger cut problem, as well as the ratio cut optimization task. Both problems are connected to total variation minimization, and the ...
Data Shared Lasso: A Novel Tool to Discover Uplift.
Gross, Samuel M; Tibshirani, Robert
2016-09-01
A model is presented for the supervised learning problem where the observations come from a fixed number of pre-specified groups, and the regression coefficients may vary sparsely between groups. The model spans the continuum between individual models for each group and one model for all groups. The resulting algorithm is designed with a high dimensional framework in mind. The approach is applied to a sentiment analysis dataset to show its efficacy and interpretability. One particularly useful application is for finding sub-populations in a randomized trial for which an intervention (treatment) is beneficial, often called the uplift problem. Some new concepts are introduced that are useful for uplift analysis. The value is demonstrated in an application to a real world credit card promotion dataset. In this example, although sending the promotion has a very small average effect, by targeting a particular subgroup with the promotion one can obtain a 15% increase in the proportion of people who purchase the new credit card.
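The mechanics of the model can be sketched via an augmented-design formulation: a shared coefficient block plus one deviation block per group, all fit with an ordinary lasso so the per-group deviations are penalized toward zero. The sketch below is a simplified illustration (it omits, for instance, any special scaling of the group blocks); the data and penalty level are invented for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

def data_shared_design(X, group):
    """Augmented design for a data-shared-lasso-style fit: a shared block
    plus one block per group that is nonzero only on that group's rows,
    so each group's coefficients are beta_shared + delta_g."""
    labels = np.unique(group)
    blocks = [X] + [X * (group == g)[:, None] for g in labels]
    return np.hstack(blocks), labels

rng = np.random.default_rng(0)
n, p = 400, 5
X = rng.normal(size=(n, p))
group = rng.integers(0, 2, size=n)              # two pre-specified groups
beta_shared = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
uplift = np.where(group == 1, X[:, 2], 0.0)     # group 1 also responds to feature 2
y = X @ beta_shared + uplift + 0.1 * rng.normal(size=n)

X_aug, labels = data_shared_design(X, group)
fit = Lasso(alpha=0.01).fit(X_aug, y)
# Rows: shared coefficients, then the deviation block for each group.
print(np.round(fit.coef_.reshape(len(labels) + 1, p), 2))
```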
NASA Astrophysics Data System (ADS)
Ojeda-Guillén, D.; Mota, R. D.; Granados, V. D.
2015-03-01
We show that the (2+1)-dimensional Dirac-Moshinsky oscillator coupled to an external magnetic field can be treated algebraically with the SU(1,1) group theory and its group basis. We use the su(1,1) irreducible representation theory to find the energy spectrum and the eigenfunctions. Also, with the su(1,1) group basis we construct the relativistic coherent states in a closed form for this problem. Supported by SNI-México, COFAA-IPN, EDI-IPN, EDD-IPN, SIP-IPN project number 20140598
Classification of symmetry-protected phases for interacting fermions in two dimensions
NASA Astrophysics Data System (ADS)
Cheng, Meng; Bi, Zhen; You, Yi-Zhuang; Gu, Zheng-Cheng
2018-05-01
Recently, it has been established that two-dimensional bosonic symmetry-protected topological (SPT) phases with on-site unitary symmetry G can be completely classified by the group cohomology H^3(G, U(1)). Later, group supercohomology was proposed as a partial classification for SPT phases of interacting fermions. In this work, we revisit this problem based on the algebraic theory of symmetry and defects in two-dimensional topological phases. We reproduce the partial classifications given by group supercohomology, and we also show that with an additional H^1(G, Z_2) structure, a complete classification of SPT phases for two-dimensional interacting fermion systems with a total symmetry group G × Z_2^f is obtained. We also discuss the classification of interacting fermionic SPT phases protected by time-reversal symmetry.
Nonlinear Conservation Laws and Finite Volume Methods
NASA Astrophysics Data System (ADS)
LeVeque, Randall J.
Contents: Introduction; Software; Notation; Classification of Differential Equations; Derivation of Conservation Laws; The Euler Equations of Gas Dynamics; Dissipative Fluxes; Source Terms; Radiative Transfer and Isothermal Equations; Multi-dimensional Conservation Laws; The Shock Tube Problem; Mathematical Theory of Hyperbolic Systems; Scalar Equations; Linear Hyperbolic Systems; Nonlinear Systems; The Riemann Problem for the Euler Equations; Numerical Methods in One Dimension; Finite Difference Theory; Finite Volume Methods; Importance of Conservation Form - Incorrect Shock Speeds; Numerical Flux Functions; Godunov's Method; Approximate Riemann Solvers; High-Resolution Methods; Other Approaches; Boundary Conditions; Source Terms and Fractional Steps; Unsplit Methods; Fractional Step Methods; General Formulation of Fractional Step Methods; Stiff Source Terms; Quasi-stationary Flow and Gravity; Multi-dimensional Problems; Dimensional Splitting; Multi-dimensional Finite Volume Methods; Grids and Adaptive Refinement; Computational Difficulties; Low-Density Flows; Discrete Shocks and Viscous Profiles; Start-Up Errors; Wall Heating; Slow-Moving Shocks; Grid Orientation Effects; Grid-Aligned Shocks; Magnetohydrodynamics; The MHD Equations; One-Dimensional MHD; Solving the Riemann Problem; Nonstrict Hyperbolicity; Stiffness; The Divergence of B; Riemann Problems in Multi-dimensional MHD; Staggered Grids; The 8-Wave Riemann Solver; Relativistic Hydrodynamics; Conservation Laws in Spacetime; The Continuity Equation; The 4-Momentum of a Particle; The Stress-Energy Tensor; Finite Volume Methods; Multi-dimensional Relativistic Flow; Gravitation and General Relativity; References
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope with this problem effectively, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabelled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Hierarchical Discriminant Analysis.
Lu, Di; Ding, Chuntao; Xu, Jinliang; Wang, Shangguang
2018-01-18
The Internet of Things (IoT) generates large amounts of high-dimensional sensor data. The processing of high-dimensional data (e.g., data visualization and data classification) is very difficult, so excellent subspace learning algorithms are required to learn a latent subspace that preserves the intrinsic structure of the high-dimensional data and discards the least useful information for subsequent processing. In this context, many subspace learning algorithms have been presented. However, in the process of transforming the high-dimensional data into the low-dimensional space, the large difference between the sum of inter-class distances and the sum of intra-class distances for distinct data may cause a bias problem, in which the impact of the intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It first minimizes the sum of intra-class distances, and then maximizes the sum of inter-class distances. This proposed method balances the contributions of the inter-class and intra-class terms to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.
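For orientation, a minimal numpy sketch of the two scatter sums that such discriminant methods trade off; this computes standard Fisher-style scatter matrices on toy data and does not reproduce the hierarchical two-step optimization itself.

import numpy as np

def scatter_matrices(X, y):
    # Within-class (intra) and between-class (inter) scatter of labeled data X.
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # intra-class spread
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)          # inter-class spread
    return Sw, Sb

# Classical discriminant analysis trades these off jointly; HDA-style methods
# instead handle the two sums in separate steps so that Sb cannot dominate.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.repeat([0, 1], 50)
Sw, Sb = scatter_matrices(X, y)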
Definition of a parametric form of nonsingular Mueller matrices.
Devlaminck, Vincent; Terrier, Patrick
2008-11-01
The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom, and we explain why subsets of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, provide a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.
Application of the Hughes-Liu algorithm to the 2-dimensional heat equation
NASA Technical Reports Server (NTRS)
Malkus, D. S.; Reichmann, P. I.; Haftka, R. T.
1982-01-01
An implicit-explicit algorithm for the solution of transient problems in structural dynamics is described. The method involves dividing the finite elements into implicit and explicit groups while the coupling conditions between the groups are satisfied automatically. This algorithm is applied to the solution of the linear, transient, two-dimensional heat equation subject to an initial condition derived from the solution of a steady-state problem over an L-shaped region made up of a good conductor and an insulating material. Using the IIT/PRIME computer with virtual memory, a FORTRAN code was developed to make accuracy, stability, and cost comparisons among the fully explicit Euler, the Hughes-Liu, and the fully implicit Crank-Nicolson algorithms. The Hughes-Liu claim, that the explicit group governs the stability of the entire region while the implicit group retains its unconditional stability, is illustrated.
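A minimal one-dimensional illustration of the stability contrast these comparisons turn on, assuming a toy grid, unit diffusivity, and a time step deliberately above the explicit limit; the element-partitioned Hughes-Liu scheme itself is not reproduced.

import numpy as np

N, alpha = 50, 1.0
dx = 1.0 / N
dt = 2.0 * dx**2 / alpha        # above the explicit stability limit dt <= dx^2 / (2*alpha)
r = alpha * dt / dx**2

# Tridiagonal 1D Laplacian with homogeneous Dirichlet boundaries.
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))

u0 = np.sin(np.pi * np.linspace(0, 1, N))        # smooth initial temperature
explicit, cn = u0.copy(), u0.copy()
I = np.eye(N)
A = I - 0.5 * r * L                              # Crank-Nicolson left-hand matrix
B = I + 0.5 * r * L
for _ in range(100):
    explicit = explicit + r * L @ explicit       # blows up: dt violates the explicit limit
    cn = np.linalg.solve(A, B @ cn)              # stays bounded for any dt

print(np.abs(explicit).max(), np.abs(cn).max())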
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed, and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises from noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances, as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of second-order statistics in analyzing high dimensional data is recognized; by investigating the characteristics of high dimensional data, reasons why the second-order statistics must be taken into account are suggested. This in turn creates a need to represent the second-order statistics, and a method to visualize statistics using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
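A minimal sketch of the class-elimination idea described above, under illustrative assumptions (nearest-mean discriminants and a fixed keep ratio); the actual truncation criteria of the paper are not reproduced.

import numpy as np

rng = np.random.default_rng(4)
n_classes, dim = 20, 30
means = rng.normal(size=(n_classes, dim))        # toy class means, identity covariance

def classify_multistage(x, keep_ratio=0.25):
    # Stage 1: cheap screening using a few dimensions; keep only the likely classes.
    d2_cheap = ((means[:, :5] - x[:5]) ** 2).sum(axis=1)
    k = max(1, int(keep_ratio * n_classes))
    survivors = np.argsort(d2_cheap)[:k]
    # Stage 2: full-dimensional discriminant evaluated only for surviving classes.
    d2_full = ((means[survivors] - x) ** 2).sum(axis=1)
    return survivors[np.argmin(d2_full)]

x = means[7] + 0.1 * rng.normal(size=dim)
print(classify_multistage(x))                    # 7 with high probability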
BBC users manual [in LRLTRAN for CDC 7600 and STAR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ltterst, R. F.; Sutcliffe, W. G.; Warshaw, S. I.
1977-11-01
BBC is a two-dimensional, multifluid Eulerian hydro-radiation code based on KRAKEN and some subsequent ideas. It was developed in the explosion group in T-Division as a basic two-dimensional code to which various types of physics can be added. For this reason BBC is a FORTRAN (LRLTRAN) code. In order to gain the 2-to-1 to 4-to-1 speed advantage of the STACKLIB software on the 7600's and to be able to execute at high speed on the STAR, the vector extensions of LRLTRAN (STARTRAN) are used throughout the code. Either cylindrical- or slab-type problems can be run on BBC. The grid is bounded by a rectangular band of boundary zones. The interfaces between the regular and boundary zones can be selected to be either rigid or nonrigid. The setup for BBC problems is described in the KEG Manual and LEG Manual. The difference equations are described in BBC Hydrodynamics. Basic input and output for BBC are described.
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allows edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain, without any feature selection or dimensionality reduction, is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and it requires no tuning, bandwidth, or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data, which incurs heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates RF and subspace learning algorithms. Experimental results undertaken on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original space and the low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms on synthetic datasets, using the commonly used residual variance metric and a proposed percentage-of-violated-distance-orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
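For orientation, a minimal baseline showing classical metric MDS on a precomputed distance matrix with scikit-learn; the data are random, and the order-preserving extension proposed in the paper is not part of this sketch.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 10))                       # toy high-dimensional points
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distance matrix

# Metric MDS enforces a (near-)linear relation between high- and low-dimensional
# distances; the paper relaxes this to any non-decreasing relation.
embedding = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)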
Intertwined Hamiltonians in two-dimensional curved spaces
NASA Astrophysics Data System (ADS)
Aghababaei Samani, Keivan; Zarei, Mina
2005-04-01
The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for the Euclidean plane, Minkowski plane, Poincaré half plane (AdS2), de Sitter plane (dS2), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of the corresponding space, and that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper: the problem of Hamiltonians with equispaced energy levels, and the problem of Hamiltonians whose spectrum resembles that of a free particle.
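The algebraic relation behind this formalism is the standard intertwining identity: two Hamiltonians H_1, H_2 are intertwined by an operator L when

\[
L H_1 = H_2 L ,
\]

so that for any eigenfunction with H_1 \psi = E \psi one has H_2 (L\psi) = L H_1 \psi = E (L\psi); L therefore maps the spectrum of H_1 into that of H_2 wherever L\psi \neq 0.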
A Localized Ensemble Kalman Smoother
NASA Technical Reports Server (NTRS)
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are only indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and the physical knowledge. The main challenge in such problems usually lies in their high dimensionality, for which standard statistical methods prove computationally intractable. This paper develops, and addresses the theoretical convergence of, a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
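For concreteness, a minimal sketch of the stochastic ensemble Kalman analysis step that such methods build on, with a linear observation operator; all sizes are illustrative, and the localization and smoothing aspects of the paper are omitted.

import numpy as np

rng = np.random.default_rng(6)
n_state, n_obs, n_ens = 100, 10, 30
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0   # observe every 10th state entry

X = rng.normal(size=(n_state, n_ens))           # forecast ensemble (columns = members)
y = rng.normal(size=n_obs)                      # observation vector
R = 0.5 * np.eye(n_obs)                         # observation error covariance

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                      # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                      # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
# Stochastic EnKF: update each member with a perturbed observation.
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + K @ (Y - H @ X)                        # analysis ensemble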
NASA Technical Reports Server (NTRS)
Makivic, Miloje S.
1996-01-01
This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.
Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions
NASA Astrophysics Data System (ADS)
Li, Xiaofei; Lee, Hyundae; Wang, Yuliang
2017-08-01
This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, which is computationally challenging. In this paper we seek to transform the original problem into a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and we rigorously derive the asymptotic expansion of the mean first passage time with high-order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.
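In standard form (a sketch of the usual formulation, with D the diffusion coefficient, Γ_a the small absorbing window and Γ_r the reflecting remainder of the boundary), the mean first passage time u solves the mixed boundary value problem

\[
\Delta u(x) = -\frac{1}{D} \ \text{in } \Omega, \qquad u = 0 \ \text{on } \Gamma_a, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \Gamma_r ,
\]

which is the Poisson problem referred to in the abstract; the Robin condition arises when the absorbing window is replaced by a partially absorbing boundary.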
Gender approaches to evolutionary multi-objective optimization using pre-selection of criteria
NASA Astrophysics Data System (ADS)
Kowalczuk, Zdzisław; Białaszewski, Tomasz
2018-01-01
A novel idea to perform evolutionary computations (ECs) for solving high-dimensional multi-objective optimization (MOO) problems is proposed. Following the general idea of evolution, it is proposed that information about gender be used to distinguish between various groups of objectives and to identify the (aggregate) nature of optimality of individuals (solutions). This identification is drawn from the fitness of individuals and applied during parental crossover in the processes of evolutionary multi-objective optimization (EMOO). The article introduces the principles of the genetic-gender approach (GGA) and the virtual gender approach (VGA), which are not just evolutionary techniques but constitute a completely new rule (philosophy) for use in solving MOO tasks. The proposed approaches are validated against principal representatives of the state-of-the-art EMOO algorithms on benchmark problems in the light of recognized EC performance criteria. The research shows the superiority of the gender approach in terms of effectiveness, reliability, transparency, intelligibility and MOO problem simplification, resulting in the great usefulness and practicability of GGA and VGA. Moreover, an important feature of GGA and VGA is that they alleviate the 'curse' of dimensionality typical of many engineering designs.
High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.
1994-01-01
In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.
Discovering biclusters in gene expression data based on high-dimensional linear geometries
Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong
2008-01-01
Background In DNA microarray experiments, discovering groups of genes that share similar transcriptional characteristics is instrumental in functional annotation, tissue classification and motif identification. However, in many situations a subset of genes exhibits a consistent pattern only over a subset of conditions. Conventional clustering algorithms that deal with the entire row or column in an expression matrix would therefore fail to detect these useful patterns in the data. Recently, biclustering has been proposed to detect a subset of genes exhibiting a consistent pattern over a subset of conditions. However, most existing biclustering algorithms are based on searching for sub-matrices within a data matrix by optimizing certain heuristically defined merit functions. Moreover, most of these algorithms can only detect a restricted set of bicluster patterns. Results In this paper, we present a novel geometric perspective for the biclustering problem. The biclustering process is interpreted as the detection of linear geometries in a high dimensional data space. This new perspective views biclusters with different patterns as hyperplanes in a high dimensional space, and allows us to handle different types of linear patterns simultaneously by matching a specific set of linear geometries. This geometric viewpoint also inspires us to propose a generic bicluster pattern, i.e. the linear coherent model, that unifies the seemingly incompatible additive and multiplicative bicluster models. As a particular realization of our framework, we have implemented a Hough transform-based hyperplane detection algorithm. The experimental results on a human lymphoma gene expression dataset show that our algorithm can find biologically significant subsets of genes. Conclusion We have proposed a novel geometric interpretation of the biclustering problem. We have shown that many common types of bicluster are just different spatial arrangements of hyperplanes in a high dimensional data space. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis. PMID:18433477
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-01-01
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments, there are still many problems which remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868
High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.
Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung
2018-05-04
Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high dimensional input features onto a low dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware have introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
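A minimal numpy sketch of the classical SOM update that HRSOMs scale up: find the best-matching unit, then pull it and its grid neighbours toward the sample. The grid size, learning rate, and neighbourhood width are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)
grid_h, grid_w, dim = 20, 20, 3
W = rng.uniform(size=(grid_h, grid_w, dim))      # map weight vectors
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, lr=0.2, sigma=2.0):
    # Best-matching unit: grid node whose weight vector is closest to x.
    d2 = ((W - x) ** 2).sum(axis=-1)
    bmu = np.unravel_index(np.argmin(d2), d2.shape)
    # Gaussian neighbourhood on the grid shrinks the update away from the BMU.
    g2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-g2 / (2 * sigma**2))
    W[...] += lr * h[..., None] * (x - W)

for x in rng.uniform(size=(5000, dim)):          # train on random colour vectors
    som_step(x)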
High-resolution two dimensional advective transport
Smith, P.E.; Larock, B.E.
1989-01-01
The paper describes a two-dimensional high-resolution scheme for advective transport that is based on an Eulerian-Lagrangian method with a flux limiter. The scheme is applied to the problem of pure advection of a rotated Gaussian hill and is shown to preserve the monotonicity property of the governing conservation law.
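A minimal one-dimensional sketch of the Eulerian-Lagrangian idea: trace each grid point back along the flow and interpolate at the departure point. The grid, speed, and periodic domain are illustrative, and the flux limiter and two-dimensional rotating-hill test of the paper are omitted; linear interpolation keeps this toy scheme monotone, at the price of some numerical diffusion.

import numpy as np

N, c, dt = 200, 1.0, 0.01
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)                # Gaussian hill initial condition

for _ in range(300):
    # Semi-Lagrangian step: the value at x now equals the value at the
    # departure point x - c*dt one step earlier (periodic domain).
    xd = (x - c * dt) % 1.0
    u = np.interp(xd, x, u, period=1.0)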
Multi-view non-negative tensor factorization as relation learning in healthcare data.
Hang Wu; Wang, May D
2016-08-01
Discovering patterns in co-occurrence data between objects and groups of concepts is a useful task in many domains, such as healthcare data analysis, information retrieval, and recommender systems. These relational representations come from objects' behaviors in different views, posing the challenging task of integrating information from these views to uncover the shared latent structures. The problem is further complicated by the high dimensionality of the data and the large fraction of missing entries. We propose a new paradigm for learning semantic relations using tensor factorization, jointly factorizing multi-view tensors and searching for a consistent underlying semantic space across the views. We formulate the idea as an optimization problem and propose efficient optimization algorithms, with special treatment of missing data as well as high-dimensional data. Experimental results show the potential and effectiveness of our algorithms.
[Application Progress of Three-dimensional Laser Scanning Technology in Medical Surface Mapping].
Zhang, Yonghong; Hou, He; Han, Yuchuan; Wang, Ning; Zhang, Ying; Zhu, Xianfeng; Wang, Mingshi
2016-04-01
The booming three-dimensional laser scanning technology can efficiently and effectively acquire the spatial three-dimensional coordinates of a detected object's surface and reconstruct the image at high speed, with high precision and a large capacity of information. Being non-radiative and non-contact, and offering ready visualization, it has become increasingly popular in three-dimensional surface medical mapping. This paper reviews the applications and developments of three-dimensional laser scanning technology in the medical field, especially in stomatology, plastic surgery and orthopedics. Furthermore, the paper also discusses future application prospects as well as the biomedical engineering problems the technology will encounter.
Reduced-order prediction of rogue waves in two-dimensional deep-water waves
NASA Astrophysics Data System (ADS)
Farazmand, Mohammad; Sapsis, Themistoklis P.
2017-07-01
We consider the problem of large wave prediction in two-dimensional water waves. Such waves form due to the synergistic effect of dispersive mixing of smaller wave groups and the action of localized nonlinear wave interactions that lead to focusing. Instead of a direct simulation approach, we rely on the decomposition of the wave field into a discrete set of localized wave groups with optimal length scales and amplitudes. Due to the short-term character of the prediction, these wave groups do not interact and therefore their dynamics can be characterized individually. Using direct numerical simulations of the governing envelope equations, we precompute the expected maximum elevation for each of those wave groups. The combination of the wave field decomposition algorithm, which provides information about the statistics of the system, and the precomputed map for the expected wave group elevation, which encodes dynamical information, allows (i) an understanding of how the probability of occurrence of rogue waves changes as the spectrum parameters vary, (ii) the computation of a critical length scale characterizing wave groups with high probability of evolving to rogue waves, and (iii) the formulation of a robust and parsimonious reduced-order prediction scheme for large waves. We assess the validity of this scheme in several cases of ocean wave spectra.
NASA Astrophysics Data System (ADS)
Bataev, Vadim A.; Pupyshev, Vladimir I.; Godunov, Igor A.
2016-05-01
The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-membered heterocyclic aromatic aldehydes by use of the MP2/6-311G** quantum chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation have only limited applicability for the molecules studied. The reason is the strong kinematic coupling between the rotation of the CHO group and the out-of-plane CHO deformation that is realized in the molecules under consideration. A computational procedure based on a two-dimensional approximation is considered more adequate to the problem for low-lying vibrational states.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
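The functional ANOVA decomposition invoked here expands a high-dimensional function into a hierarchy of low-dimensional terms,

\[
f(x_1,\dots,x_d) = f_0 + \sum_{i=1}^{d} f_i(x_i) + \sum_{i<j} f_{ij}(x_i,x_j) + \cdots ,
\]

and truncating after the low-order terms is what allows the PCE to be built on low-dimensional pieces only.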
Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju
2017-01-01
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, but they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore the unknown regions, while TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second year renewal application, includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP form a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
A Selective Overview of Variable Selection in High Dimensional Feature Space
Fan, Jianqing
2010-01-01
High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality, and have been widely applied to simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
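The framework surveyed here can be summarized by the generic penalized likelihood problem (a standard formulation, with \ell_n the log-likelihood and p_\lambda a penalty such as the lasso's p_\lambda(t) = \lambda t):

\[
\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \Big\{ -\ell_n(\beta) + \sum_{j=1}^{p} p_{\lambda}(|\beta_j|) \Big\} ,
\]

where the choice of penalty controls both sparsity and the bias properties that non-concave penalties are designed to improve.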
Estimation for bilinear stochastic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Marcus, S. I.
1974-01-01
Three techniques for the solution of bilinear estimation problems are presented. First, finite dimensional optimal nonlinear estimators are presented for certain bilinear systems evolving on solvable and nilpotent Lie groups. Then the use of harmonic analysis for estimation problems evolving on spheres and other compact manifolds is investigated. Finally, an approximate estimation technique utilizing cumulants is discussed.
Numerical aerodynamic simulation facility. [for flows about three-dimensional configurations
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Hathaway, A. W.
1978-01-01
Critical to the advancement of computational aerodynamics capability is the ability to simulate flows about three-dimensional configurations that contain both compressible and viscous effects, including turbulence and flow separation at high Reynolds numbers. Analyses were conducted of two solution techniques for solving the Reynolds averaged Navier-Stokes equations describing the mean motion of a turbulent flow with certain terms involving the transport of turbulent momentum and energy modeled by auxiliary equations. The first solution technique is an implicit approximate factorization finite-difference scheme applied to three-dimensional flows that avoids the restrictive stability conditions when small grid spacing is used. The approximate factorization reduces the solution process to a sequence of three one-dimensional problems with easily inverted matrices. The second technique is a hybrid explicit/implicit finite-difference scheme which is also factored and applied to three-dimensional flows. Both methods are applicable to problems with highly distorted grids and a variety of boundary conditions and turbulence models.
Modal Ring Method for the Scattering of Electromagnetic Waves
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1993-01-01
The modal ring method for electromagnetic scattering from perfectly electric conducting (PEC) symmetrical bodies is presented. The scattering body is represented by a line of finite elements (triangular) on its outer surface. The infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The modal ring method effectively reduces the two dimensional scattering problem to a one-dimensional problem similar to the method of moments. The modal element method is capable of handling very high frequency scattering because it has a highly banded solution matrix.
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem, i.e., the posterior probability density, is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
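As a pointer to the mechanics, a minimal numpy sketch of the POD step mentioned above: take the SVD of a snapshot matrix and keep the dominant modes. The snapshot count, energy tolerance, and random data are illustrative assumptions, and the DEIM treatment of the nonlinearity is not included.

import numpy as np

rng = np.random.default_rng(8)
n_state, n_snap = 5000, 40
S = rng.normal(size=(n_state, n_snap))           # columns = state snapshots

U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1       # modes capturing 99% of the energy
basis = U[:, :r]                                 # POD basis

# Reduced representation of a new state x: coefficients z with x ≈ basis @ z.
x = rng.normal(size=n_state)
z = basis.T @ x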
NASA Astrophysics Data System (ADS)
Khadjiev, Djavvat; Ören, İdris; Pekşen, Ömer
Let E2 be the 2-dimensional Euclidean space, LSim(2) be the group of all linear similarities of E2 and LSim+(2) be the group of all orientation-preserving linear similarities of E2. The present paper is devoted to solutions of problems of global G-equivalence of paths and curves in E2 for the groups G = LSim(2), LSim+(2). Complete systems of global G-invariants of a path and a curve in E2 are obtained. Existence and uniqueness theorems are given. Explicit forms of a path and a curve with the given global invariants are obtained.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
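A minimal simulation of the phenomenon at issue: when the dimension is comparable to the sample size, the top sample eigenvalue systematically overshoots the true spike. The sizes and spike strength are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(9)
p, n, spike = 400, 200, 5.0
cov_diag = np.ones(p); cov_diag[0] = spike       # one spiked population eigenvalue

top = []
for _ in range(50):
    X = rng.normal(size=(n, p)) * np.sqrt(cov_diag)
    evals = np.linalg.eigvalsh(X.T @ X / n)      # sample covariance spectrum
    top.append(evals[-1])

# The average top sample eigenvalue exceeds the true spike (5.0); this upward
# bias is what estimators such as S-POET are designed to correct.
print(np.mean(top))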
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple ones, poses a formidable computational task. The goal of the present paper is two-fold. On the one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling, which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography, where the identification of the mechanical properties of biological materials can inform non-invasive medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
NASA Astrophysics Data System (ADS)
Mishmash, Ryan V.
Experiments on strongly correlated quasi-two-dimensional electronic materials (for example, the high-temperature cuprate superconductors and the putative quantum spin liquids kappa-(BEDT-TTF)2Cu2(CN)3 and EtMe3Sb[Pd(dmit)2]2) routinely reveal highly mysterious quantum behavior which cannot be explained in terms of weakly interacting degrees of freedom. Theoretical progress thus requires the introduction of completely new concepts and machinery beyond the traditional framework of the band theory of solids and its interacting counterpart, Landau's Fermi liquid theory. In full two dimensions, controlled and reliable analytical approaches to such problems are severely lacking, as are numerical simulations of even the simplest model Hamiltonians due to the infamous fermionic sign problem. Here, we attempt to circumvent some of these difficulties by studying analogous problems in quasi-one dimension. In this lower dimensional setting, theoretical and numerical tractability are on much stronger footing due to the methods of bosonization and the density matrix renormalization group, respectively. Using these techniques, we attack two problems: (1) the Mott transition between a Fermi liquid metal and a quantum spin liquid, as potentially directly relevant to the organic compounds kappa-(BEDT-TTF)2Cu2(CN)3 and EtMe3Sb[Pd(dmit)2]2; and (2) non-Fermi liquid metals, as strongly motivated by the strange metal phase observed in the cuprates. In both cases, we are able to realize highly exotic quantum phases as ground states of reasonable microscopic models. This lends strong credence to the respective underlying slave-particle descriptions of the low-energy physics, which are inherently strongly interacting and also unconventional in comparison to weakly interacting alternatives. Finally, working in two dimensions directly, we propose a new slave-particle theory which explains in a universal way many of the intriguing experimental results of the triangular lattice organic spin liquid candidates kappa-(BEDT-TTF)2Cu2(CN)3 and EtMe3Sb[Pd(dmit)2]2. With use of large-scale variational Monte Carlo calculations, we show that this new state has very competitive trial energy in an effective spin model thought to describe the essential features of the real materials.
AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation
NASA Astrophysics Data System (ADS)
Zhang, S. H.; Zhang, R. F.
2017-11-01
The elastic properties are fundamental and important for crystalline materials as they relate to other mechanical properties, various thermodynamic qualities, as well as some critical physical properties. However, a complete set of experimentally determined elastic properties is only available for a small subset of known materials, and an automatic scheme for the derivation of elastic properties that is adapted to high-throughput computation is much in demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single crystal materials with any symmetry, designed mainly for high-throughput first-principles computation. Other derivations of general elastic properties, such as Young's, bulk and shear moduli as well as Poisson's ratio of polycrystal materials, Pugh ratio, Cauchy pressure, elastic anisotropy and elastic stability criterion, are also implemented in this code. The implementation of the code has been critically validated by a number of evaluations and tests on a broad class of materials, including two-dimensional and three-dimensional materials, demonstrating its efficiency and capability for high-throughput screening of specific materials with targeted mechanical properties.
Program Files doi: http://dx.doi.org/10.17632/f8fwg4j9tw.1
Licensing provisions: BSD 3-Clause
Programming language: Fortran
Nature of problem: To automate the calculations of second-order elastic constants and the derivations of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation.
Solution method: The space-group number is first determined by the SPGLIB code [1] and the structure is then redefined to a unit cell in IEEE format [2]. Secondly, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated. Afterwards, the total energy for each distorted structure is calculated by first-principles codes, e.g. VASP [3]. Finally, the second-order elastic constants are determined from the quadratic coefficients of polynomial fits of the energy vs. strain relationships, and other elastic properties are derived accordingly.
References:
[1] http://atztogo.github.io/spglib/.
[2] A. Meitzler, H.F. Tiersten, A.W. Warner, D. Berlincourt, G.A. Couqin, F.S. Welsh III, IEEE standard on piezoelectricity, 1988.
[3] G. Kresse, J. Furthmüller, Phys. Rev. B 54 (1996) 11169.
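A minimal sketch of the energy-strain fitting step named in the solution method, assuming a single deformation mode with E(ε) ≈ E_0 + (V/2)Cε² and purely illustrative numbers.

import numpy as np

strains = np.linspace(-0.02, 0.02, 9)            # applied strain amplitudes
volume = 20.0                                    # equilibrium cell volume (illustrative units)
C_true = 150.0                                   # "true" elastic constant for the toy data
rng = np.random.default_rng(10)
energies = 0.5 * volume * C_true * strains**2 + 1e-4 * rng.normal(size=9)

coeffs = np.polyfit(strains, energies, 2)        # quadratic fit E(eps) = a*eps^2 + b*eps + c
C_fit = 2.0 * coeffs[0] / volume                 # elastic constant from the quadratic term
print(C_fit)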
The quantum n-body problem in dimension d ⩾ n – 1: ground state
NASA Astrophysics Data System (ADS)
Miller, Willard, Jr.; Turbiner, Alexander V.; Escobar-Ruiz, M. A.
2018-05-01
We employ generalized Euler coordinates for the n-body system in d-dimensional space, which consist of the centre-of-mass vector, relative (mutual) mass-independent distances r_ij and angles as remaining coordinates. We prove that the kinetic energy of the quantum n-body problem for d ≥ n − 1 can be written as the sum of three terms: (i) the kinetic energy of the centre-of-mass, (ii) a second-order differential operator which depends on the relative distances alone and (iii) a differential operator which annihilates any angle-independent function. The operator (ii) has a large reflection symmetry group and, in suitable variables, is an algebraic operator which can be written in terms of the generators of a hidden algebra. Thus, it can be interpreted as the Hamiltonian of a quantum Euler–Arnold top in a constant magnetic field. It is conjectured that for any n the similarity-transformed operator is the Laplace–Beltrami operator plus an (effective) potential; thus, it describes an n(n − 1)/2-dimensional quantum particle in curved space. This was verified for small n. After de-quantization, the similarity-transformed operator becomes the Hamiltonian of a classical top with a variable tensor of inertia in an external potential. This approach allows a reduction of the dn-dimensional spectral problem to an n(n − 1)/2-dimensional spectral problem if the eigenfunctions depend only on the relative distances. We prove that the ground-state function of the n-body problem depends on the relative distances alone.
The resistance of an n-dimensional tetrahedron
NASA Astrophysics Data System (ADS)
Griffiths, Martin
2013-01-01
We consider here a problem that is suitable for introducing high-school students to the notion of generalizing shapes and solids to n dimensions. In particular, we calculate the effective resistance between any two vertices of an n-dimensional tetrahedron whose edges are each 1-Ω resistors. This leads, in a natural way, to more demanding problems, and indeed ideas for more advanced work in this area are also suggested.
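For reference, one standard route to the result (a textbook symmetry argument, not quoted from the paper), assuming the n-dimensional tetrahedron is treated as the complete graph on its n + 1 vertices with unit resistors:

```latex
% Symmetry/superposition argument for the complete graph K_{n+1}
% with 1-ohm edges (a standard derivation, not taken from the paper).
Let $m = n+1$ be the number of vertices. Inject a current $I$ at vertex
$A$ and extract $I/(m-1)$ at every other vertex; by symmetry each edge
incident to $A$ carries $I/(m-1)$, so $V_A - V_C = I/(m-1)$ for every
$C \neq A$. Superpose the mirrored configuration: inject $I/(m-1)$ at
every vertex except $B$ and extract $I$ at $B$. The superposition
drives a net current $I\,m/(m-1)$ from $A$ to $B$, while
\[
  V_A - V_B = \frac{I}{m-1} + \frac{I}{m-1} = \frac{2I}{m-1}
  \qquad\Longrightarrow\qquad
  R_{\mathrm{eff}} = \frac{2I/(m-1)}{I\,m/(m-1)} = \frac{2}{m}
  = \frac{2}{n+1}\ \Omega .
\]
```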
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
Free boundary problems in shock reflection/diffraction and related transonic flow problems
Chen, Gui-Qiang; Feldman, Mikhail
2015-01-01
Shock waves are steep wavefronts that are fundamental in nature, especially in high-speed fluid flows. When a shock hits an obstacle, or a flying body meets a shock, shock reflection/diffraction phenomena occur. In this paper, we show how several long-standing shock reflection/diffraction problems can be formulated as free boundary problems, discuss some recent progress in developing mathematical ideas, approaches and techniques for solving these problems, and present some further open problems in this direction. In particular, these shock problems include von Neumann's problem for shock reflection–diffraction by two-dimensional wedges with concave corner, Lighthill's problem for shock diffraction by two-dimensional wedges with convex corner, and Prandtl-Meyer's problem for supersonic flow impinging onto solid wedges, which are also fundamental in the mathematical theory of multidimensional conservation laws. PMID:26261363
A cubic spline approximation for problems in fluid mechanics
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Graves, R. A., Jr.
1975-01-01
A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
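As a flavor of spline-based differencing (a simplified stand-in, not the paper's spline-alternating-direction-implicit scheme), the sketch below advances the one-dimensional viscous Burgers equation with spatial derivatives read off a cubic-spline fit; all parameter values are illustrative:

```python
# Illustrative sketch: march u_t + u u_x = nu u_xx explicitly in time,
# taking u_x and u_xx from a cubic spline through the current solution.
# A minimal stand-in for spline spatial approximation, not the paper's scheme.
import numpy as np
from scipy.interpolate import CubicSpline

nu = 0.1                               # physical viscosity (assumed value)
x = np.linspace(0.0, 2 * np.pi, 201)   # a nonuniform mesh would also work
u = np.sin(x)                          # initial condition
dt = 1e-4                              # small explicit time step
for _ in range(2000):
    s = CubicSpline(x, u)              # C^2 spline through the solution
    ux = s(x, 1)                       # first derivative from the spline
    uxx = s(x, 2)                      # second derivative from the spline
    u = u + dt * (-u * ux + nu * uxx)  # explicit Euler step
    u[0], u[-1] = 0.0, 0.0             # Dirichlet boundary values (assumed)
print(u.max())
```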
Decomposition and model selection for large contingency tables.
Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter
2010-04-01
Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
Bataev, Vadim A; Pupyshev, Vladimir I; Godunov, Igor A
2016-05-15
The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-membered heterocyclic aromatic aldehydes using the MP2/6-311G** quantum-chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation have only limited applicability for the molecules studied. The reason is the strong kinematic coupling between the rotation of the CHO group and the out-of-plane CHO deformation that is realized in the molecules under consideration. A computational procedure based on a two-dimensional approximation is considered for the low-lying vibrational states as a more adequate treatment of the problem. Copyright © 2016 Elsevier B.V. All rights reserved.
Engineering two-photon high-dimensional states through quantum interference
Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew
2016-01-01
Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a smaller number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685
NASA Technical Reports Server (NTRS)
Kumar, A.
1984-01-01
A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.
Distributed Computation of the knn Graph for Large High-Dimensional Point Sets
Plaku, Erion; Kavraki, Lydia E.
2009-01-01
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
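On a single machine the underlying knn-graph construction looks as follows (a minimal Python sketch; the paper's contribution is distributing exactly this work across processors with message passing, which is only indicated in the comments):

```python
# Minimal single-machine sketch of knn-graph construction. In the
# distributed setting of the paper, the point set would be partitioned
# and each processor would answer neighbor queries for its own block,
# exchanging candidates via message passing.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
points = rng.normal(size=(10000, 64))    # large high-dimensional point set
k = 10

nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
dist, idx = nn.kneighbors(points)        # k+1 because each point finds itself
# Drop the self-match in column 0; the remaining pairs are the knn edges.
edges = [(i, j, d) for i, row in enumerate(idx)
         for j, d in zip(row[1:], dist[i, 1:])]
print(len(edges))                        # == 10000 * k
```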
Using Betweenness Centrality to Identify Manifold Shortcuts
Cukierski, William J.; Foran, David J.
2010-01-01
High-dimensional data presents a challenge to tasks of pattern recognition and machine learning. Dimensionality reduction (DR) methods remove the unwanted variance and make these tasks tractable. Several nonlinear DR methods, such as the well known ISOMAP algorithm, rely on a neighborhood graph to compute geodesic distances between data points. These graphs can contain unwanted edges which connect disparate regions of one or more manifolds. This topological sensitivity is well known [1], [2], [3], yet handling high-dimensional, noisy data in the absence of a priori manifold knowledge, remains an open and difficult problem. This work introduces a divisive, edge-removal method based on graph betweenness centrality which can robustly identify manifold-shorting edges. The problem of graph construction in high dimension is discussed and the proposed algorithm is fit into the ISOMAP workflow. ROC analysis is performed and the performance is tested on synthetic and real datasets. PMID:20607142
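A hedged sketch of the edge-removal idea, using networkx for the betweenness computation (the 99th-percentile cutoff is an assumed illustration, not the paper's criterion):

```python
# Build a neighborhood graph, score edges by betweenness centrality, and
# delete the highest-scoring ones, which tend to be manifold "shortcut"
# edges; the pruned graph would then feed ISOMAP's geodesic-distance step.
import networkx as nx
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                       # toy data set
A = kneighbors_graph(X, n_neighbors=8, mode='distance')
G = nx.from_scipy_sparse_array(A)                    # weighted knn graph

bc = nx.edge_betweenness_centrality(G, weight='weight')
cutoff = np.quantile(list(bc.values()), 0.99)        # assumed threshold
G.remove_edges_from([e for e, b in bc.items() if b > cutoff])
print(G.number_of_edges())
```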
Aerodynamics of an airfoil with a jet issuing from its surface
NASA Technical Reports Server (NTRS)
Tavella, D. A.; Karamcheti, K.
1982-01-01
A simple, two dimensional, incompressible and inviscid model for the problem posed by a two dimensional wing with a jet issuing from its lower surface is considered and a parametric analysis is carried out to observe how the aerodynamic characteristics depend on the different parameters. The mathematical problem constitutes a boundary value problem where the position of part of the boundary is not known a priori. A nonlinear optimization approach was used to solve the problem, and the analysis reveals interesting characteristics that may help to better understand the physics involved in more complex situations in connection with high lift systems.
Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group
NASA Astrophysics Data System (ADS)
Ardentov, Andrei A.; Sachkov, Yuri L.
2017-12-01
We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity; they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i.e., the set of minimizers for each terminal point in the Engel group.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
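The overall pipeline can be illustrated compactly (our assumed construction with PCA and off-the-shelf Gaussian processes; the paper's actual dimension-reduction choices may differ):

```python
# Sketch: compress high-dimensional inputs and output fields with PCA,
# then fit one Gaussian-process emulator per retained output mode.
# All data below are synthetic stand-ins for simulator runs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 1000))             # e.g. input permeability fields
Y = np.tanh(X[:, :500] @ rng.normal(size=(500, 2000)) * 0.01)  # toy outputs

px = PCA(n_components=10).fit(X)             # reduce the input dimension
py = PCA(n_components=5).fit(Y)              # reduce the output dimension
Z, W = px.transform(X), py.transform(Y)

gps = [GaussianProcessRegressor().fit(Z, W[:, j]) for j in range(W.shape[1])]

def emulate(x_new):
    """Predict a full spatial field from a high-dimensional input."""
    z = px.transform(x_new)
    w = np.column_stack([gp.predict(z) for gp in gps])
    return py.inverse_transform(w)

print(emulate(X[:3]).shape)                  # (3, 2000)
```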
Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Luo, Li-Shi
2007-01-01
In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know to exactly what order moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher dimensional models are suggested.
A new Lagrangian method for three-dimensional steady supersonic flows
NASA Technical Reports Server (NTRS)
Loh, Ching-Yuen; Liou, Meng-Sing
1993-01-01
In this report, the new Lagrangian method introduced by Loh and Hui is extended for three-dimensional, steady supersonic flow computation. The derivation of the conservation form and the solution of the local Riemann solver using the Godunov and the high-resolution TVD (total variation diminishing) schemes are presented. This new approach is accurate and robust, capable of handling complicated geometry and interactions between discontinuous waves. Test problems show that the extended Lagrangian method retains all the advantages of the two-dimensional method (e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation). In this report, we also suggest a novel three-dimensional Riemann problem in which interesting and intricate flow features are present.
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
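A much-simplified sketch of the two-phase surrogate idea follows; it shares COBRA's use of RBF surrogates for the objective and constraints, but replaces the article's carefully designed search with naive random candidate sampling on a toy problem:

```python
# Conceptual stand-in for two-phase RBF-surrogate optimization: phase 1
# seeks predicted feasibility, phase 2 improves the predicted objective
# among feasible candidates. Toy black boxes; not the COBRA algorithm.
import numpy as np
from scipy.interpolate import RBFInterpolator

def objective(x):  return np.sum(x**2, axis=-1)       # toy expensive objective
def constraint(x): return 1.0 - np.sum(x, axis=-1)    # feasible iff g(x) <= 0

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(30, 5))                  # initial design
f, g = objective(X), constraint(X)

for _ in range(50):
    sf = RBFInterpolator(X, f)           # RBF surrogate of the objective
    sg = RBFInterpolator(X, g)           # RBF surrogate of the constraint
    cand = rng.uniform(-2, 2, size=(2000, 5))
    feas = cand[sg(cand) <= 0]
    pick = (feas[np.argmin(sf(feas))]    # phase 2: improve a feasible point
            if len(feas) else cand[np.argmin(sg(cand))])  # phase 1
    X = np.vstack([X, pick])
    f = np.append(f, objective(pick))
    g = np.append(g, constraint(pick))

best = X[g <= 0][np.argmin(f[g <= 0])]
print(best, objective(best))
```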
The importance of spatial ability and mental models in learning anatomy
NASA Astrophysics Data System (ADS)
Chatterjee, Allison K.
As a foundational course in medical education, gross anatomy serves to orient medical and veterinary students to the complex three-dimensional nature of the structures within the body. Understanding such spatial relationships is both fundamental and crucial for achievement in gross anatomy courses, and is essential for success as a practicing professional. Many things contribute to learning spatial relationships; this project focuses on a few key elements: (1) the types of multimedia resources, particularly computer-aided instructional (CAI) resources, that medical students used to study and learn; (2) the influence of spatial ability on medical and veterinary students' gross anatomy grades and their mental models; and (3) how medical and veterinary students think about anatomy and describe the features of their mental models to represent what they know about anatomical structures. The use of computer-aided instruction (CAI) by gross anatomy students at Indiana University School of Medicine (IUSM) was assessed through a questionnaire distributed to the regional centers of the IUSM. Students reported using internet browsing, PowerPoint presentation software, and email on a daily basis to study gross anatomy. This study reveals that first-year medical students at the IUSM make limited use of CAI to study gross anatomy. Such studies emphasize the importance of examining students' use of CAI to study gross anatomy prior to the development and integration of electronic media into the curriculum, and they may be important in future decisions regarding the development of alternative learning resources. In order to determine how students think about anatomical relationships and describe the features of their mental models, personal interviews were conducted with select students based on their ROT scores. Five typologies of the characteristics of students' mental models were identified and described: spatial thinking, kinesthetic approach, identification of anatomical structures, problem solving strategies, and study methods. Students with different levels of spatial ability visualize and think about anatomy in qualitatively different ways, which is reflected by the features of their mental models. Low spatial ability students thought about and used two-dimensional images from the textbook. They possessed basic two-dimensional models of anatomical structures; they placed emphasis on diagrams and drawings in their studies; and they re-read anatomical problems many times before answering. High spatial ability students thought fully in three dimensions and imagined rotation and movement of the structures; they made use of many types of images and text as they studied and solved problems. They possessed elaborate three-dimensional models of anatomical structures which they were able to manipulate to solve problems; and they integrated diagrams, drawings, and written text in their studies. Middle spatial ability students were a mix between both low and high spatial ability students. They imagined two-dimensional images popping out of the flat paper to become more three-dimensional, but still relied on drawings and diagrams. Additionally, high spatial ability students used a higher proportion of anatomical terminology than low or middle spatial ability students. This provides additional support to the premise that high spatial students' mental models are a complex mixture of imagistic representations and propositional representations that incorporate correct anatomical terminology.
Low spatial ability students focused on the function of structures and ways to group information, primarily for the purpose of recall. This supports the theory that low spatial students' mental models are characterized more by imagistic representations that are general in nature. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High-fidelity simulation of nuclear reactors entails large-scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high-fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high-fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large-scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced-order modeling algorithms and extends them towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced-order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for constructing lower-dimensional subspaces have been developed for single-physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large-scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analyses can be performed accurately and efficiently for large-scale, high-dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty of single-physics models is extended to large-scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, used and compared with the KL approach and a brute-force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high-dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high-dimensional solution space with a lower-dimensional subspace makes the sampling process necessary for DA feasible for high-dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources.
In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
Multitasking a three-dimensional Navier-Stokes algorithm on the Cray-2
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A three-dimensional computational aerodynamics algorithm has been multitasked for efficient parallel execution on the Cray-2. It provides a means for examining the multitasking performance of a complete CFD application code. An embedded zonal multigrid scheme is used to solve the Reynolds-averaged Navier-Stokes equations for an internal flow model problem. The explicit nature of each component of the method allows a spatial partitioning of the computational domain to achieve a well-balanced task load for MIMD computers with vector-processing capability. Experiments have been conducted with both two- and three-dimensional multitasked cases. The best speedup attained by an individual task group was 3.54 on four processors of the Cray-2, while the entire solver yielded a speedup of 2.67 on four processors for the three-dimensional case. The multiprocessing efficiency of various types of computational tasks is examined, performance on two Cray-2s with different memory access speeds is compared, and extrapolation to larger problems is discussed.
Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.
2013-01-01
Objective: To determine whether a systematic evaluation of facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons' treatment plans for lip revision surgery. Design: Prospective longitudinal study. Setting: The University of North Carolina School of Dentistry. Patients/Participants: A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions: Lip revision. Main Outcome Measures: (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results: With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and to the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%), and in the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%). Conclusions: For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676
Group-theoretical analysis of two-dimensional hexagonal materials
NASA Astrophysics Data System (ADS)
Minami, Susumu; Sugita, Itaru; Tomita, Ryosuke; Oshima, Hiroyuki; Saito, Mineo
2017-10-01
Two-dimensional hexagonal materials such as graphene and silicene have highly symmetric crystal structures and Dirac cones at the K point, which induce novel electronic properties. In this report, we calculate their electronic structures by using density functional theory and analyze their band structures on the basis of the group theory. Dirac cones frequently appear when the symmetry at the K point is high; thus, two-dimensional irreducible representations are included. We discuss the relationship between symmetry and the appearance of the Dirac cone.
Mental Health, Social Context, Refugees and Immigrants: A Cultural Interface.
ERIC Educational Resources Information Center
Mayadas, Nazneen S.; Ramanathan, Chathapuram S.; Suarez, Zulema
1999-01-01
Explores how the lack of awareness of human diversity can adversely affect the mental health care of nondominant ethnic groups. Proposes a three-dimensional cultural-interface model for assessing and treating mental health problems. (SLD)
Simulation and Analysis of Converging Shock Wave Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axisymmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem and minimally straining the general credibility of the associated analysis and conclusions.
Sánchez Pérez, J F; Conesa, M; Alhama, I; Alhama, F; Cánovas, M
2017-01-01
Classical dimensional analysis and nondimensionalization are assumed to be two similar approaches in the search for dimensionless groups. Both techniques simplify the study of many problems. The first approach does not require knowledge of the mathematical model, a deep understanding of the physical phenomenon involved being sufficient, while the second begins with the governing equations and reduces them to their dimensionless form by simple mathematical manipulations. In this work, a formal protocol is proposed for applying the nondimensionalization process to ordinary differential equations, linear or not, leading to dimensionless normalized equations from which the resulting dimensionless groups have two inherent properties: on the one hand, they can be physically interpreted as balances between counteracting quantities in the problem, and on the other hand, they are of the order of magnitude unity. The solutions provided by nondimensionalization are more precise in every case than those from dimensional analysis, as illustrated by the applications studied in this work.
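A concrete illustration of such a protocol (our example, not one of the applications in the paper) is the damped linear oscillator, where normalization produces a single order-unity group balancing damping against inertial-elastic effects:

```latex
% Hedged example: nondimensionalizing the damped oscillator
% (a standard exercise, not taken from the paper).
\[
  m\,\ddot{x} + c\,\dot{x} + k\,x = 0, \qquad
  u = \frac{x}{x_0}, \quad \tau = \frac{t}{t_c}, \quad t_c = \sqrt{m/k},
\]
% Substituting and dividing by k x_0 gives the dimensionless normalized form
\[
  \frac{d^2u}{d\tau^2} + \frac{c}{\sqrt{mk}}\,\frac{du}{d\tau} + u = 0,
\]
% so the single dimensionless group \(\Pi = c/\sqrt{mk}\) is read off as a
% balance of the damping force against the inertial-elastic forces, and
% the choice of t_c leaves the remaining coefficients of order unity.
```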
Lee, Seungyeoun; Kim, Yongkang; Kwon, Min-Seok; Park, Taesung
2015-01-01
Genome-wide association studies (GWAS) have extensively analyzed single-SNP effects on a wide variety of common and complex diseases and found many genetic variants associated with diseases. However, a large portion of the genetic variants is still left unexplained. This missing heritability problem might be due to the analytical strategy that limits analyses to single SNPs. One possible approach to the missing heritability problem is to identify multi-SNP effects or gene-gene interactions. The multifactor dimensionality reduction (MDR) method has been widely used to detect gene-gene interactions based on constructive induction, classifying high-dimensional genotype combinations into a one-dimensional variable with two attributes, high risk and low risk, for the case-control study. Many modifications of MDR have been proposed, and the method has also been extended to the survival phenotype. In this study, we propose several extensions of MDR for the survival phenotype and compare the proposed extensions with the earlier MDR extension through comprehensive simulation studies. PMID:26339630
High-frequency modes in a two-dimensional rectangular room with windows
NASA Astrophysics Data System (ADS)
Shabalina, E. D.; Shirgina, N. V.; Shanin, A. V.
2010-07-01
We examine a two-dimensional model problem of architectural acoustics on sound propagation in a rectangular room with windows. It is supposed that the walls are ideally flat and hard, while the windows absorb all energy that falls upon them. We search for the modes of such a room that have minimal attenuation indices; these modes have the pronounced structure of billiard trajectories. The main attenuation mechanism for such modes is diffraction at the edges of the windows. We construct estimates for the attenuation indices of these modes based on the solution to the Weinstein problem. We formulate diffraction problems, similar in statement to the Weinstein problem, that describe the attenuation of billiard modes in complex situations.
Phase-space finite elements in a least-squares solution of the transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drumm, C.; Fan, W.; Pautz, S.
2013-07-01
The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied both to neutral particle transport and to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite element approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-element machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite element capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradient algorithm. The phase-space finite element capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)
The Goertler vortex instability mechanism in three-dimensional boundary layers
NASA Technical Reports Server (NTRS)
Hall, P.
1984-01-01
The two-dimensional boundary layer on a concave wall is centrifugally unstable with respect to vortices aligned with the basic flow for sufficiently high values of the Goertler number. However, in most situations of practical interest the basic flow is three-dimensional and previous theoretical investigations do not apply. The linear stability of the flow over an infinitely long swept wall of variable curvature is considered. If there is no pressure gradient in the boundary layer, the instability problem can always be related to an equivalent two-dimensional calculation. However, in general this is not the case, and even for small values of the crossflow velocity field dramatic differences between the two- and three-dimensional problems emerge. When the size of the crossflow is further increased, the vortices in the neutral location have their axes locally perpendicular to the vortex lines of the basic flow.
Berne, Rosalyn W; Raviv, Daniel
2004-04-01
This paper introduces the Eight Dimensional Methodology for Innovative Thinking (the Eight Dimensional Methodology), for innovative problem solving, as a unified approach to case analysis that builds on comprehensive problem solving knowledge from industry, business, marketing, math, science, engineering, technology, arts, and daily life. It is designed to stimulate innovation by quickly generating unique "out of the box" unexpected and high quality solutions. It gives new insights and thinking strategies to solve everyday problems faced in the workplace, by helping decision makers to see otherwise obscure alternatives and solutions. Daniel Raviv, the engineer who developed the Eight Dimensional Methodology, and paper co-author, technology ethicist Rosalyn Berne, suggest that this tool can be especially useful in identifying solutions and alternatives for particular problems of engineering, and for the ethical challenges which arise with them. First, the Eight Dimensional Methodology helps to elucidate how what may appear to be a basic engineering problem also has ethical dimensions. In addition, it offers to the engineer a methodology for penetrating and seeing new dimensions of those problems. To demonstrate the effectiveness of the Eight Dimensional Methodology as an analytical tool for thinking about ethical challenges to engineering, the paper presents the case of the construction of the Large Binocular Telescope (LBT) on Mount Graham in Arizona. Analysis of the case offers to decision makers the use of the Eight Dimensional Methodology in considering alternative solutions for how they can proceed in their goals of exploring space. It then follows that same process through the second stage of exploring the ethics of each of those different solutions. The LBT project pools resources from an international partnership of universities and research institutes for the construction and maintenance of a highly sophisticated, powerful new telescope. It will soon mark the erection of the world's largest and most powerful optical telescope, designed to see fine detail otherwise visible only from space. It also represents a controversial engineering project that is being undertaken on land considered to be sacred by the local, native Apache people. As presented, the case features the University of Virginia, and its challenges in consideration of whether and how to join the LBT project consortium.
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of such datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy established by Charles Darwin; that is, 'It is not the strongest of the species that survives, but the most adaptable'. This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeat searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy simplifies the feature selection problem into a similar problem of function optimisation. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and outperform previous WSAs by up to 99.81% in computational time.
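The wrapper loop is easy to sketch generically. The following Python stand-in keeps the three ingredients named above (a binary mask, a visited-mask memory, and classifier-scored fitness) but replaces the wolf-pack moves and the extreme learning machine with random bit flips and a k-NN classifier, so it is illustrative only:

```python
# Generic binary wrapper feature selection with a visited-set "memory";
# a conceptual stand-in for the paper's WSA wrapper, not its algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=100, n_informative=8,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Score a feature subset by cross-validated classifier accuracy."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

mask = rng.random(100) < 0.2          # initial random feature subset
best, score = mask.copy(), fitness(mask)
memory = {mask.tobytes()}             # avoid re-searching visited masks
for _ in range(100):
    cand = best.copy()
    cand[rng.integers(0, 100, size=3)] ^= True   # flip a few bits
    if cand.tobytes() in memory:
        continue
    memory.add(cand.tobytes())
    s = fitness(cand)
    if s > score:
        best, score = cand, s
print(score, best.sum())              # accuracy and number of kept features
```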
Pairing phase diagram of three holes in the generalized Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navarro, O.; Espinosa, J.E.
Investigations of high-Tc superconductors suggest that the electronic correlation may play a significant role in the formation of pairs. Although the main interest is in the physics of two-dimensional highly correlated electron systems, one-dimensional models related to high-temperature superconductivity are very popular due to the conjecture that properties of the 1D and 2D variants of certain models have common aspects. Within the models for correlated electron systems that attempt to capture the essential physics of high-temperature superconductors and parent compounds, the Hubbard model is one of the simplest. Here, the pairing problem of a three-electron system has been studied by using a real-space method and the generalized Hubbard Hamiltonian. This method includes the correlated hopping interactions as an extension of the previously proposed mapping method, and is based on mapping the correlated many-body problem onto an equivalent site- and bond-impurity tight-binding one in a higher dimensional space, where the problem was solved in a non-perturbative way. In a linear chain, the authors analyzed the pairing phase diagram of three correlated holes for different values of the Hamiltonian parameters. For certain values of the hopping parameters they obtain an analytical solution for all kinds of interactions.
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
Ermakov's Superintegrable Toy and Nonlocal Symmetries
NASA Astrophysics Data System (ADS)
Leach, P. G. L.; Karasu Kalkanli, A.; Nucci, M. C.; Andriopoulos, K.
2005-11-01
We investigate the symmetry properties of a pair of Ermakov equations. The system is superintegrable and yet possesses only three Lie point symmetries with the algebra sl(2, R). The number of point symmetries is insufficient and the algebra unsuitable for the complete specification of the system. We use the method of reduction of order to reduce the nonlinear fourth-order system to a third-order system comprising a linear second-order equation and a conservation law. We obtain the representation of the complete symmetry group from this system. Four of the required symmetries are nonlocal and the algebra is the direct sum of a one-dimensional Abelian algebra with the semidirect sum of a two-dimensional solvable algebra with a two-dimensional Abelian algebra. The problem illustrates the difficulties which can arise in very elementary systems. Our treatment demonstrates the existence of possible routes to overcome these problems in a systematic fashion.
Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko
2013-01-01
The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that decomposes the problem into several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
Sun, Hokeun; Wang, Shuang
2013-05-30
The matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies especially epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed the penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for popularly applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression ignoring matching is known to bring serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of using conditional logistic model over unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC) where we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients by using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
Reconstructing high-dimensional two-photon entangled states via compressive sensing
Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan
2014-01-01
Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
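The singular-value-thresholding principle that the reconstruction rests on can be shown in a few lines (a toy real-valued low-rank completion; the physical constraints an actual density matrix would add, such as Hermiticity, positivity and unit trace, are omitted):

```python
# Minimal iterative singular-value-thresholding sketch for low-rank
# recovery from ~3% of entries, echoing the paper's measurement budget.
# This is the generic principle, not the authors' modified algorithm.
import numpy as np

rng = np.random.default_rng(4)
d = 17
true = rng.normal(size=(d*d, 2)) @ rng.normal(size=(2, d*d))  # rank-2 matrix
mask = rng.random(true.shape) < 0.03        # ~3% of entries "measured"
Y = np.where(mask, true, 0.0)

def shrink(M, tau):
    """Soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

X = np.zeros_like(Y)
for _ in range(300):
    # Gradient step on the observed entries, then nuclear-norm shrinkage.
    X = shrink(X + np.where(mask, Y - X, 0.0), tau=1.0)

print(np.linalg.norm(X - true) / np.linalg.norm(true))  # relative error
```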
A reduced-order model from high-dimensional frictional hysteresis
Biswas, Saurabh; Chatterjee, Anindya
2014-01-01
Hysteresis in material behaviour includes both signum nonlinearities as well as high dimensionality. Available models for component-level hysteretic behaviour are empirical. Here, we derive a low-order model for rate-independent hysteresis from a high-dimensional massless frictional system. The original system, being given in terms of signs of velocities, is first solved incrementally using a linear complementarity problem formulation. From this numerical solution, to develop a reduced-order model, basis vectors are chosen using the singular value decomposition. The slip direction in generalized coordinates is identified as the minimizer of a dissipation-related function. That function includes terms for frictional dissipation through signum nonlinearities at many friction sites. Luckily, it allows a convenient analytical approximation. Upon solution of the approximated minimization problem, the slip direction is found. A final evolution equation for a few states is then obtained that gives a good match with the full solution. The model obtained here may lead to new insights into hysteresis as well as better empirical modelling thereof. PMID:24910522
The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates
NASA Astrophysics Data System (ADS)
D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier
2018-01-01
This paper extends to composite plates including piezoelectric plies the variable kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF). Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate) and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with the domain approximation expressed by an orthogonal basis spanned by Legendre polynomials. The proposed computational tool is capable to represent electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment is proposed for showcasing to which extent the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the result.
The development of a multi-dimensional gambling accessibility scale.
Hing, Nerilee; Haw, John
2009-12-01
The aim of the current study was to develop a scale of gambling accessibility that would have theoretical significance to exposure theory and also serve to highlight the accessibility risk factors for problem gambling. Scale items were generated from the Productivity Commission's (Australia's Gambling Industries: Report No. 10. AusInfo, Canberra, 1999) recommendations and tested on a group with high exposure to the gambling environment. In total, 533 gaming venue employees (aged 18-70 years; 67% women) completed a questionnaire that included six 13-item scales measuring accessibility across a range of gambling forms (gaming machines, keno, casino table games, lotteries, horse and dog racing, sports betting). Also included in the questionnaire was the Problem Gambling Severity Index (PGSI) along with measures of gambling frequency and expenditure. Principal components analysis indicated that a common three factor structure existed across all forms of gambling and these were labelled social accessibility, physical accessibility and cognitive accessibility. However, convergent validity was not demonstrated with inconsistent correlations between each subscale and measures of gambling behaviour. These results are discussed in light of exposure theory and the further development of a multi-dimensional measure of gambling accessibility.
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang; Wang, Ya-Guang
2008-03-01
Compressible vortex sheets are fundamental waves, along with shocks and rarefaction waves, in entropy solutions to multidimensional hyperbolic systems of conservation laws. Understanding the behavior of compressible vortex sheets is an important step towards our full understanding of fluid motions and the behavior of entropy solutions. For the Euler equations in two-dimensional gas dynamics, the classical linearized stability analysis on compressible vortex sheets predicts stability when the Mach number M > √2 and instability when M < √2; and Artola and Majda's analysis reveals that nonlinear instability may occur if planar vortex sheets are perturbed by highly oscillatory waves even when M > √2. For the Euler equations in three dimensions, every compressible vortex sheet is violently unstable, and this instability is the analogue of the Kelvin-Helmholtz instability for incompressible fluids. The purpose of this paper is to understand whether compressible vortex sheets in three dimensions, which are unstable in the regime of pure gas dynamics, become stable under the magnetic effect in three-dimensional magnetohydrodynamics (MHD). One of the main features is that the stability problem is equivalent to a free-boundary problem whose free boundary is a characteristic surface, which is more delicate than noncharacteristic free-boundary problems. Another feature is that the linearized problem for current-vortex sheets in MHD does not meet the uniform Kreiss-Lopatinskii condition. These features cause additional analytical difficulties and, especially, prevent a direct use of the standard Picard iteration on the nonlinear problem. In this paper, we develop a nonlinear approach to deal with these difficulties in three-dimensional MHD. We first carefully formulate the linearized problem for the current-vortex sheets to show rigorously that the magnetic effect makes the problem weakly stable, and we establish energy estimates, especially high-order energy estimates, in terms of the nonhomogeneous terms and variable coefficients. Then we exploit these results to develop a suitable iteration scheme of the Nash-Moser-Hörmander type to deal with the loss of derivative order at the nonlinear level and establish its convergence, which leads to the existence and stability of compressible current-vortex sheets, locally in time, in three-dimensional MHD.
Solving time-dependent two-dimensional eddy current problems
NASA Technical Reports Server (NTRS)
Lee, Min Eig; Hariharan, S. I.; Ida, Nathan
1988-01-01
Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field which is incident on an infinitely long conductor is considered. The conductor is assumed to be a good but not perfect conductor. The resulting problem is an interface initial boundary value problem, with the boundary of the conductor being the interface. A finite difference method is used to march the solution explicitly in time, with special consideration given to the treatment of appropriate radiation conditions. Results are validated with approximate analytic solutions. Two stringent test cases of high and low frequency incident waves are considered to validate the results.
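The abstract does not reproduce the scheme itself; as a rough illustration of explicit time-marching for eddy currents, the sketch below advances the 1D magnetic diffusion equation that governs the field inside a good conductor. The material values, slab depth, and driving frequency are illustrative assumptions, and this is not the paper's interface treatment:

```python
import numpy as np

# Illustrative explicit time-marching of the 1D magnetic diffusion
# equation dH/dt = (1/(mu*sigma)) * d2H/dx2 inside a good conductor.
mu, sigma = 4e-7 * np.pi, 5.8e7        # permeability, conductivity (copper-like)
nx, L = 200, 1e-3                      # grid points, slab depth [m]
dx = L / (nx - 1)
alpha = 1.0 / (mu * sigma)             # magnetic diffusivity
dt = 0.4 * dx**2 / alpha               # respects the explicit stability limit

H = np.zeros(nx)
for step in range(5000):
    t = step * dt
    H[0] = np.sin(2 * np.pi * 1e6 * t)   # driven field at the interface
    H[1:-1] += alpha * dt / dx**2 * (H[2:] - 2 * H[1:-1] + H[:-2])
    H[-1] = 0.0                          # field assumed dead deep in the slab
```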
NASA Astrophysics Data System (ADS)
Chang, Der-Chen; Markina, Irina; Wang, Wei
2016-09-01
The k-Cauchy-Fueter operator D_0^{(k)} on one-dimensional quaternionic space H is the Euclidean version of the spin k/2 massless field operator on Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then, by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D_0^{(k)} u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ^1_{(k)}(Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite-dimensional, while the second cohomology group is always trivial.
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space, layer by layer. An iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
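As a point of comparison for the layer-by-layer idea, here is a minimal shelf-style greedy placement in Python. It is a toy stand-in, not the paper's bidirectional heuristic or its iterative local search, and the function name and interface are invented for illustration:

```python
def shelf_pack(rects, W, H):
    """Toy shelf (layer-by-layer) packing: place rectangles left to
    right on the current shelf and open a new shelf when the width is
    exhausted. Not the paper's bidirectional heuristic."""
    placed, x, y, shelf_h = [], 0, 0, 0
    for w, h in sorted(rects, key=lambda r: -r[1]):   # tallest first
        if w > W or h > H:
            continue                   # piece can never fit
        if x + w > W:                  # shelf full: start a new layer
            x, y, shelf_h = 0, y + shelf_h, 0
        if y + h > H:                  # knapsack full: skip this piece
            continue
        placed.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placed

print(shelf_pack([(3, 2), (4, 5), (2, 2), (6, 3)], W=8, H=10))
```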
Duke Workshop on High-Dimensional Data Sensing and Analysis
2015-05-06
Bayesian sparse factor analysis formulation of Chen et al. (2011); this work develops multi-label PCA (MLPCA), a generative dimension reduction...version of this problem was recently treated by Banerjee et al. [1], Ravikumar et al. [2], Kolar and Xing [3], and Höfling and Tibshirani [4]. As...Not applicable. Final Report: Duke Workshop on High-Dimensional Data Sensing and Analysis. Workshop Dates: July 26-28, 2011
2014-04-01
surrogate model generation is difficult for high-dimensional problems, due to the curse of dimensionality. Variable screening methods have been...a variable screening model was developed for the quasi-molecular treatment of ion-atom collision [16]. In engineering, a confidence interval of...for high-level radioactive waste [18]. Moreover, the design sensitivity method can be extended to the variable screening method because vital
Introduction to the IWA task group on biofilm modeling.
Noguera, D R; Morgenroth, E
2004-01-01
An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one-, two-, and three-dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
Spertus, Jacob V; Normand, Sharon-Lise T
2018-04-23
High-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems. Propensity score methods are powerful causal inference tools, which are popular in health care research and are particularly useful for high-dimensional data. Recent interest has surrounded a Bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model. We discuss methods for Bayesian propensity score analysis of binary treatments, focusing on modern methods for high-dimensional Bayesian regression and the propagation of uncertainty. We introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions. Through simulations, we show the utility of horseshoe priors and Bayesian additive regression trees paired with our new estimator, while demonstrating the importance of including variance from the treatment regression model. An application to cardiac stent data with almost 500 confounders and 9000 patients illustrates approaches and facilitates comparison with existing alternatives. As measured by a falsifiability endpoint, we improved confounder adjustment compared with past observational research of the same problem. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations involving mixed partial derivatives. With the aid of the operational matrices, we transform the considered fractional-order problem into an easily solvable system of algebraic equations; solving this system then yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is verified by comparing the results of our Matlab simulations with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
Reduced-order prediction of rogue waves in two-dimensional deep-water waves
NASA Astrophysics Data System (ADS)
Sapsis, Themistoklis; Farazmand, Mohammad
2017-11-01
We consider the problem of large wave prediction in two-dimensional water waves. Such waves form due to the synergistic effect of dispersive mixing of smaller wave groups and the action of localized nonlinear wave interactions that leads to focusing. Instead of a direct simulation approach, we rely on the decomposition of the wave field into a discrete set of localized wave groups with optimal length scales and amplitudes. Due to the short-term character of the prediction, these wave groups do not interact and therefore their dynamics can be characterized individually. Using direct numerical simulations of the governing envelope equations we precompute the expected maximum elevation for each of those wave groups. The combination of the wave field decomposition algorithm, which provides information about the statistics of the system, and the precomputed map for the expected wave group elevation, which encodes dynamical information, allows (i) for understanding of how the probability of occurrence of rogue waves changes as the spectrum parameters vary, (ii) the computation of a critical length scale characterizing wave groups with high probability of evolving to rogue waves, and (iii) the formulation of a robust and parsimonious reduced-order prediction scheme for large waves. T.S. has been supported through the ONR Grants N00014-14-1-0520 and N00014-15-1-2381 and the AFOSR Grant FA9550-16-1-0231. M.F. has been supported through the second of these grants.
Low-dimensional representations of the three component loop braid group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul; Chang, Liang; Hong, Seung-Moon
2015-11-01
Motivated by physical and topological applications, we study representations of the group LB3 of motions of 3 unlinked oriented circles in R^3. Our point of view is to regard the three-strand braid group B3 as a subgroup of LB3 and study the problem of extending B3 representations. We introduce the notion of a standard extension and characterize B3 representations admitting such an extension. In particular, we show, using a classification result of Tuba and Wenzl, that every irreducible B3 representation of dimension at most 5 has a (standard) extension. We show that this result is sharp by exhibiting an irreducible 6-dimensional B3 representation that has no extension (standard or otherwise). We obtain complete classifications of (1) irreducible 2-dimensional LB3 representations, (2) extensions of irreducible B3 representations, and (3) irreducible LB3 representations whose restriction to B3 has abelian image.
Aerodynamics of Engine-Airframe Interaction
NASA Technical Reports Server (NTRS)
Caughey, D. A.
1986-01-01
The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focuses upon the development of solution-adaptive grid procedures for these problems, and the development of multigrid algorithms in conjunction with both implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated upon two-dimensional problems, and has been in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error; and (2) the development of a multigrid scheme for solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high-speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
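The paper's optimization runs intrinsically on the positive definite matrix group; the sketch below shows only the generic flavour of such a step in plain NumPy, with a Euclidean gradient on pairwise constraints followed by an eigenvalue projection back onto the cone. All names, the loss, and the learning rate are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the positive semidefinite cone
    by clipping negative eigenvalues (a stand-in for staying on the
    positive definite matrix group)."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.clip(w, 1e-8, None)) @ V.T

def learn_metric(X, sim, dis, lr=0.01, iters=200):
    """Toy Mahalanobis metric learning: shrink distances over similar
    pairs (sim) and grow them over dissimilar pairs (dis)."""
    d = X.shape[1]
    M = np.eye(d)
    for _ in range(iters):
        G = np.zeros((d, d))
        for i, j in sim:                  # pull similar pairs together
            z = (X[i] - X[j])[:, None]
            G += z @ z.T
        for i, j in dis:                  # push dissimilar pairs apart
            z = (X[i] - X[j])[:, None]
            G -= z @ z.T
        M = project_psd(M - lr * G)
    return M
```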
Saad, Laura O; do Rosario, Maria C; Cesar, Raony C; Batistuzzo, Marcelo C; Hoexter, Marcelo Q; Manfro, Gisele G; Shavitt, Roseli G; Leckman, James F; Miguel, Eurípedes C; Alvarenga, Pedro G
2017-05-01
The aims of this study were (1) to assess obsessive-compulsive symptoms (OCS) dimensionally in a school-aged community sample and to correlate them with clinical and demographical variables; (2) to determine a subgroup with significant OCS ("at-risk for OCD") using the Child Behavior Checklist (CBCL-OCS); (3) to compare it with the rest of the sample; and (4) to review the CBCL-OCS subscale properties as a screening tool for pediatric OCD. Data from the Brazilian High Risk Cohort were analyzed. The presence and severity of OCS were assessed through the CBCL-OCS subscale. DSM-IV psychiatric diagnoses were obtained by the Developmental and Well-Being Assessment. Behavioral problems were assessed using the Strengths and Difficulties Questionnaire, the Youth Strengths Inventory, and the CBCL internalizing and externalizing behavior subscales. A total of 2512 children (mean age: 8.86 ± 1.84 years; 55.0% male) were included. Moderate correlations were found between OCS severity and functional impairment (r = 0.36, p < 0.001). Children with higher levels of OCS had higher rates of psychiatric comorbidity and behavioral problems (p < 0.001). A score of 5 or higher on the CBCL-OCS scale determined an "at-risk for OCD" subgroup, comprising 9.7% of the sample (n = 244), with behavioral patterns and psychiatric comorbidities (e.g., tics [odds ratio, OR = 6.41, p < 0.001], grouped anxiety disorders [OR = 3.68, p < 0.001], and depressive disorders [OR = 3.0, p < 0.001]) very similar to those described in OCD. The sensitivity, specificity, positive predictive value, and negative predictive value of the CBCL-OCS for OCD diagnosis were, respectively, 48%, 91.5%, 15.1%, and 98.2%. The dimensional approach suggests that the presence of OCS in children is associated with higher rates of comorbidity, behavioral problems, and impairment. The "at-risk for OCD" group defined by the CBCL revealed a group of patients phenotypically similar to full-blown OCD.
Computer aided photographic engineering
NASA Technical Reports Server (NTRS)
Hixson, Jeffrey A.; Rieckhoff, Tom
1988-01-01
High speed photography is an excellent source of engineering data but only provides a two-dimensional representation of a three-dimensional event. Multiple cameras can be used to provide data for the third dimension but camera locations are not always available. A solution to this problem is to overlay three-dimensional CAD/CAM models of the hardware being tested onto a film or photographic image, allowing the engineer to measure surface distances, relative motions between components, and surface variations.
The initial value problem in Lagrangian drift kinetic theory
NASA Astrophysics Data System (ADS)
Burby, J. W.
2016-06-01
Existing high-order variational drift kinetic theories contain unphysical rapidly varying modes that are not seen at low orders. These unphysical modes, which may be rapidly oscillating, damped or growing, are ushered in by a failure of conventional high-order drift kinetic theory to preserve the structure of its parent model's initial value problem. In short, the (infinite-dimensional) system phase space is unphysically enlarged in conventional high-order variational drift kinetic theory. I present an alternative, 'renormalized' variational approach to drift kinetic theory that manifestly respects the parent model's initial value problem. The basic philosophy underlying this alternate approach is that high-order drift kinetic theory ought to be derived by truncating the all-orders system phase-space Lagrangian instead of the usual 'field + particle' Lagrangian. For the sake of clarity, this story is told first through the lens of a finite-dimensional toy model of high-order variational drift kinetics; the analogous full-on drift kinetic story is discussed subsequently. The renormalized drift kinetic system, while variational and just as formally accurate as conventional formulations, does not support the troublesome rapidly varying modes.
Macroscopic response in active nonlinear photonic crystals.
Alagappan, Gandhi; John, Sajeev; Li, Er Ping
2013-09-15
We derive macroscopic equations of motion for the slowly varying electric field amplitude in three-dimensional active nonlinear optical nanostructures. We show that the microscopic Maxwell equations and polarization dynamics can be simplified to a macroscopic one-dimensional problem in the direction of group velocity. For a three-level active material, we derive the steady-state equations for normal mode frequency, threshold pumping, nonlinear Bloch mode amplitude, and lasing in photonic crystals. Our analytical results accurately recapture the results of exact numerical methods.
Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.
Ercan, Ali; Kavvas, M Levent
2017-07-25
Scaling conditions to achieve self-similar solutions of 3-Dimensional (3D) Reynolds-Averaged Navier-Stokes Equations, as an initial and boundary value problem, are obtained by utilizing the Lie Group of Point Scaling Transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrated self-similarity within the time variation of flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics not only through the governing equations under consideration but also through the initial and boundary conditions, hence allowing one to obtain perfect self-similarity in different time and space scales. The proposed methodology can be a valuable tool in obtaining self-similar flow dynamics under a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.
High dimensional feature reduction via projection pursuit
NASA Technical Reports Server (NTRS)
Jimenez, Luis; Landgrebe, David
1994-01-01
The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of that technology is the AVIRIS system, which collects image data in 220 bands. As a result of this, new algorithms must be developed in order to analyze the more complex data effectively. Data in a high-dimensional space presents a substantial challenge, since intuitive concepts valid in a 2-3 dimensional space do not necessarily apply in higher dimensional spaces. For example, high-dimensional space is mostly empty. This results from the concentration of data in the corners of hypercubes. Other examples may be cited. Such observations suggest the need to project data to a subspace of a much lower dimension on a problem-specific basis in such a manner that information is not lost. Projection Pursuit is a technique that will accomplish such a goal. Since it processes data in lower dimensions, it should avoid many of the difficulties of high dimensional spaces. In this paper, we begin the investigation of some of the properties of Projection Pursuit for this purpose.
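As a minimal illustration of the projection pursuit idea discussed here, the sketch below searches random unit directions for a one-dimensional projection maximizing a simple non-Gaussianity index. The index choice (absolute excess kurtosis) and all names are assumptions made for illustration, not the paper's formulation:

```python
import numpy as np

def pursue_projection(X, n_trials=2000, seed=0):
    """Toy 1D projection pursuit: sample random unit directions and
    keep the one maximizing the absolute excess kurtosis of the
    projected, standardized data."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    best_w, best_idx = None, -np.inf
    for _ in range(n_trials):
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        z = Xc @ w
        z = (z - z.mean()) / z.std()
        idx = abs((z**4).mean() - 3.0)    # projection index
        if idx > best_idx:
            best_w, best_idx = w, idx
    return best_w, best_idx
```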
Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching
NASA Astrophysics Data System (ADS)
Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki
2018-06-01
A topography simulation of high-aspect-ratio etching considering transports of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to have an effect on the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.
Walters, Glenn D; Diamond, Pamela M; Magaletta, Philip R
2010-03-01
Three indicators derived from the Personality Assessment Inventory (PAI) Alcohol Problems scale (ALC) - tolerance/high consumption, loss of control, and negative social and psychological consequences - were subjected to taxometric analysis (mean above minus below a cut [MAMBAC], maximum covariance [MAXCOV], and latent mode factor analysis [L-Mode]) in 1,374 federal prison inmates (905 males, 469 females). Whereas the total sample yielded ambiguous results, the male subsample produced dimensional results, and the female subsample produced taxonic results. Interpreting these findings in light of previous taxometric research on alcohol abuse and dependence, it is speculated that while alcohol use disorders may be taxonic in female offenders, they are probably both taxonic and dimensional in male offenders. Two models of alcohol use disorder in males are considered: one in which the diagnostic features are categorical and the severity of symptomatology is dimensional, and one in which some diagnostic features (e.g., withdrawal) are taxonic and other features (e.g., social problems) are dimensional.
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to the response of human vision; to this end, the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm; an integral image is used to compute the average intensity of every pixel at a certain scale, which serves as the local intensity component of the image, and the detail intensity component is obtained as the residual. The third problem is to adjust the overall image intensity; an S-shaped curve is derived from the original image information, and the local intensity component is adjusted according to this curve. The fourth problem is to enhance detail information; the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local intensity component and the adjusted detail intensity component gives the final intensity. Converting the synthesized intensity together with the other two dimensions back to the output color space yields the final processed image.
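A rough sketch of the described intensity pipeline follows, assuming an intensity channel scaled to [0, 1]. The sigmoid S-curve, window radius, and detail gain are illustrative placeholders rather than the paper's tuned curves:

```python
import numpy as np

def box_mean(I, r):
    """Local mean over a (2r+1)x(2r+1) window via an integral image
    (summed-area table), with clipping at the borders."""
    S = np.pad(I, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    H, W = I.shape
    y0 = np.clip(np.arange(H) - r, 0, H); y1 = np.clip(np.arange(H) + r + 1, 0, H)
    x0 = np.clip(np.arange(W) - r, 0, W); x1 = np.clip(np.arange(W) + r + 1, 0, W)
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    return (S[y1][:, x1] - S[y1][:, x0] - S[y0][:, x1] + S[y0][:, x0]) / area

def tone_map_intensity(I, r=8, detail_gain=1.5):
    """Sketch of the pipeline on the intensity channel: local component
    from an integral image, detail as the residual, an S-shaped curve
    on the local part, and a boosted detail layer."""
    local = box_mean(I, r)
    detail = I - local
    s_curve = 1.0 / (1.0 + np.exp(-8.0 * (local - 0.5)))  # assumed S-curve
    return np.clip(s_curve + detail_gain * detail, 0.0, 1.0)
```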
A Corresponding Lie Algebra of a Reductive homogeneous Group and Its Applications
NASA Astrophysics Data System (ADS)
Zhang, Yu-Feng; Wu, Li-Xin; Rui, Wen-Juan
2015-05-01
With the help of a Lie algebra of a reductive homogeneous space G/K, where G is a Lie group and K is a resulting isotropy group, we introduce a Lax pair for which an expanding (2+1)-dimensional integrable hierarchy is obtained by applying the binormial-residue representation (BRR) method, whose Hamiltonian structure is derived from the trace identity for deducing (2+1)-dimensional integrable hierarchies, which was proposed by Tu et al. We further consider some reductions of the expanding integrable hierarchy obtained in the paper. The first reduction is just the (2+1)-dimensional AKNS hierarchy, while the second-type reduction reveals an integrable coupling of the (2+1)-dimensional AKNS equation (also called the Davey-Stewartson hierarchy), a kind of (2+1)-dimensional Schrödinger equation, which was once reobtained by Tu, Feng and Zhang. It is interesting that a new (2+1)-dimensional integrable nonlinear coupled equation is generated from the reduction of part of the (2+1)-dimensional integrable coupling, which is further reduced to the standard (2+1)-dimensional diffusion equation along with a parameter. In addition, the well-known (1+1)-dimensional AKNS hierarchy and the (1+1)-dimensional nonlinear Schrödinger equation are both special cases of the (2+1)-dimensional expanding integrable hierarchy. Finally, we discuss a few discrete difference equations of the diffusion equation whose stabilities are analyzed by making use of the von Neumann condition and the Fourier method. Some numerical solutions of a special stationary initial value problem of the (2+1)-dimensional diffusion equation are obtained, and the resulting convergence and estimation formula are investigated. Supported by the Innovation Team of Jiangsu Province hosted by China University of Mining and Technology (2014), the National Natural Science Foundation of China under Grant No. 11371361, the Fundamental Research Funds for the Central Universities (2013XK03), and the Natural Science Foundation of Shandong Province under Grant No. ZR2013AL016
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data into a low-dimensional space, for example for visualization, is a problem of increasing concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of the multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
Artificial neural network methods in quantum mechanics
NASA Astrophysics Data System (ADS)
Lagaris, I. E.; Likas, A.; Fotiadis, D. I.
1997-08-01
In a previous article we have shown how one can employ Artificial Neural Networks (ANNs) in order to solve non-homogeneous ordinary and partial differential equations. In the present work we consider the solution of eigenvalue problems for differential and integrodifferential operators, using ANNs. We start by considering the Schrödinger equation for the Morse potential that has an analytically known solution, to test the accuracy of the method. We then proceed with the Schrödinger and the Dirac equations for a muonic atom, as well as with a nonlocal Schrödinger integrodifferential equation that models the n + α system in the framework of the resonating group method. In two dimensions we consider the well-studied Hénon-Heiles Hamiltonian and in three dimensions the model problem of three coupled anharmonic oscillators. The method in all of the treated cases proved to be highly accurate, robust and efficient. Hence it is a promising tool for tackling problems of higher complexity and dimensionality.
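In the spirit of the method, though not the authors' exact formulation, the sketch below parametrizes a trial wavefunction with a tiny network and minimizes the Rayleigh quotient of a finite-difference Hamiltonian for the harmonic oscillator. Network size, envelope, and optimizer are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# A small network parametrizes a trial wavefunction on a grid; the
# Rayleigh quotient of the finite-difference Hamiltonian is minimized.
# The harmonic oscillator is chosen because its exact ground-state
# energy (0.5 in these units) is known.
x = np.linspace(-6.0, 6.0, 201)
dx = x[1] - x[0]
V = 0.5 * x**2

def psi(p):
    W, b, v = p[:10], p[10:20], p[20:]
    net = np.tanh(np.outer(x, W) + b) @ v
    return net * np.exp(-0.25 * x**2)      # envelope enforces decay at the ends

def rayleigh(p):
    f = psi(p)
    lap = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                 # crude boundary handling
    return (f @ (-0.5 * lap + V * f)) / (f @ f)

p0 = np.random.default_rng(0).normal(size=30)
res = minimize(rayleigh, p0, method="Powell")
print("estimated ground-state energy:", res.fun)   # should settle near 0.5
```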
Multi-Dimensional, Non-Pyrolyzing Ablation Test Problems
NASA Technical Reports Server (NTRS)
Risch, Tim; Kostyk, Chris
2016-01-01
Non-pyrolyzing carbonaceous materials represent a class of candidate material for hypersonic vehicle components providing both structural and thermal protection system capabilities. Two problems relevant to this technology are presented. The first considers the one-dimensional ablation of a carbon material subject to convective heating. The second considers two-dimensional conduction in a rectangular block subject to radiative heating. Surface thermochemistry for both problems includes finite-rate surface kinetics at low temperatures, diffusion-limited ablation at intermediate temperatures, and vaporization at high temperatures. The first problem requires the solution of both the steady-state thermal profile with respect to the ablating surface and the transient thermal history for a one-dimensional ablating planar slab with temperature-dependent material properties. The slab front face is convectively heated and also reradiates to a room temperature environment. The back face is adiabatic. The steady-state temperature profile and steady-state mass loss rate should be predicted. Time-dependent front and back face temperatures, surface recession and recession rate, along with the final temperature profile, should be predicted for the time-dependent solution. The second problem requires the solution for the transient temperature history for an ablating, two-dimensional rectangular solid with anisotropic, temperature-dependent thermal properties. The front face is radiatively heated, convectively cooled, and also reradiates to a room temperature environment. The back face and sidewalls are adiabatic. The solution should include the following 9 items: final surface recession profile, time-dependent temperature history of both the front face and back face at both the centerline and sidewall, as well as the time-dependent surface recession and recession rate on the front face at both the centerline and sidewall. The results of the problems from all submitters will be collected, summarized, and presented at a later conference.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1991-01-01
A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
A numerical algorithm for optimal feedback gains in high dimensional LQR problems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1986-01-01
A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
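A minimal sketch of the Newton-Kleinman step at the heart of the hybrid method follows, assuming a stabilizing initial gain; the Chandrasekhar system and the variable-acceleration Smith schemes of the actual algorithm are not reproduced here:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, Q, R, K0, iters=20):
    """Newton-Kleinman iteration for the LQR Riccati equation: each
    step solves a Lyapunov equation for the current closed loop and
    updates the feedback gain directly. K0 must be stabilizing."""
    K = K0
    for _ in range(iters):
        Ac = A - B @ K
        # Solve Ac^T P + P Ac = -(Q + K^T R K) for P.
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)      # next feedback gain
    return K, P

# Tiny usage example with a stable A, so K0 = 0 is stabilizing.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K, P = newton_kleinman(A, B, Q, R, K0=np.zeros((1, 2)))
```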
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow one to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
Hyper-spectral image segmentation using spectral clustering with covariance descriptors
NASA Astrophysics Data System (ADS)
Kursun, Olcay; Karabiber, Fethullah; Koc, Cemalettin; Bal, Abdullah
2009-02-01
Image segmentation is an important and difficult computer vision problem. Hyper-spectral images pose even more difficulty due to their high dimensionality. Spectral clustering (SC) is a recently popular clustering/segmentation algorithm. In general, SC lifts the data to a high-dimensional space, also known as the kernel trick, then derives eigenvectors in this new space, and finally partitions the data into clusters using these new dimensions. We demonstrate that SC works efficiently when combined with covariance descriptors, which can be used to assess pixelwise similarities rather than working in the high-dimensional Euclidean space. We present the formulations and some preliminary results of the proposed hybrid image segmentation method for hyper-spectral images.
An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures
NASA Technical Reports Server (NTRS)
Cohl, H.
1994-01-01
We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.
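The sketch below shows the same splitting idea on a Cartesian analogue: an FFT in the periodic direction decouples the Poisson equation into independent tridiagonal line solves, one per Fourier mode. The cylindrical-grid details and the parallel ADI machinery of the paper are omitted, and all names are illustrative:

```python
import numpy as np
from scipy.linalg import solve_banded

def poisson_fft_tridiag(f, Lx, Ly):
    """Solve u_xx + u_yy = f on a box, periodic in x and with
    homogeneous Dirichlet conditions in y (f holds interior values):
    FFT in x, then one tridiagonal solve in y per Fourier mode."""
    ny, nx = f.shape
    dx, dy = Lx / nx, Ly / (ny + 1)
    fh = np.fft.fft(f, axis=1)
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    lam = -(2 * np.sin(k * dx / 2) / dx) ** 2   # FD symbol of d2/dx2
    uh = np.empty_like(fh)
    for j in range(nx):
        ab = np.zeros((3, ny), dtype=complex)   # banded (1,1) storage
        ab[0, 1:] = 1 / dy**2                   # super-diagonal
        ab[1, :] = -2 / dy**2 + lam[j]          # main diagonal
        ab[2, :-1] = 1 / dy**2                  # sub-diagonal
        uh[:, j] = solve_banded((1, 1), ab, fh[:, j])
    return np.fft.ifft(uh, axis=1).real
```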
Quantum key distribution session with 16-dimensional photonic states.
Etcheverry, S; Cañas, G; Gómez, E S; Nogueira, W A T; Saavedra, C; Xavier, G B; Lima, G
2013-01-01
The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties, that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations has been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD.
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve the computational efficiency and to expand the problem scale is of great significance for research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of multi-physical process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase-field simulation, reaching 13 times that of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization performs better, reaching 1.7 times the performance of the basic multi-GPU model when 21 GPUs are used.
Chaos and Robustness in a Single Family of Genetic Oscillatory Networks
Fu, Daniel; Tan, Patrick; Kuznetsov, Alexey; Molkov, Yaroslav I.
2014-01-01
Genetic oscillatory networks can be mathematically modeled with delay differential equations (DDEs). Interpreting genetic networks with DDEs gives a more intuitive understanding from a biological standpoint. However, it presents a problem mathematically, for DDEs are by construction infinite-dimensional and thus cannot be analyzed using methods common for systems of ordinary differential equations (ODEs). In our study, we address this problem by developing a method for reducing infinite-dimensional DDEs to two- and three-dimensional systems of ODEs. We find that the three-dimensional reductions provide qualitative improvements over the two-dimensional reductions. We find that the reducibility of a DDE corresponds to its robustness. For non-robust DDEs that exhibit high-dimensional dynamics, we calculate analytic dimension lines to predict the dependence of the DDEs' correlation dimension on parameters. From these lines, we deduce that the correlation dimension of non-robust DDEs grows linearly with the delay. On the other hand, for robust DDEs, we find that the period of oscillation grows linearly with delay. We find that DDEs with exclusively negative feedback are robust, whereas DDEs with feedback that changes its sign are not robust. We find that non-saturable degradation damps oscillations and narrows the range of parameter values for which oscillations exist. Finally, we deduce that natural genetic oscillators with highly-regular periods likely have solely negative feedback. PMID:24667178
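As a concrete instance of the kind of delayed negative-feedback model discussed here, the sketch below integrates a single-gene repression DDE with a history ring buffer. The Hill-type form and all parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Euler integration of a delayed negative-feedback gene model,
#   x'(t) = beta / (1 + x(t - tau)^n) - gamma * x(t),
# storing the last tau/dt states in a ring buffer for the delay term.
beta, gamma, n, tau = 4.0, 1.0, 4, 10.0
dt, T = 0.01, 200.0
lag = int(tau / dt)
hist = np.full(lag, 0.5)               # constant history on [-tau, 0]
x, xs = 0.5, []
for step in range(int(T / dt)):
    x_del = hist[step % lag]           # x(t - tau) read from the buffer
    x_new = x + dt * (beta / (1 + x_del**n) - gamma * x)
    hist[step % lag] = x               # overwrite the slot just consumed
    x = x_new
    xs.append(x)                       # xs traces the oscillation
```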
Reference manual for the POISSON/SUPERFISH Group of Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised
USDA-ARS?s Scientific Manuscript database
In this article, we propose several new approaches for post-processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high-dimensional regression problems the models constructed by post-processing the rules with ...
The nonconvex multi-dimensional Riemann problem for Hamilton-Jacobi equations
NASA Technical Reports Server (NTRS)
Bardi, Martino; Osher, Stanley
1991-01-01
Simple inequalities are presented for the viscosity solution of a Hamilton-Jacobi equation in N space dimensions when neither the initial data nor the Hamiltonian need be convex (or concave). The initial data are uniformly Lipschitz and can be written as the sum of a convex function in a group of variables and a concave function in the remaining variables, therefore including the nonconvex Riemann problem. The inequalities become equalities wherever a 'maxmin' equals a 'minmax', and thus a representation formula for this problem is obtained, generalizing the classical Hopf formulas.
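For context, the representation formula generalizes the two classical Hopf formulas, recalled here from the standard theory (with * denoting the Legendre-Fenchel transform):

```latex
% Hopf-Lax formula: H convex and coercive, u_0 uniformly Lipschitz.
\[
  u(x,t) = \min_{y \in \mathbb{R}^N}
    \Big\{ u_0(y) + t\,H^{*}\!\Big(\tfrac{x-y}{t}\Big) \Big\}
\]
% Second Hopf formula: u_0 convex, H merely continuous.
\[
  u(x,t) = \max_{p \in \mathbb{R}^N}
    \big\{ \langle p, x \rangle - u_0^{*}(p) - t\,H(p) \big\}
\]
```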
2012-01-01
Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
One-dimensional Gromov minimal filling problem
NASA Astrophysics Data System (ADS)
Ivanov, Alexandr O.; Tuzhilin, Alexey A.
2012-05-01
The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.
An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Han, Jianqiang; Tang, Huazhong
2007-01-01
This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh-redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell-averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.
Kim, Ye Ji; Kim, Sun Min; Yu, Chunghyeon; Yoo, YoungMin; Cho, Eun Jin; Yang, Jung Woon; Kim, Sung Wng
2017-01-31
Halogenated organic compounds are important anthropogenic chemicals widely used in the chemical industry, biology, and pharmacology; however, the persistence and inertness of organic halides cause human health problems and considerable environmental pollution. Thus, the elimination or replacement of halogen atoms in organic halides has been considered a central task in synthetic chemistry. In dehalogenation reactions, consecutive single-electron transfer from reducing agents generates the radical and the corresponding carbanion and thus removes the halogen atom as the leaving group. Herein, we report a new strategy for efficient chemoselective hydrodehalogenation through the formation of stable carbanion intermediates, which is simply achieved by using the highly mobile two-dimensional electrons of the inorganic electride [Ca2N]+·e− with effective electron transfer ability. The consecutive single-electron transfer from the inorganic electride [Ca2N]+·e− stabilizes free carbanions, which is a key step in achieving the selective reaction. Furthermore, a determinant more important than leaving-group ability is control of the stability of the free carbanions according to the s character determined by the backbone structure. We anticipate that this approach may provide new insight into selective chemical transformations, including hydrodehalogenation.
Arif, Muhammad
2012-06-01
In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a Similarity-Dissimilarity plot, which can project a high-dimensional space to a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap of features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
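One plausible reading of such a plot, shown in the hedged sketch below, is: for each sample, plot the distance to its nearest same-class neighbour against the distance to its nearest other-class neighbour; points below the diagonal lie closer to another class and flag likely overlap or misclassification. The exact construction in the paper may differ, and all parameters are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (50, 10)), rng.normal(1.0, 1, (50, 10))])
y = np.repeat([0, 1], 50)

D = pairwise_distances(X)
np.fill_diagonal(D, np.inf)                  # ignore self-distances
same = np.array([D[i, y == y[i]].min() for i in range(len(y))])
diff = np.array([D[i, y != y[i]].min() for i in range(len(y))])

plt.scatter(same, diff, c=y, cmap="coolwarm", s=15)
lim = max(same.max(), diff.max())
plt.plot([0, lim], [0, lim], "k--")          # points below the line: likely confusion
plt.xlabel("similarity: nearest same-class distance")
plt.ylabel("dissimilarity: nearest other-class distance")
plt.show()
```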
Sex differences in the behavior of children with the 22q11 deletion syndrome
Sobin, Christina; Kiley-Brabeck, Karen; Monk, Samantha Hadley; Khuri, Jananne; Karayiorgou, Maria
2009-01-01
High rates of psychiatric impairment in adults with 22q11DS suggest that behavioral trajectories of children with 22q11DS may provide critical etiologic insights. Past findings that report DSM diagnoses are extremely variable; moreover, sex differences in behavior have not yet been examined. Dimensional CBCL ratings from 82 children, including 51 with 22q11DS and 31 control siblings, were analyzed. Strikingly consistent with rates of psychiatric impairment among affected adults, 25% of children with 22q11DS had high CBCL scores for Total Impairment, and 20% had high CBCL Internalizing Scale scores. Males accounted for 90% of high Internalizing scores and 67% of high Total Impairment scores. Attention and Social Problems were ubiquitous; more affected males than females (23% vs. 4%) scored high on Thought Problems. With regard to CBCL/DSM overlap, 20% of affected males, as compared with no affected females, had one or more high CBCL ratings in the absence of a DSM diagnosis. Behaviors of children with 22q11DS are characterized by marked sex differences when rated dimensionally, with significantly more males experiencing Internalizing and Thought Problems. Categorical diagnoses do not reflect behavioral differences between male and female children with 22q11DS, and may miss significant behavior problems in 20% of affected males. PMID:19217670
Robust Multigrid Smoothers for Three Dimensional Elliptic Equations with Strong Anisotropies
NASA Technical Reports Server (NTRS)
Llorente, Ignacio M.; Melson, N. Duane
1998-01-01
We discuss the behavior of several plane relaxation methods as multigrid smoothers for the solution of a discrete anisotropic elliptic model problem on cell-centered grids. The methods compared are plane Jacobi with damping, plane Jacobi with partial damping, plane Gauss-Seidel, plane zebra Gauss-Seidel, and line Gauss-Seidel. Based on numerical experiments and local mode analysis, we compare the smoothing factor of the different methods in the presence of strong anisotropies. A four-color Gauss-Seidel method is found to have the best numerical and architectural properties of the methods considered in the present work. Although alternating direction plane relaxation schemes are simpler and more robust than other approaches, they are not currently used in industrial and production codes because they require the solution of a two-dimensional problem for each plane in each direction. We verify the theoretical predictions of Thole and Trottenberg that an exact solution of each plane is not necessary and that a single two-dimensional multigrid cycle gives the same result as an exact solution, in much less execution time. Parallelization of the two-dimensional multigrid cycles, the kernel of the three-dimensional implicit solver, is also discussed. Alternating-plane smoothers are found to be highly efficient multigrid smoothers for anisotropic elliptic problems.
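As a simplified illustration of coloured relaxation (the paper's smoothers are plane-based and three-dimensional), here is a pointwise red-black Gauss-Seidel sweep for the 2D Poisson equation; the colouring decouples updates within each colour, which is what gives such methods their attractive architectural (parallelisation) properties.

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=1):
    # colour (i + j) % 2 splits the grid into two independent sets
    for _ in range(sweeps):
        for colour in (0, 1):                        # red sweep, then black sweep
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == colour:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                          u[i, j - 1] + u[i, j + 1] -
                                          h * h * f[i, j])
    return u

n = 33
u = np.zeros((n, n)); f = np.ones((n, n)); h = 1.0 / (n - 1)
for _ in range(50):
    u = red_black_gauss_seidel(u, f, h)
print(u[n // 2, n // 2])   # approaches the discrete solution's centre value
```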
On l(1): Optimal decentralized performance
NASA Technical Reports Server (NTRS)
Sourlas, Dennis; Manousiouthakis, Vasilios
1993-01-01
In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l^1 optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l^1 decentralized performance problem is presented. A global optimization approach to the solution of the approximating problems is also discussed.
Counting in Lattices: Combinatorial Problems from Statistical Mechanics.
NASA Astrophysics Data System (ADS)
Randall, Dana Jill
In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two-dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self-avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of California dissertation year fellowship and Esprit working group "RAND". Part of this work was done while visiting ICSI and the University of Edinburgh.
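For very short walks the quantity in question can be computed exactly by brute-force enumeration, which makes plain why approximation algorithms are needed: the counts grow exponentially with walk length. A small sketch:

```python
# Exact enumeration of self-avoiding walks on the 2D square lattice by
# depth-first search -- feasible only for short walks.
def count_saw(n, pos=(0, 0), visited=None):
    if visited is None:
        visited = {(0, 0)}
    if n == 0:
        return 1
    total = 0
    x, y = pos
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            visited.add(nxt)
            total += count_saw(n - 1, nxt, visited)
            visited.remove(nxt)
    return total

# known counts c_n = 4, 12, 36, 100, 284, 780, 2172, ... (OEIS A001411)
print([count_saw(n) for n in range(1, 8)])
```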
Application of the DMRG in two dimensions: a parallel tempering algorithm
NASA Astrophysics Data System (ADS)
Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian
The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable and typically "metastable states" may appear, which are unfortunately quite robust even when keeping a very high number of DMRG states. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the XXZ model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. Supported by SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz für Hochleistungsrechnen Rheinland-Pfalz (AHRP).
Solution methods for one-dimensional viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, John M.; Simitses, George J.
1987-01-01
A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis, are discussed.
Towards a Bernsteinian Language of Description for Mathematics Classroom Discourse
ERIC Educational Resources Information Center
Straehler-Pohl, Hauke; Gellert, Uwe
2013-01-01
This article aims at developing an external language of description to investigate the problem of why particular groups of students are systematically not provided access to school mathematical knowledge. Based on Basil Bernstein's conceptualisation of power in classification, we develop a three-dimensional model that operationalises the…
Nonclassical models of the theory of plates and shells
NASA Astrophysics Data System (ADS)
Annin, Boris D.; Volchkov, Yuri M.
2017-11-01
Publications dealing with the study of methods of reducing a three-dimensional problem of the elasticity theory to a two-dimensional problem of the theory of plates and shells are reviewed. Two approaches are considered: the use of kinematic and force hypotheses and expansion of solutions of the three-dimensional elasticity theory in terms of the complete system of functions. Papers where a three-dimensional problem is reduced to a two-dimensional problem with the use of several approximations of each of the unknown functions (stresses and displacements) by segments of the Legendre polynomials are also reviewed.
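The expansion idea in the second approach can be illustrated directly: approximate a through-thickness profile on z ∈ [-1, 1] by the first few Legendre polynomials. A minimal sketch, with a synthetic profile standing in for a stress or displacement field:

```python
import numpy as np
from numpy.polynomial import legendre as L

z = np.linspace(-1.0, 1.0, 201)
profile = np.exp(z) * np.cos(2.0 * z)          # stand-in for a through-thickness profile

for degree in (1, 3, 5):
    coeffs = L.legfit(z, profile, degree)      # least-squares Legendre fit
    err = np.max(np.abs(L.legval(z, coeffs) - profile))
    print(f"degree {degree}: max error {err:.2e}")  # error drops rapidly with degree
```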
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform feature-level or decision-level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high-dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA, and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide-and-conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom-up band grouping. We also propose a confidence-measure-based decision fusion technique, where the weights associated with the various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process for test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
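The bottom-up grouping step can be sketched with mutual information between adjacent bands. This is a hedged reading, not the paper's exact procedure: the binning, the threshold, and the adjacent-band criterion are all illustrative choices.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mi(a, b, bins=16):
    # discretize continuous band values, then estimate mutual information (nats)
    ha = np.digitize(a, np.histogram_bin_edges(a, bins))
    hb = np.digitize(b, np.histogram_bin_edges(b, bins))
    return mutual_info_score(ha, hb)

def group_bands(X, threshold=0.5):
    groups, current = [], [0]
    for b in range(1, X.shape[1]):
        if mi(X[:, b - 1], X[:, b]) >= threshold:
            current.append(b)                 # band joins the running contiguous group
        else:
            groups.append(current); current = [b]
    groups.append(current)
    return groups                             # each group feeds one "source" classifier

rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(500, 8)),   # strongly correlated block
               rng.normal(size=(500, 8))])               # independent block
print(group_bands(X))
```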
An Exact, Compressible One-Dimensional Riemann Solver for General, Convex Equations of State
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, James Russell
2015-03-05
This note describes an algorithm with which to compute numerical solutions to the one-dimensional, Cartesian Riemann problem for compressible flow with general, convex equations of state. While high-level descriptions of this approach are to be found in the literature, this note contains most of the necessary details required to write software for this problem. This explanation corresponds to the approach used in the source code that evaluates solutions for the 1D, Cartesian Riemann problem with a JWL equation of state in the ExactPack package [16, 29]. Numerical examples are given with the proposed computational approach for a polytropic equation of state and for the JWL equation of state.
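For the polytropic (ideal-gas) special case the skeleton of such a solver is compact: the star-region pressure is the root of a scalar nonlinear function assembled from shock and rarefaction relations. A minimal sketch following the standard Toro-style formulation; the general convex-EOS algorithm in the note has the same structure but replaces these closed-form branches:

```python
import numpy as np
from scipy.optimize import brentq

g = 1.4  # ratio of specific heats (ideal gas)

def f_side(p, rho, pk):
    a = np.sqrt(g * pk / rho)                          # sound speed
    if p > pk:   # shock branch (Rankine-Hugoniot)
        A, B = 2.0 / ((g + 1) * rho), (g - 1) / (g + 1) * pk
        return (p - pk) * np.sqrt(A / (p + B))
    else:        # rarefaction branch (isentropic relations)
        return 2 * a / (g - 1) * ((p / pk) ** ((g - 1) / (2 * g)) - 1)

def star_pressure(rhoL, uL, pL, rhoR, uR, pR):
    func = lambda p: f_side(p, rhoL, pL) + f_side(p, rhoR, pR) + (uR - uL)
    return brentq(func, 1e-8, 10 * max(pL, pR))        # root = star-region pressure

# Sod shock tube: the star pressure is approximately 0.30313
print(star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```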
Visual analytics of anomaly detection in large data streams
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.; Sharma, Ratnesh K.; Mehta, Abhay
2009-01-01
Most data streams usually are multi-dimensional, high-speed, and contain massive volumes of continuous information. They are seen in daily applications, such as telephone calls, retail sales, data center performance, and oil production operations. Many analysts want insight into the behavior of this data. They want to catch the exceptions in flight to reveal the causes of the anomalies and to take immediate action. To guide the user in finding the anomalies in the large data stream quickly, we derive a new automated neighborhood threshold marking technique, called AnomalyMarker. This technique is built on cell-based data streams and user-defined thresholds. We extend the scope of the data points around the threshold to include the surrounding areas. The idea is to define a focus area (marked area) which enables users to (1) visually group the interesting data points related to the anomalies (i.e., problems that occur persistently or occasionally) for observing their behavior; (2) discover the factors related to the anomaly by visualizing the correlations between the problem attribute and the attributes of nearby data items from the entire multi-dimensional data stream. Mining results are quickly presented in graphical representations (i.e., tooltips) for the user to zoom into the problem regions. Different algorithms are introduced which try to optimize the size and extent of the anomaly markers. We have successfully applied this technique to detect data stream anomalies in large real-world enterprise server performance and data center energy management.
Description of a highly symmetric polytope observed in Thomson's problem of charges on a hypersphere
NASA Astrophysics Data System (ADS)
Roth, J.
2007-10-01
In a recent paper, Altschuler and Pérez-Garrido [Phys. Rev. E 76, 016705 (2007)] have presented a four-dimensional polytope with 80 vertices. We demonstrate how this polytope can be derived from the regular four-dimensional 600-cell with 120 vertices if two orthogonal positive disclinations are created. Some related polytopes are also described.
Status and future plans for open source QuickPIC
NASA Astrophysics Data System (ADS)
An, Weiming; Decyk, Viktor; Mori, Warren
2017-10-01
QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed based on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wake field can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be a thousand times faster than a normal PIC code when simulating a PBA. It uses an MPI/OpenMP hybrid parallel algorithm, which can be run on either a laptop or the largest supercomputer. The open source QuickPIC is an object-oriented program with high-level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
Modeling the Atmosphere of Solar and Other Stars: Radiative Transfer with PHOENIX/3D
NASA Astrophysics Data System (ADS)
Baron, Edward
The chemical composition of stars is an important ingredient in our understanding of the formation, structure, and evolution of both the Galaxy and the Solar System. The composition of the Sun itself is an essential reference standard against which the elemental contents of other astronomical objects are compared. Recently, redetermination of the elemental abundances using three-dimensional, time-dependent hydrodynamical models of the solar atmosphere has led to a reduction in the inferred metal abundances, particularly C, N, O, and Ne. However, this reduction in metals reduces the opacity such that models of the Sun no longer agree with the observed results obtained using helioseismology. Three-dimensional (3-D) radiative transfer is an important problem in physics, astrophysics, and meteorology. Radiative transfer is extremely computationally complex and is a natural problem that requires computation at the exascale. We intend to calculate the detailed compositional structure of the Sun and other stars at high resolution with full NLTE, treating the turbulent velocity flows in full detail in order to compare results from hydrodynamics and helioseismology, and to understand the nature of the discrepancies found between the two approaches. We propose to perform 3-D high-resolution radiative transfer calculations with the PHOENIX/3D suite for the Sun and other stars, using 3-D hydrodynamic models from different groups. While NLTE radiative transfer has been treated by the groups doing hydrodynamics, they are necessarily limited in their resolution to the consideration of only a few (4-20) frequency bins, whereas we can calculate full NLTE including thousands of wavelength points, resolving the line profiles, and solving the scattering problem with extremely high angular resolution. The code has been used for the analysis of supernova spectra, stellar and planetary spectra, and for time-dependent modeling of transient objects. PHOENIX/3D runs and scales very well on Cray XC-30 and XC-40 machines (tested up to 100,800 CPU cores) and should scale up to several million cores for large simulations. Non-local problems, particularly radiation hydrodynamics problems, are at the forefront of computational astrophysics and we will share our work with the community. Our research program brings a unified modeling strategy to the results of several disparate groups and thus will provide a unifying framework with which to assess the metal abundance of stars and the chemical evolution of the Galaxy. We will bring together 3-D hydrodynamical models, detailed radiative transfer, and astronomical abundance studies. We will also provide results of interest to the atomic physics and plasma physics communities. Our work will use data from NASA telescopes including the Hubble Space Telescope and the James Webb Space Telescope. The ability to work with data from the UV to the far IR is crucial for validating our results. Our work will also extend exascale computational capabilities, which is a national goal.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
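The key structural point, that each ensemble member carries an analytically known conditional Gaussian and the full non-Gaussian PDF is their equal-weight mixture, is easy to sketch. The member means and covariances below are synthetic stand-ins for the closed-form conditional statistics of the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n_ens = 100                                            # O(100) ensemble members
means = rng.standard_t(df=3, size=(n_ens, 2))          # fat-tailed member means (synthetic)
covs = [np.eye(2) * rng.uniform(0.05, 0.2) for _ in range(n_ens)]

def mixture_pdf(x):
    # equal-weight Gaussian mixture over the ensemble
    return np.mean([multivariate_normal(m, c).pdf(x) for m, c in zip(means, covs)])

print(mixture_pdf(np.zeros(2)), mixture_pdf(np.array([6.0, 6.0])))  # bulk vs. tail density
```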
NASA Astrophysics Data System (ADS)
Guinot, Vincent
2017-11-01
The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. 9 different geometries are considered. 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when is the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for(vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.
Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S
2008-04-11
A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms, in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
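The starting point of the numerical construction, the exponential map from a sparse Lie-algebra generator to a group element, can be shown concretely for SO(3). A minimal sketch:

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    # sparse skew-symmetric generator in so(3): hat(w) @ v = w x v
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([0.0, 0.0, np.pi / 2])          # 90-degree rotation about the z axis
R = expm(hat(w))                             # exponential map: so(3) -> SO(3)
print(np.round(R, 6))
print(np.allclose(R @ R.T, np.eye(3)))       # orthogonality check: R is a rotation
```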
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the same cost as solving the corresponding analysis problems just a few times.
Three-dimensional printing and pediatric liver disease.
Alkhouri, Naim; Zein, Nizar N
2016-10-01
Enthusiastic physicians and medical researchers are investigating the role of three-dimensional printing in medicine. The purpose of the current review is to provide a concise summary of the role of three-dimensional printing technology as it relates to the field of pediatric hepatology and liver transplantation. Our group and others have recently demonstrated the feasibility of printing three-dimensional livers with identical anatomical and geometrical landmarks to the native liver to facilitate presurgical planning of complex liver surgeries. Medical educators are exploring the use of three-dimensional printed organs in anatomy classes and surgical residencies. Moreover, mini-livers are being developed by regenerative medicine scientists as a way to test new drugs and, eventually, whole livers will be grown in the laboratory to replace organs with end-stage disease, solving the organ shortage problem. From presurgical planning to medical education to, ultimately, the bioprinting of whole organs for transplantation, three-dimensional printing will change medicine as we know it in the next few years.
Computation of three-dimensional shock wave and boundary-layer interactions
NASA Technical Reports Server (NTRS)
Hung, C. M.
1985-01-01
Computations of the impingement of an oblique shock wave on a cylinder and of a supersonic flow past a blunt fin mounted on a plate are used to study three-dimensional shock wave and boundary layer interaction. In the impingement case, the problem of imposing a planar impinging shock as an outer boundary condition is discussed and the details of particle traces in the windward and leeward symmetry planes and near the body surface are presented. In the blunt fin case, differences between two-dimensional and three-dimensional separation are discussed, and the existence of a unique high-speed, low-pressure region under the separated spiral vortex core is demonstrated. The accessibility of three-dimensional separation is discussed.
Very high order discontinuous Galerkin method in elliptic problems
NASA Astrophysics Data System (ADS)
Jaśkowiec, Jan
2017-09-01
The paper deals with the high-order discontinuous Galerkin (DG) method with an approximation order that exceeds 20 and reaches 100, and even 1000 in the one-dimensional case. To achieve such a high-order solution, the DG method has to be combined with the finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can be easily adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of a linear combination of basis functions. In this sort of analysis reference elements are needed, so transformations of the reference element into the real one are needed, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the obtained matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten calculation times by a factor of hundreds.
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
NASA Astrophysics Data System (ADS)
Willert, Jeffrey; Park, H.; Knoll, D. A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
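Both families of methods accelerate the same baseline: classical power iteration for the dominant eigenvalue, in which every matrix-vector product stands for a costly transport sweep. A generic sketch, with a random matrix as a surrogate for the transport operator:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_it=10_000):
    x = np.ones(A.shape[0])
    k = 1.0
    for _ in range(max_it):
        y = A @ x                              # each product stands in for a transport sweep
        k_new = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        if abs(k_new - k) < tol:               # eigenvalue estimate has converged
            break
        k = k_new
    return k, x

rng = np.random.default_rng(0)
M = rng.random((50, 50))                       # surrogate positive operator
k, x = power_iteration(M)
print(np.isclose(k, np.max(np.abs(np.linalg.eigvals(M)))))  # True
```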
Improving Problem-Solving Skills with the Help of Plane-Space Analogies
ERIC Educational Resources Information Center
Budai, László
2013-01-01
We live our lives in three-dimensional space and encounter geometrical problems (equipment instructions, maps, etc.) every day. Yet there are not sufficient opportunities for high school students to learn geometry. New teaching methods can help remedy this. Specifically our experience indicates that there is great promise for use of geometry…
Function approximation using combined unsupervised and supervised learning.
Andras, Peter
2014-03-01
Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires an exponentially increasing volume of data as the dimensionality of the data increases. At the same time, the high-dimensional data is often arranged around a much lower dimensional manifold. Here we propose breaking the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
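A compact sketch of the two-step procedure, with a deliberately tiny hand-rolled SOM standing in for the paper's over-complete SOMs and sklearn's MLPRegressor as the supervised stage; the data lie near a 2D manifold embedded in 8D, and all sizes are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_som(X, grid=10, iters=3000, lr=0.5, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid, grid, X.shape[1]))
    gi, gj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    for t in range(iters):
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)       # best-matching unit
        h = np.exp(-((gi - bi) ** 2 + (gj - bj) ** 2) / (2 * sigma ** 2))
        W += lr * (1 - t / iters) * h[..., None] * (x - W)     # neighbourhood update
    return W

def som_coords(W, X):
    # map each sample to the 2D grid position of its best-matching unit
    d = np.linalg.norm(W[None, :, :, :] - X[:, None, None, :], axis=3)
    flat = d.reshape(len(X), -1).argmin(axis=1)
    return np.column_stack(np.unravel_index(flat, W.shape[:2])).astype(float)

rng = np.random.default_rng(1)
t = rng.uniform(-2, 2, size=(400, 2))                  # intrinsic 2D coordinates
X = np.column_stack([t, np.sin(t), np.cos(t), t ** 2]) # embedded in 8D
y = np.sin(t[:, 0]) + 0.5 * t[:, 1]
W = train_som(X)
Z = som_coords(W, X)                                   # step 1: unsupervised mapping
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(Z, y)
print(net.score(Z, y))                                 # step 2: supervised approximation
```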
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies, and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
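Using L1 penalties in both stages as a stand-in for the paper's general penalty functions, the two-stage pipeline looks roughly as follows. This is a hedged sketch on synthetic data: the simulation omits endogeneity and is only meant to show the flow, and all sizes and penalty levels are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, q = 200, 50, 80                        # samples, covariates, instruments
Z = rng.normal(size=(n, q))                  # instruments (e.g., genetic variants)
G = np.zeros((q, p)); G[:5, :3] = rng.normal(size=(5, 3))
X = Z @ G + rng.normal(size=(n, p))          # covariates (e.g., gene expressions)
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)            # complex trait

# stage 1: sparse regression of each covariate on the instruments
X_hat = np.column_stack([Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z)
                         for j in range(p)])
# stage 2: sparse regression of the trait on the fitted covariates
fit = Lasso(alpha=0.1).fit(X_hat, y)
print(np.nonzero(fit.coef_)[0])              # typically recovers covariates 0, 1, 2
```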
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...
2017-10-10
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
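The pipeline condenses to: (1) diffusion maps for intrinsic low-dimensional coordinates, (2) Gaussian process regression on those coordinates. The sketch below uses a hand-rolled diffusion map and sklearn's GP; the kernel and bandwidth choices are illustrative, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import pairwise_distances

def diffusion_map(X, eps=1.0, n_coords=2):
    K = np.exp(-pairwise_distances(X) ** 2 / eps)     # Gaussian kernel matrix
    P = K / K.sum(axis=1, keepdims=True)              # row-normalized Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector; scale coordinates by eigenvalues
    return (vecs.real[:, order] * vals.real[order])[:, 1:n_coords + 1]

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.01 * rng.normal(size=(200, 2))
X = np.hstack([X, rng.normal(scale=0.01, size=(200, 8))])   # noisy 10D ambient space
y = np.sin(3 * theta)                                 # property tied to the manifold

Z = diffusion_map(X)                                  # step 1: intrinsic coordinates
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(Z, y)
print(gp.score(Z, y))                                 # step 2: GP fit in diffusion space
```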
NASA Astrophysics Data System (ADS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the orthogonality of the projection matrix by exploiting recent results on the Stiefel manifold, i.e., the manifold of matrices with orthogonal columns. The additional benefit of our probabilistic formulation is that it allows us to select the dimensionality of the AS via the Bayesian information criterion. We validate our approach by showing that it can discover the right AS in synthetic examples without gradient information using both noiseless and noisy observations. We demonstrate that our method is able to discover the same AS as the classical approach in a challenging one-hundred-dimensional problem involving an elliptic stochastic partial differential equation with random conductivity. Finally, we use our approach to study the effect of geometric and material uncertainties on the propagation of solitary waves in a one-dimensional granular system.
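For contrast with the gradient-free method described above, the classic gradient-based active-subspace recipe is short: eigendecompose a Monte Carlo estimate of C = E[grad f grad f^T] and keep the leading eigenvectors. A minimal sketch on a function with a one-dimensional active subspace:

```python
import numpy as np

def active_subspace(grad_f, samples, k=1):
    # C = E[grad f grad f^T], estimated by Monte Carlo over input samples
    grads = np.array([grad_f(x) for x in samples])
    C = grads.T @ grads / len(samples)
    vals, vecs = np.linalg.eigh(C)
    return vals[::-1], vecs[:, ::-1][:, :k]    # descending spectrum, leading directions

# f(x) = sin(w.x) depends on x only through one linear combination
d = 20
w = np.ones(d) / np.sqrt(d)
grad_f = lambda x: np.cos(w @ x) * w           # gradient of sin(w.x)
samples = np.random.default_rng(0).normal(size=(500, d))
vals, W1 = active_subspace(grad_f, samples)
print(vals[:3])                                # a single dominant eigenvalue
print(np.abs(W1[:, 0] @ w))                    # close to 1: recovered direction aligns with w
```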
McGovern, Eimear; Kelleher, Eoin; Snow, Aisling; Walsh, Kevin; Gadallah, Bassem; Kutty, Shelby; Redmond, John M; McMahon, Colin J
2017-09-01
In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
An Interview with Matthew P. Greving, PhD. Interview by Vicki Glaser.
Greving, Matthew P
2011-10-01
Matthew P. Greving is Chief Scientific Officer at Nextval Inc., a company founded in early 2010 that has developed a discovery platform called MassInsight™. He received his PhD in Biochemistry from Arizona State University, and prior to that he spent nearly 7 years working as a software engineer. This experience in solving complex computational problems fueled his interest in developing technologies and algorithms related to acquisition and analysis of high-dimensional biochemical data. To address the existing problems associated with label-based microarray readouts, he began work on a technique for label-free mass spectrometry (MS) microarray readout compatible with both matrix-assisted laser desorption/ionization (MALDI) and matrix-free nanostructure initiator mass spectrometry (NIMS). This is the core of Nextval's MassInsight technology, which utilizes picoliter noncontact deposition of high-density arrays on mass-readout substrates along with computational algorithms for high-dimensional data processing and reduction.
Natural Scherk-Schwarz theories of the weak scale
García, Isabel Garcia; Howe, Kiel; March-Russell, John
2015-12-01
Natural supersymmetric theories of the weak scale are under growing pressure given present LHC constraints, raising the question of whether untuned supersymmetric (SUSY) solutions to the hierarchy problem are possible. In this paper, we explore a class of 5-dimensional natural SUSY theories in which SUSY is broken by the Scherk-Schwarz mechanism. We pedagogically explain how Scherk-Schwarz elegantly solves the traditional problems of 4-dimensional SUSY theories (based on the MSSM and its many variants) that usually result in an unsettling level of fine-tuning. The minimal Scherk-Schwarz set up possesses novel phenomenology, which we briefly outline. In this study, we show that achieving the observed physical Higgs mass motivates extra structure that does not significantly affect the level of tuning (always better than ~10%), and we explore three qualitatively different extensions: the addition of extra matter that couples to the Higgs, an extra U(1)' gauge group under which the Higgs is charged, and an NMSSM-like solution to the Higgs mass problem.
A mean field neural network for hierarchical module placement
NASA Technical Reports Server (NTRS)
Unaltuna, M. Kemal; Pitchumani, Vijay
1992-01-01
This paper proposes a mean field neural network for the two-dimensional module placement problem. An efficient coding scheme with only O(N log N) neurons is employed where N is the number of modules. The neurons are evolved in groups of N in log N iteration steps such that the circuit is recursively partitioned in alternating vertical and horizontal directions. In our simulations, the network was able to find optimal solutions to all test problems with up to 128 modules.
NASA Technical Reports Server (NTRS)
Srivastava, Ashok, N.; Akella, Ram; Diev, Vesselin; Kumaresan, Sakthi Preethi; McIntosh, Dawn M.; Pontikakis, Emmanuel D.; Xu, Zuobing; Zhang, Yi
2006-01-01
This paper describes the results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining techniques to discover anomalies in free-text reports regarding system health and safety of two aerospace systems. We discuss two problems of significant importance in the aviation industry. The first problem is that of automatic anomaly discovery about an aerospace system through the analysis of tens of thousands of free-text problem reports that are written about the system. The second problem that we address is that of automatic discovery of recurring anomalies, i.e., anomalies that may be described in different ways by different authors, at varying times and under varying conditions, but that are truly about the same part of the system. The intent of recurring anomaly identification is to determine project or system weaknesses or high-risk issues. The discovery of recurring anomalies is a key goal in building safe, reliable, and cost-effective aerospace systems. We address the anomaly discovery problem on thousands of free-text reports using two strategies: (1) as an unsupervised learning problem where an algorithm takes free-text reports as input and automatically groups them into different bins, where each bin corresponds to a different unknown anomaly category; and (2) as a supervised learning problem where the algorithm classifies the free-text reports into one of a number of known anomaly categories. We then discuss the application of these methods to the problem of discovering recurring anomalies. In fact the special nature of recurring anomalies (very small cluster sizes) requires incorporating new methods and measures to enhance the original approach for anomaly detection.
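The unsupervised strategy can be sketched with standard text-mining building blocks: tf-idf features plus k-means, with each cluster a candidate anomaly category. The toy reports and cluster count below are illustrative only; the paper's actual algorithms are more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "hydraulic pressure dropped during ascent",
    "loss of hydraulic pressure on climb out",
    "fuel gauge reading erratic on approach",
    "erratic fuel quantity indication before landing",
]
X = TfidfVectorizer(stop_words="english").fit_transform(reports)  # free text -> features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # hydraulic reports vs. fuel-gauge reports fall into separate bins
```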
Guide to the Revised Ground-Water Flow and Heat Transport Simulator: HYDROTHERM - Version 3
Kipp, Kenneth L.; Hsieh, Paul A.; Charlton, Scott R.
2008-01-01
The HYDROTHERM computer program simulates multi-phase ground-water flow and associated thermal energy transport in three dimensions. It can handle high fluid pressures, up to 1 × 10^9 pascals (10^4 atmospheres), and high temperatures, up to 1,200 degrees Celsius. This report documents the release of Version 3, which includes various additions, modifications, and corrections that have been made to the original simulator. Primary changes to the simulator include: (1) the ability to simulate unconfined ground-water flow, (2) a precipitation-recharge boundary condition, (3) a seepage-surface boundary condition at the land surface, (4) the removal of the limitation that a specified-pressure boundary also have a specified temperature, (5) a new iterative solver for the linear equations based on a generalized minimum-residual method, (6) the ability to use time- or depth-dependent functions for permeability, (7) the conversion of the program code to Fortran 90 to employ dynamic allocation of arrays, and (8) the incorporation of a graphical user interface (GUI) for input and output. The graphical user interface has been developed for defining a simulation, running the HYDROTHERM simulator interactively, and displaying the results. The combination of the graphical user interface and the HYDROTHERM simulator forms the HYDROTHERM INTERACTIVE (HTI) program. HTI can be used for two-dimensional simulations only. New features in Version 3 of the HYDROTHERM simulator have been verified using four test problems. Three problems come from the published literature and one problem was simulated by another partially saturated flow and thermal transport simulator. The test problems include: transient partially saturated vertical infiltration, transient one-dimensional horizontal infiltration, two-dimensional steady-state drainage with a seepage surface, and two-dimensional drainage with coupled heat transport. An example application to a hypothetical stratovolcano system with unconfined ground-water flow is presented in detail. It illustrates the use of HTI with the combination precipitation-recharge and seepage-surface boundary condition, and functions as a tutorial example problem for the new user.
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems with 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
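To make the sweep structure concrete, here is a minimal sketch of one LOD timestep for the 2D heat equation u_t = κ(u_xx + u_yy): each sweep solves independent implicit tridiagonal systems along one coordinate direction. The grid size, diffusivity, timestep, and zero-Dirichlet boundaries are assumptions; the paper's 3D, bandwidth-optimized parallel implementation is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_banded

n, kappa, dt = 64, 1.0, 1e-3
h = 1.0 / (n + 1)
r = kappa * dt / h**2

# Tridiagonal operator for one implicit sweep: (I - r*D2) u_new = u_old
ab = np.zeros((3, n))
ab[0, 1:] = -r           # superdiagonal
ab[1, :] = 1 + 2 * r     # diagonal
ab[2, :-1] = -r          # subdiagonal

u = np.random.rand(n, n)     # interior unknowns; zero Dirichlet walls assumed

for j in range(n):           # x-sweep: one tridiagonal solve per grid line
    u[:, j] = solve_banded((1, 1), ab, u[:, j])
for i in range(n):           # y-sweep: same 1D operator, other direction
    u[i, :] = solve_banded((1, 1), ab, u[i, :])
```

Because each sweep is a set of independent 1D solves, the data for each grid line stream through the cache once, which is what makes the scheme friendly to memory-bandwidth-limited architectures.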
Mechanism of Flutter A Theoretical and Experimental Investigation of the Flutter Problem
NASA Technical Reports Server (NTRS)
Theodorsen, Theodore; Garrick, I E
1940-01-01
The results of the basic flutter theory originally devised in 1934 and published as NACA Technical Report no. 496 are presented in a simpler and more complete form convenient for further studies. The paper attempts to facilitate the judgement of flutter problems by a systematic survey of the theoretical effects of the various parameters. A large number of experiments were conducted on cantilever wings, with and without ailerons, in the NACA high-speed wind tunnel for the purpose of verifying the theory and studying its adaptability to three-dimensional problems. The experiments included studies on wing taper ratios, nacelles, attached floats, and external bracings. The essential effects in the transition to the three-dimensional problem have been established. Of particular interest is the existence of specific flutter modes as distinguished from ordinary vibration modes. It is shown that there exists a remarkable agreement between theoretical and experimental results.
Scaling and dimensional analysis of acoustic streaming jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moudjed, B.; Botton, V.; Henry, D.
2014-09-15
This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
Liu, Hao; Hu, Liangbin; Meng, Ying Shirley; Li, Quan
2013-11-07
A configuration of three-dimensional Ni-Si nanocable array anodes is proposed to overcome the severe volume change problem of Si during the charging-discharging process. In the fabrication process, a simple and low cost electrodeposition is employed to deposit Si instead of the common expensive vapor phase deposition methods. The optimum composite nanocable array electrode achieves a high specific capacity of ~1900 mA h g(-1) at 0.05 C. After 100 cycles at 0.5 C, 88% of the initial capacity (~1300 mA h g(-1)) remains, suggesting its good capacity retention ability. The high performance of the composite nanocable electrode is attributed to the excellent adhesion of the active material on the three-dimensional current collector and short ionic/electronic transport pathways during cycling.
Feature Grouping and Selection Over an Undirected Graph.
Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping
2012-01-01
High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features, has been considered to be promising in promoting regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation, when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise ℓ∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves the first one by utilizing a non-convex function to reduce the estimation bias. The third one is an extension of the second method using a truncated ℓ1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.
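As a small illustration of the first (convex) formulation, the grouping penalty sums the pairwise ℓ∞ norm over the edges of the feature graph, pushing the magnitudes of connected coefficients toward a common value; the graph and coefficients below are assumptions, and the ADMM/DC solvers of the paper are not reproduced.

```python
import numpy as np

def graph_linf_penalty(beta, edges, lam=1.0):
    # Sum of max(|beta_i|, |beta_j|) over connected feature pairs.
    return lam * sum(max(abs(beta[i]), abs(beta[j])) for i, j in edges)

beta = np.array([1.2, 1.1, 0.0, -0.4])
edges = [(0, 1), (2, 3)]                 # assumed feature graph
print(graph_linf_penalty(beta, edges))   # 1.2 + 0.4 = 1.6
```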
NASA Astrophysics Data System (ADS)
Connes, Alain; Kreimer, Dirk
This paper gives a complete self-contained proof of our result announced in [6] showing that renormalization in quantum field theory is a special instance of a general mathematical procedure of extraction of finite values based on the Riemann-Hilbert problem. We shall first show that for any quantum field theory, the combinatorics of Feynman graphs gives rise to a Hopf algebra which is commutative as an algebra. It is the dual Hopf algebra of the enveloping algebra of a Lie algebra whose basis is labelled by the one particle irreducible Feynman graphs. The Lie bracket of two such graphs is computed from insertions of one graph in the other and vice versa. The corresponding Lie group G is the group of characters of this Hopf algebra. We shall then show that, using dimensional regularization, the bare (unrenormalized) theory gives rise to a loop
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam Ali
2006-01-01
A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
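A hedged sketch of the search-space-shrinking idea: repeatedly bisect the n-dimensional box along each coordinate and keep the half whose sampled objective values look better, so the GA can be started on a much smaller region. The objective, sampling budget, and bisection count are assumptions; the paper's error-bound estimation and GA-parameter selection are not reproduced.

```python
import numpy as np

def bisect_box(f, lo, hi, n_bisect=10, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(lo, float), np.array(hi, float)
    for _ in range(n_bisect):
        for d in range(len(lo)):
            mid = 0.5 * (lo[d] + hi[d])
            best = []
            for new_lo, new_hi in ((lo[d], mid), (mid, hi[d])):
                l, h = lo.copy(), hi.copy()
                l[d], h[d] = new_lo, new_hi
                pts = rng.uniform(l, h, size=(n_samples, len(lo)))
                best.append(min(f(p) for p in pts))
            if best[0] <= best[1]:      # keep the more promising half
                hi[d] = mid
            else:
                lo[d] = mid
    return lo, hi

sphere = lambda x: float(np.sum(x**2))
lo, hi = bisect_box(sphere, [-10] * 3, [10] * 3)
print(lo, hi)   # a small box near the optimum at the origin
```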
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Owing to their coupling properties, magneto-electro-elastic materials possess a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly-singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
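A hedged sketch of the variogram idea underlying VARS: for each input factor i, estimate the directional variogram γ_i(h) = 0.5·E[(f(x + h·e_i) − f(x))²] from paired model runs; factors with larger variogram values at small lags are more sensitive. The test function, lag, and sample size are assumptions, and the full VARS framework (integrated variograms across scales, star-based sampling) is not reproduced.

```python
import numpy as np

def directional_variogram(f, dim, i, h=0.1, n=2000, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1 - h, size=(n, dim))   # base points in the unit cube
    x_shift = x.copy()
    x_shift[:, i] += h                         # perturb only factor i by lag h
    return 0.5 * np.mean((f(x_shift) - f(x)) ** 2)

# Toy model: factor 0 dominates, factor 2 is nearly unimportant.
f = lambda x: 10 * x[:, 0] + x[:, 1] + 0.1 * x[:, 2]
for i in range(3):
    print(f"factor {i}: gamma(0.1) = {directional_variogram(f, 3, i):.4f}")
```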
Can compactifications solve the cosmological constant problem?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics,Massachusetts Institute of Technology,77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali
2016-06-30
Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
NASA Astrophysics Data System (ADS)
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly-undetectable stegosystems for real digital media. The main design principle is to minimize a suitably-defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10⁷. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
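For reference, a minimal sketch of LSB matching, the baseline that HUGO is contrasted with: when a cover pixel's least significant bit disagrees with the message bit, the pixel is randomly incremented or decremented by one rather than having its LSB overwritten. The cover values and message below are illustrative; HUGO's model-preserving distortion minimization is far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(1, 255, size=16, dtype=np.int64)   # avoid 0/255 clipping
message = rng.integers(0, 2, size=16)

stego = cover.copy()
mismatch = (cover & 1) != message          # pixels whose LSB must change
step = rng.choice([-1, 1], size=16)        # random +/-1, not an LSB overwrite
stego[mismatch] += step[mismatch]

assert np.all((stego & 1) == message)      # receiver just reads the LSBs
```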
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have earned great attention in recent years. We focus on remote sensing data analysis on high-dimensional space like hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most of the hierarchical algorithms associate leaves with individual clusters, and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources which are represented according to mutually independent axes to specifically represent some land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
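A compact sketch of the preprocessing idea, assuming synthetic data and scikit-learn: reduce high-dimensional pixel spectra to a few mutually independent sources with ICA, then cluster pixels in the reduced space. The paper's combined divisive/agglomerative tree is not reproduced; a plain agglomerative step stands in for it.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
S = rng.laplace(size=(500, 5))                 # independent non-Gaussian sources
A = rng.normal(size=(5, 100))                  # mixing into 100 spectral bands
pixels = S @ A + 0.01 * rng.normal(size=(500, 100))

sources = FastICA(n_components=5, random_state=0,
                  max_iter=1000).fit_transform(pixels)
labels = AgglomerativeClustering(n_clusters=4).fit_predict(sources)
print(np.bincount(labels))                     # cluster sizes
```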
Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin
2018-05-02
Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
Weather prediction using a genetic memory
NASA Technical Reports Server (NTRS)
Rogers, David
1990-01-01
Kanerva's sparse distributed memory (SDM) is an associative memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. Genetic Memory is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.
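A compact sketch of the sparse distributed memory half of the hybrid (the genetic reconfiguration of location addresses is omitted): writes update counters at every hard location whose fixed random address lies within a Hamming radius of the write address, and reads sum and threshold the counters near the read address. All sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, radius = 256, 1000, 111                  # address bits, locations, radius
loc_addr = rng.integers(0, 2, size=(M, N))     # fixed random hard locations
counters = np.zeros((M, N))

def write(addr, data):
    near = np.sum(loc_addr != addr, axis=1) <= radius
    counters[near] += 2 * data - 1             # +1 for a 1-bit, -1 for a 0-bit

def read(addr):
    near = np.sum(loc_addr != addr, axis=1) <= radius
    return (counters[near].sum(axis=0) > 0).astype(int)

addr = rng.integers(0, 2, size=N)
data = rng.integers(0, 2, size=N)
write(addr, data)
print(np.mean(read(addr) == data))             # 1.0: the pattern is recovered
```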
Characteristics of Two Groups of Angry Drivers
ERIC Educational Resources Information Center
Deffenbacher, Jerry L.; Filetti, Linda B.; Richards, Tracy L.; Lynch, Rebekah S.; Oetting, Eugene R.
2003-01-01
High anger drivers acknowledging problems with driving anger and interest in counseling (high anger/problem [HP] drivers) were compared with high and low anger drivers not acknowledging problems with driving anger and seeking counseling (high and low/nonproblem [HNP and LNP, respectively] drivers). High anger groups reported more anger while…
NASA Astrophysics Data System (ADS)
Morozov, Oleg I.
2018-06-01
An important unsolved problem in the theory of integrable systems is to find conditions guaranteeing the existence of a Lax representation for a given PDE. The exotic cohomology of the symmetry algebras opens a way to formulate such conditions in internal terms of the PDEs under study. In this paper we consider certain examples of infinite-dimensional Lie algebras with nontrivial second exotic cohomology groups and show that the Maurer-Cartan forms of the associated extensions of these Lie algebras generate Lax representations for integrable systems, both known and new ones.
One-Dimensional Fokker-Planck Equation with Quadratically Nonlinear Quasilocal Drift
NASA Astrophysics Data System (ADS)
Shapovalov, A. V.
2018-04-01
The Fokker-Planck equation in one-dimensional spacetime with quadratically nonlinear nonlocal drift in the quasilocal approximation is reduced with the help of scaling of the coordinates and time to a partial differential equation with a third derivative in the spatial variable. Determining equations for the symmetries of the reduced equation are derived and the Lie symmetries are found. A group invariant solution having the form of a traveling wave is found. Within the framework of Adomian's iterative method, the first iterations of an approximate solution of the Cauchy problem are obtained. Two illustrative examples of exact solutions are found.
Decimated Input Ensembles for Improved Generalization
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)
1999-01-01
Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on the ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this causes a reduction in the training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high-dimensional data domains) for each individual classifier. Generally, this drop in the accuracy of the individual classifier performance more than offsets any potential gains due to combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
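A hedged sketch of the decimation idea on synthetic data: each ensemble member trains on a different random subset of the input features, so every member still sees all training patterns while their errors decorrelate. The data, member count, and subset size are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=40, n_informative=10,
                           random_state=0)

n_members, n_kept = 7, 15
members = []
for _ in range(n_members):
    feats = rng.choice(X.shape[1], size=n_kept, replace=False)  # decimation
    members.append((feats,
                    LogisticRegression(max_iter=1000).fit(X[:, feats], y)))

# Combine by averaging the members' predicted probabilities.
proba = np.mean([c.predict_proba(X[:, f])[:, 1] for f, c in members], axis=0)
print(np.mean((proba > 0.5) == y))
```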
Fuzzy support vector machine for microarray imbalanced data classification
NASA Astrophysics Data System (ADS)
Ladayya, Faroh; Purnami, Santi Wulan; Irhamah
2017-11-01
DNA microarrays are data containing gene expression with small sample sizes and high numbers of features. Furthermore, class imbalance is a common problem in microarray data. This occurs when a dataset is dominated by a class which has significantly more instances than the other minority classes. Therefore, a classification method is needed that solves the problems of high dimensionality and imbalanced data. Support Vector Machine (SVM) is one of the classification methods that is capable of handling large or small samples, nonlinearity, high dimensionality, overlearning and local minimum issues. SVM has been widely applied to DNA microarray data classification and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data will be a problem because SVM treats all samples with the same importance, so the results are biased toward the majority class. To overcome the imbalanced data, Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points provide different contributions to the classifier. The minority classes have large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Given that DNA microarray data are high-dimensional with a very large number of features, it is necessary to do feature selection first using the Fast Correlation-based Filter (FCBF). In this study, the data are analyzed by SVM, FSVM, and both methods with FCBF applied, and their classification performance is compared. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
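A minimal stand-in for the fuzzy-membership idea, using per-sample weights as the membership values: minority-class samples receive larger weights so the SVM no longer treats all samples as equally important. The data, the weighting rule, and the omission of the FCBF filtering step are all assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, weights=[0.9, 0.1],
                           random_state=0)      # imbalanced two-class data

class_freq = np.bincount(y) / len(y)
membership = 1.0 / class_freq[y]                # minority samples weigh more

clf = SVC(kernel="linear").fit(X, y, sample_weight=membership)
pred = clf.predict(X)
print("minority recall:", np.mean(pred[y == 1] == 1))
```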
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
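The grouped index can be illustrated with a standard pick-freeze Monte Carlo estimate of Var(E[Y | group]) / Var(Y); the toy model and the group labels echoing the abstract are assumptions, and the paper's hierarchical framework and geostatistical realization reduction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x[:, 0] + 2 * x[:, 1] + 0.5 * x[:, 2] * x[:, 3]

n, dim = 100_000, 4
A = rng.uniform(-1, 1, size=(n, dim))
B = rng.uniform(-1, 1, size=(n, dim))

def group_index(group):
    AB = B.copy()
    AB[:, group] = A[:, group]   # freeze the grouped inputs, resample the rest
    yA, yB, yAB = f(A), f(B), f(AB)
    return np.mean(yA * (yAB - yB)) / np.var(yA)

print(group_index([0, 1]))   # e.g. a "boundary condition" group (dominant)
print(group_index([2, 3]))   # e.g. a "permeability" group (weak here)
```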
NASA Astrophysics Data System (ADS)
Liu, Zexi; Cohen, Fernand
2017-11-01
We describe an approach for synthesizing a three-dimensional (3-D) face structure from an image or images of a human face taken at a priori unknown poses using gender- and ethnicity-specific 3-D generic models. The synthesis process starts with a generic model, which is personalized as images of the person become available using preselected landmark points that are tessellated to form a high-resolution triangular mesh. From a single image, two of the three coordinates of the model are reconstructed in accordance with the given image of the person, while the third coordinate is sampled from the generic model, and the appearance is made in accordance with the image. With multiple images, all coordinates and appearance are reconstructed in accordance with the observed images. This method allows for accurate pose estimation as well as face identification in 3-D, recasting a difficult two-dimensional (2-D) face recognition problem as a much simpler 3-D surface matching problem. The estimation of the unknown pose is achieved using the Levenberg-Marquardt optimization process. Encouraging experimental results are obtained in a controlled environment with high-resolution images under a good illumination condition, as well as for images taken in an uncontrolled environment under arbitrary illumination with low-resolution cameras.
Solving quantum optimal control problems using Clebsch variables and Lin constraints
NASA Astrophysics Data System (ADS)
Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.
2018-01-01
Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem will be modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories (the classical theory defined by the objective functional and the quantum system) is established by using a suitable version of Lagrange's multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. Then a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional) is obtained. One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm that can be applied to a large class of systems. Finally, some simple examples, spin control, a simple quantum Hamiltonian with an 'Elroy beanie' type classical model and a controlled one-dimensional quantum harmonic oscillator, illustrating the main features of the theory, will be discussed.
A mediational model of self-esteem and social problem-solving in anorexia nervosa.
Paterson, Gillian; Power, Kevin; Collin, Paula; Greirson, David; Yellowlees, Alex; Park, Katy
2011-01-01
Poor problem-solving and low self-esteem are frequently cited as significant factors in the development and maintenance of anorexia nervosa. The current study examines the multi-dimensional elements of these measures and postulates a model whereby self-esteem mediates the relationship between social problem-solving and anorexic pathology, and considers the implications of this pathway. Fifty-five inpatients with a diagnosis of anorexia nervosa and 50 non-clinical controls completed three standardised multi-dimensional questionnaires pertaining to social problem-solving, self-esteem and eating pathology. Significant differences were found between clinical and non-clinical samples on all measures. Within the clinical group, the elements of social problem-solving most significant to anorexic pathology were positive problem orientation, negative problem orientation and avoidance. The components of self-esteem most significant to anorexic pathology were eating, weight and shape concern but not eating restraint. The mediational model was upheld, with social problem-solving impacting on anorexic pathology through the existence of low self-esteem. Problem orientation, that is, the cognitive processes of social problem-solving, appears to be more significant than problem-solving methods in individuals with anorexia nervosa. Negative perceptions of eating, weight and shape appear to impact on low self-esteem but level of restriction does not. Finally, results indicate that self-esteem is a significant factor in the development and execution of positive or negative social problem-solving in individuals with anorexia nervosa by mediating the relationship between those two variables. Copyright © 2010 John Wiley & Sons, Ltd and Eating Disorders Association.
A holographic model of the Kondo effect
NASA Astrophysics Data System (ADS)
Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Wu, Jackson
2013-12-01
We propose a model of the Kondo effect based on the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, also known as holography. The Kondo effect is the screening of a magnetic impurity coupled anti-ferromagnetically to a bath of conduction electrons at low temperatures. In a (1+1)-dimensional CFT description, the Kondo effect is a renormalization group flow triggered by a marginally relevant (0+1)-dimensional operator between two fixed points with the same Kac-Moody current algebra. In the large-N limit, with spin SU(N) and charge U(1) symmetries, the Kondo effect appears as a (0+1)-dimensional second-order mean-field transition in which the U(1) charge symmetry is spontaneously broken. Our holographic model, which combines the CFT and large-N descriptions, is a Chern-Simons gauge field in (2+1)-dimensional AdS space, AdS3, dual to the Kac-Moody current, coupled to a holographic superconductor along an AdS2 subspace. Our model exhibits several characteristic features of the Kondo effect, including a dynamically generated scale, a resistivity with power-law behavior in temperature at low temperatures, and a spectral flow producing a phase shift. Our holographic Kondo model may be useful for studying many open problems involving impurities, including for example the Kondo lattice problem.
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
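A sketch of the dynamic-mutation idea under stated assumptions: information gain (estimated here with mutual information) is computed once, a priori, and mapped to per-gene mutation probabilities so that uninformative features are toggled more aggressively. The data and the gain-to-probability mapping are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)

gain = mutual_info_classif(X, y, random_state=0)
g = (gain - gain.min()) / (np.ptp(gain) + 1e-12)   # normalize to [0, 1]
p_mut = 0.2 - 0.19 * g                             # high gain -> low mutation

rng = np.random.default_rng(0)
chromosome = rng.integers(0, 2, size=X.shape[1])   # 1 = feature selected
mutated = np.where(rng.uniform(size=X.shape[1]) < p_mut,
                   1 - chromosome, chromosome)     # gain-aware mutation step
```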
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
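A toy illustration (not the paper's full Liouville-feedback construction) of how non-Lipschitz dynamics generate randomness: dx/dt = sign(x)·|x|^(1/3) violates the Lipschitz condition at x = 0, so solutions leaving the origin are non-unique and an arbitrarily small perturbation selects the branch.

```python
import numpy as np

def run(eps, dt=1e-3, steps=2000):
    x = eps                           # tiny perturbation of the singular point
    for _ in range(steps):
        x += dt * np.sign(x) * abs(x) ** (1.0 / 3.0)
    return x

for eps in (1e-12, -1e-12):
    print(f"x(2) from x(0) = {eps:+.0e}: {run(eps):+.3f}")
# Two trajectories that start essentially at zero diverge to opposite O(1)
# values: the outcome is decided by the sign of the tiny perturbation.
```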
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne R.
2009-01-01
This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, which is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.
Design applications for supercomputers
NASA Technical Reports Server (NTRS)
Studerus, C. J.
1987-01-01
The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.
NASA Astrophysics Data System (ADS)
Brdar, S.; Seifert, A.
2018-01-01
We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
Verification of low-Mach number combustion codes using the method of manufactured solutions
NASA Astrophysics Data System (ADS)
Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz
2007-11-01
Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications for the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
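A minimal MMS example under stated assumptions (it is not the test suite of CDP or FUEGO): choose u(x,t) = e^(−t) sin(πx) as the manufactured solution of u_t = ν u_xx + S, derive the source S analytically, and confirm that a second-order finite-difference scheme converges at the expected spatial rate.

```python
import numpy as np

nu = 0.1
u_exact = lambda x, t: np.exp(-t) * np.sin(np.pi * x)
source = lambda x, t: (-1 + nu * np.pi**2) * u_exact(x, t)   # S = u_t - nu*u_xx

def max_error(n, t_end=0.1):
    x = np.linspace(0.0, 1.0, n + 1)
    dx = x[1] - x[0]
    steps = int(np.ceil(t_end / (0.2 * dx**2 / nu)))   # stable explicit step
    dt = t_end / steps
    u, t = u_exact(x, 0.0), 0.0
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (nu * lap + source(x, t))
        u[0] = u[-1] = 0.0                             # exact Dirichlet values
        t += dt
    return np.max(np.abs(u - u_exact(x, t_end)))

e1, e2 = max_error(32), max_error(64)
print("observed order:", np.log2(e1 / e2))             # should approach 2
```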
Assessing the heterogeneity of autism spectrum symptoms in a school population.
Morales-Hidalgo, Paula; Ferrando, Pere J; Canals, Josefa
2018-05-15
The aim of the present study was to assess whether the nature of the main autistic features (i.e., social communication problems and repetitive and restrictive patterns) are better conceptualized as dimensional or categorical in a school population. The study was based on the teacher ratings of two different age groups: 2,585 children between the ages of 10 and 12 (Primary Education; PE) and 2,502 children between the ages of 3 and 5 (Nursery Education; NE) from 60 mainstream schools. The analyses were based on Factor Mixture Analysis, a novel approach that combines dimensional and categorical features and prevents spurious latent classes from appearing. The results provided evidence of the dimensionality of autism spectrum symptoms in a school age population. The distribution of the symptoms was strongly and positively skewed but continuous; and the prevalence of high-risk symptoms for autism spectrum disorders (ASD) and social-pragmatic communication disorder (SCD) was 7.55% in NE children and 8.74% in PE. A categorical separation between SCD and ASD was not supported by our sample. In view of the results, it is necessary to establish clear cut points for detecting and diagnosing autism and to develop specific and reliable tools capable of assessing symptom severity and functional consequences in children with ASD. The results of the present study suggest that the distribution of autism spectrum symptoms is continuous and dimensional among school-aged children and thus supports the need to establish clear cut-off points for detecting and diagnosing autism. In our sample, the prevalence of high-risk symptoms for autism spectrum disorders and social-pragmatic communication disorder was around 8%. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.
A finite element approach for solution of the 3D Euler equations
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.
1986-01-01
Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.
Lee, Dongwook; Seo, Jiwon
2014-01-01
The three-dimensionally networked and layered structure of graphene hydroxide (GH) was investigated. After lengthy immersion in a NaOH solution, most of the epoxy groups in the graphene oxide were destroyed, and more hydroxyl groups were generated, transforming the graphene oxide into graphene hydroxide. Additionally, benzoic acid groups were formed, and the ether groups link the neighboring layers, creating a near-3D structure in the GH. To utilize these unique structural features, electrodes with large pores for use in supercapacitors were fabricated using thermal reduction in vacuum. The reduced GH maintained its layered structure and developed many large pores between and inside the layers. The GH electrodes exhibited high gravimetric as well as high volumetric capacitance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.
Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign-in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.
Use of a Three-Dimensional Virtual Environment to Teach Drug-Receptor Interactions
Bracegirdle, Luke; McLachlan, Sarah I.H.; Chapman, Stephen R.
2013-01-01
Objective. To determine whether using 3-dimensional (3D) technology to teach pharmacy students about the molecular basis of the interactions between drugs and their targets is more effective than traditional lecture using 2-dimensional (2D) graphics. Design. Second-year students enrolled in a 4-year masters of pharmacy program in the United Kingdom were randomly assigned to attend either a 3D or 2D presentation on 3 drug targets, the β-adrenoceptor, the Na+-K+ ATPase, and the nicotinic acetylcholine receptor. Assessment. A test was administered to assess the ability of both groups of students to solve problems that required analysis of molecular interactions in 3D space. The group that participated in the 3D teaching presentation performed significantly better on the test than the group who attended the traditional lecture with 2D graphics. A questionnaire was also administered to solicit students’ perceptions about the 3D experience. The majority of students enjoyed the 3D session and agreed that the experience increased their enthusiasm for the course. Conclusions. Viewing a 3D presentation of drug-receptor interactions improved student learning compared to learning from a traditional lecture and 2D graphics.
Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons
NASA Astrophysics Data System (ADS)
Hayden, Lorien; Alemi, Alex; Sethna, James
2014-03-01
Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low dimensional representation of high dimensional data and define a model manifold which is an embedding of this low dimensional structure in the higher dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high dimensional manifolds reveals a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.
Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.
2010-01-01
A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary and the method is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements; while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flow past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally two cases inspired from biology with highly complex three-dimensional bodies are simulated in order to demonstrate the versatility of the method.
Using a general problem-solving strategy to promote transfer.
Youssef-Shalala, Amina; Ayres, Paul; Schubert, Carina; Sweller, John
2014-09-01
Cognitive load theory was used to hypothesize that a general problem-solving strategy based on a make-as-many-moves-as-possible heuristic could facilitate problem solutions for transfer problems. In four experiments, school students were required to learn about a topic through practice with a general problem-solving strategy, through a conventional problem solving strategy or by studying worked examples. In Experiments 1 and 2 using junior high school students learning geometry, low knowledge students in the general problem-solving group scored significantly higher on near or far transfer tests than the conventional problem-solving group. In Experiment 3, an advantage for a general problem-solving group over a group presented worked examples was obtained on far transfer tests using the same curriculum materials, again presented to junior high school students. No differences between conditions were found in Experiments 1, 2, or 3 using test problems similar to the acquisition problems. Experiment 4 used senior high school students studying economics and found the general problem-solving group scored significantly higher than the conventional problem-solving group on both similar and transfer tests. It was concluded that the general problem-solving strategy was helpful for novices, but not for students that had access to domain-specific knowledge. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of 100 ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
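To make the hybrid strategy concrete, the sketch below evaluates the parametric half of the algorithm: an equal-weight mixture of conditional Gaussians recovered from a small ensemble of means and covariances. The ensemble here is synthetic and all names are hypothetical; in the paper's framework the (mean, covariance) pairs come from closed-form data assimilation formulae.

    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical ensemble of conditional Gaussian statistics for a 2-D
    # "high-dimensional" subspace (K ensemble members).
    rng = np.random.default_rng(0)
    K, d = 100, 2
    means = rng.normal(size=(K, d))
    covs = [np.diag(rng.uniform(0.2, 0.5, d)) for _ in range(K)]

    def mixture_pdf(points, means, covs):
        # Equal-weight mixture: p(u) = (1/K) * sum_k N(u; mu_k, Sigma_k)
        return np.mean([multivariate_normal(m, c).pdf(points)
                        for m, c in zip(means, covs)], axis=0)

    grid = np.stack(np.meshgrid(np.linspace(-4, 4, 80),
                                np.linspace(-4, 4, 80)), axis=-1)
    pdf = mixture_pdf(grid, means, covs)   # (80, 80) PDF values on the grid

In the full algorithm this mixture would be combined with a non-parametric kernel density estimate over the remaining low-dimensional observed variables.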
Spatial visualization in physics problem solving.
Kozhevnikov, Maria; Motes, Michael A; Hegarty, Mary
2007-07-08
Three studies were conducted to examine the relation of spatial visualization to solving kinematics problems that involved either predicting the two-dimensional motion of an object, translating from one frame of reference to another, or interpreting kinematics graphs. In Study 1, 60 physics-naïve students were administered kinematics problems and spatial visualization ability tests. In Study 2, 17 (8 high- and 9 low-spatial ability) additional students completed think-aloud protocols while they solved the kinematics problems. In Study 3, the eye movements of 15 (9 high- and 6 low-spatial ability) students were recorded while the students solved kinematics problems. In contrast to high-spatial students, most low-spatial students did not combine two motion vectors, were unable to switch frames of reference, and tended to interpret graphs literally. The results of the study suggest an important relationship between spatial visualization ability and solving kinematics problems with multiple spatial parameters. 2007 Cognitive Science Society, Inc.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
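A minimal numerical sketch of the idea, with synthetic data and illustrative constants (a simplified stand-in for the paper's estimator, not the estimator itself): remove the leading principal factors, then adaptively threshold the residual covariance entry by entry.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, K = 200, 50, 3
    X = rng.normal(size=(n, K)) @ rng.normal(size=(K, p)) + rng.normal(size=(n, p))

    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)                  # ascending eigenvalues
    lead = vecs[:, -K:] * np.sqrt(vals[-K:])        # loadings of top-K factors
    R = S - lead @ lead.T                           # residual (idiosyncratic) covariance

    # Adaptive threshold: keep entry (i, j) only if it exceeds a level scaled
    # by sqrt(R_ii * R_jj), mimicking entry-dependent thresholding.
    tau = 2.0 * np.sqrt(np.log(p) / n)
    mask = np.abs(R) >= tau * np.sqrt(np.outer(np.diag(R), np.diag(R)))
    R_thr = np.where(mask | np.eye(p, dtype=bool), R, 0.0)

    Sigma_hat = lead @ lead.T + R_thr               # final covariance estimate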
Support Vector Machines for Hyperspectral Remote Sensing Classification
NASA Technical Reports Server (NTRS)
Gualtieri, J. Anthony; Cromp, R. F.
1998-01-01
The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain performances of 96% and 87% correct for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application, this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to classification of an agriculture scene.
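A compact re-creation of the experiment's shape, with scikit-learn standing in for the authors' original SVM implementation and random spectra standing in for the real hyperspectral scene (all parameters and class counts below are illustrative):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-in for hyperspectral data: 200-band spectra, 4 classes.
    rng = np.random.default_rng(2)
    n_per, bands = 100, 200
    centers = rng.normal(size=(4, bands))
    X = np.vstack([c + 0.5 * rng.normal(size=(n_per, bands)) for c in centers])
    y = np.repeat(np.arange(4), n_per)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))

Note that no feature selection step precedes the fit: the classifier consumes all 200 bands directly, which is the point made in the abstract.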
Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem
NASA Astrophysics Data System (ADS)
Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang
2018-03-01
The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of a SO(8) monopole in a nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variables separation method in different coordinates. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, which are all in the second order of the momentum components, satisfying the condition of superintegrability. This number, 17, coincides with the prediction of the (2n - 1) law of maximal superintegrability order in the case n = 9. Until now, this law has been accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; therefore, our results can be treated as evidence that this definition of superintegrability may also apply to some vector equations such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Aokage, K.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2015-03-01
Advantages of CT scanners with high resolution have allowed the improved detection of lung cancers. Recently released positive results from the National Lung Screening Trial (NLST) in the US showed that CT screening does in fact have a positive impact on the reduction of lung cancer related mortality. While this study shows the efficacy of CT-based screening, physicians often face the problems of deciding appropriate management strategies for maximizing patient survival and for preserving lung function. Several key manifold-learning approaches efficiently reveal intrinsic low-dimensional structures latent in high-dimensional data spaces. This study was performed to investigate whether dimensionality reduction can identify embedded structures in the CT histogram feature space of non-small-cell lung cancer (NSCLC) to improve the performance in predicting the likelihood of RFS for patients with NSCLC.
Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-12-13
In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are coarser than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.
High Dimensional Classification Using Features Annealed Independence Rules.
Fan, Jianqing; Fan, Yingying
2008-01-01
Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
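The core of FAIR can be sketched in a few lines: rank features by the two-sample t-statistic, keep the top m, and classify with a diagonal (independence) rule on the selected features. The helper below is a simplified reading of that procedure; the function name and centroid-rule details are ours, not the paper's.

    import numpy as np

    def fair_classifier(X0, X1, m):
        """Sketch of a Features Annealed Independence Rule: screen by the
        two-sample t-statistic, then use a diagonal independence rule."""
        n0, n1 = len(X0), len(X1)
        mu0, mu1 = X0.mean(0), X1.mean(0)
        t = (mu1 - mu0) / np.sqrt(X0.var(0, ddof=1) / n0 + X1.var(0, ddof=1) / n1)
        keep = np.argsort(-np.abs(t))[:m]            # top-m features by |t|
        pooled = (X0.var(0, ddof=1) * (n0 - 1)
                  + X1.var(0, ddof=1) * (n1 - 1)) / (n0 + n1 - 2)

        def predict(X):
            d0 = ((X[:, keep] - mu0[keep]) ** 2 / pooled[keep]).sum(1)
            d1 = ((X[:, keep] - mu1[keep]) ** 2 / pooled[keep]).sum(1)
            return (d1 < d0).astype(int)             # class with closer centroid
        return predict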
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weseloh, Wayne N.; Clancy, Sean P.; Painter, James W.
2010-08-01
PAGOSA is a computational fluid dynamics computer program developed at Los Alamos National Laboratory (LANL) for the study of high-speed compressible flow and high-rate material deformation. PAGOSA is a three-dimensional Eulerian finite difference code, solving problems with a wide variety of equations of state (EOSs), material strength, and explosive modeling options.
ECAT: A New Computerized Tomographic Imaging System for Positron-Emitting Radiopharmaceuticals
DOE R&D Accomplishments Database
Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Kuhl, D. E.
1977-01-01
The ECAT was designed and developed as a complete computerized positron radionuclide imaging system capable of providing high-contrast, high-resolution, quantitative images in two-dimensional and tomographic formats. Flexibility in its various image mode options allows it to be used for a wide variety of imaging problems.
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTLs) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
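The flavor of mean/variance co-modeling can be sketched with a crude alternating scheme on synthetic data. The paper's actual method solves a single doubly regularized joint objective, so the loop below is only a loose stand-in with invented constants:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    n, p = 150, 300
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:5] = 2.0        # sparse mean signal
    gamma = np.zeros(p); gamma[5:8] = 0.7     # predictors driving the variance
    y = X @ beta + np.exp(0.5 * X @ gamma) * rng.normal(size=n)

    w = np.ones(n)                            # inverse-sd working weights
    for _ in range(3):                        # alternate mean and variance fits
        mean_fit = Lasso(alpha=0.1, fit_intercept=False).fit(X * w[:, None], y * w)
        r2 = np.maximum((y - X @ mean_fit.coef_) ** 2, 1e-8)
        var_fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, np.log(r2))
        w = np.exp(-0.5 * (X @ var_fit.coef_))   # weights from fitted log-variance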
NASA Astrophysics Data System (ADS)
Kogan, Ian I.
We discuss a quantum U_q[sl(2)] symmetry in the Landau problem, which naturally arises due to the relation between U_q[sl(2)] and the group of magnetic translations. The latter is connected with W_∞ and area-preserving (symplectic) diffeomorphisms which are the canonical transformations in the two-dimensional phase space. We shall discuss the hidden quantum symmetry in a 2 + 1 gauge theory with the Chern-Simons term and in a quantum Hall system, which are both connected with the Landau problem.
Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors
He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan
2017-01-01
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. PMID:27093543
Two-Dimensional Grammars And Their Applications To Artificial Intelligence
NASA Astrophysics Data System (ADS)
Lee, Edward T.
1987-05-01
During the past several years, the concepts and techniques of two-dimensional grammars [1,2] have attracted growing attention as promising avenues of approach to problems in picture generation as well as in picture description [3], representation, recognition, transformation and manipulation. Two-dimensional grammar techniques serve the purpose of exploiting the structure or underlying relationships in a picture. This approach attempts to describe a complex picture in terms of its components and their relative positions. This resembles the way a sentence is described in terms of its words and phrases, and the terms structural picture recognition, linguistic picture recognition, or syntactic picture recognition are often used. By using this approach, the problem of picture recognition becomes similar to that of phrase recognition in a language. However, when describing pictures using a string grammar (one-dimensional grammar), the only relation between sub-pictures and/or primitives is concatenation; that is, each picture or primitive can be connected only at the left or right. This one-dimensional relation has not been very effective in describing two-dimensional pictures. A natural generalization is to use two-dimensional grammars. In this paper, two-dimensional grammars and their applications to artificial intelligence are presented. Picture grammars and two-dimensional grammars are introduced and illustrated by examples. In particular, two-dimensional grammars for generating all possible squares and all possible rhombuses are presented. The applications of two-dimensional grammars to solving region filling problems are discussed. An algorithm for region filling using two-dimensional grammars is presented together with illustrative examples. The advantages of using this algorithm in terms of computation time are also stated. A high-level description of a two-level picture generation system is proposed. The first level is picture primitive generation using two-dimensional grammars. The second level is picture generation using either string description or entity-relationship (ER) diagram description. Illustrative examples are also given. The advantages of ER diagram description together with its comparison to string description are also presented. The results obtained in this paper may have useful applications in artificial intelligence, robotics, expert systems, picture processing, pattern recognition, knowledge engineering and pictorial database design. Furthermore, examples related to satellite surveillance and identification are also included.
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.
2012-02-01
We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI datasets, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
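A minimal sketch of the exponential-embedding idea, assuming a plain heat-kernel similarity matrix (the paper applies matrix exponentials inside LPP, UDP, and MFA rather than in this bare form; the function name is ours):

    import numpy as np
    from scipy.linalg import expm
    from scipy.spatial.distance import cdist

    def exponential_embedding(X, dim=2, sigma=1.0):
        """Embed rows of X via leading eigenvectors of exp(W), where W is a
        heat-kernel similarity matrix. expm(W) is symmetric positive
        definite, which is what sidesteps the small-sample-size issue."""
        W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))
        vals, vecs = np.linalg.eigh(expm(W))   # ascending eigenvalues
        return vecs[:, -dim:]                  # top-`dim` eigenvectors as coordinates

    Y = exponential_embedding(np.random.default_rng(5).normal(size=(100, 30)))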
Group invariant solution for a pre-existing fracture driven by a power-law fluid in impermeable rock
NASA Astrophysics Data System (ADS)
Fareo, A. G.; Mason, D. P.
2013-12-01
The effect of power-law rheology on hydraulic fracturing is investigated. The evolution of a two-dimensional fracture with non-zero initial length and driven by a power-law fluid is analyzed. Only fluid injection into the fracture is considered. The surrounding rock mass is impermeable. With the aid of lubrication theory and the PKN approximation, a partial differential equation for the fracture half-width is derived. Using a linear combination of the Lie-point symmetry generators of the partial differential equation, the group invariant solution is obtained and the problem is reduced to a boundary value problem for an ordinary differential equation. Exact analytical solutions are derived for hydraulic fractures with constant volume and with constant propagation speed. The asymptotic solution near the fracture tip is found. The numerical solution for general working conditions is obtained by transforming the boundary value problem to a pair of initial value problems. Throughout the paper, hydraulic fracturing with shear-thinning, Newtonian and shear-thickening fluids is compared.
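The reduction of a boundary value problem to initial value problems can be illustrated with a generic shooting scheme. The ODE and boundary values below are hypothetical stand-ins, not the fracture similarity equation itself:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Two-point BVP stand-in: f'' = f, f(0) = 1, f(1) = 0.5. Shooting turns
    # it into IVPs parameterized by the unknown initial slope.
    def shoot(slope):
        sol = solve_ivp(lambda x, y: [y[1], y[0]], (0.0, 1.0), [1.0, slope])
        return sol.y[0, -1]                    # f(1) for this trial slope

    slope = brentq(lambda s: shoot(s) - 0.5, -10.0, 10.0)  # enforce f(1) = 0.5

The root-finder adjusts the free initial condition until the far boundary condition is met, which is the same transformation-in-spirit used for the general working conditions in the paper.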
NASA Astrophysics Data System (ADS)
Katzav, Eytan
2013-04-01
In this paper, a way of applying the Dynamic Renormalization Group (DRG) method is suggested in order to cope with inconsistent results obtained when applying it to a continuous family of one-dimensional nonlocal models. The key observation is that the correct fixed-point dynamical system has to be identified during the analysis in order to account for all the relevant terms that are generated under renormalization. This is well established for static problems, but poorly implemented in dynamical ones. An application of this approach to a nonlocal extension of the Kardar-Parisi-Zhang equation resolves certain problems in one dimension. Namely, obviously problematic predictions are eliminated and the existing exact analytic results are recovered.
The effect of fertility stress on endometrial and subendometrial blood flow among infertile women.
Dong, Yuezhi; Cai, Yanna; Zhang, Yu; Xing, Yurong; Sun, Yingpu
2017-03-04
To investigate the effect of fertility stress on endometrial and subendometrial blood flow among infertile women. This case-control study was conducted in The First Affiliated Hospital of Zhengzhou University. The fertility problem inventory (FPI) was adopted to evaluate fertility stress. Three-dimensional power Doppler ultrasonography (3D PD-US) was performed during the proliferative phase of the menstrual cycle (days 5-11) to measure endometrial thickness, pattern, endometrial and subendometrial volume (V), the vascularization index (VI), the flow index (FI) and the vascularization-flow index (VFI). Then, 300 infertile women were separated into two groups (high-score group and low-score group) based on total FPI scores, and 80 healthy women were selected as controls. No differences were found among the three groups with regard to general characteristics, endometrial thickness, pattern, endometrial and subendometrial V, VI and VFI. The endometrial and subendometrial FIs associated with different stress levels differed significantly among the three groups (F = 33.95, P < 0.001; F = 44.79, P < 0.001, respectively). The endometrial and subendometrial FIs in the control group were significantly higher than those in the high- and low-score groups. The endometrial and subendometrial FIs in the low-score group were significantly higher than those in the high-score group. The total FPI score was closely related to the endometrial and subendometrial FIs (r = -0.304, P < 0.001; r = -0.407, P < 0.001, respectively). Fertility stress was associated with the endometrial and subendometrial flow indices. Whether fertility stress might affect pregnancy outcome by reducing endometrial and subendometrial blood flow requires further research.
The Role of Motion Concepts in Understanding Non-Motion Concepts
Khatin-Zadeh, Omid; Banaruee, Hassan; Khoshsima, Hooshang; Marmolejo-Ramos, Fernando
2017-01-01
This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems. PMID:29240715
Maximization of Learning Speed Due to Neuronal Redundancy in Reinforcement Learning
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2016-11-01
Adaptable neural activity contributes to the flexibility of human behavior, which is optimized in situations such as motor learning and decision making. Although learning signals in motor learning and decision making are low-dimensional, neural activity, which is very high dimensional, must be modified to achieve optimal performance based on the low-dimensional signal, resulting in a severe credit-assignment problem. Despite this problem, the human brain contains a vast number of neurons, leaving an open question: what is the functional significance of the huge number of neurons? Here, I address this question by analyzing a redundant neural network with a reinforcement-learning algorithm in which the numbers of neurons and output units are N and M, respectively. Because many combinations of neural activity can generate the same output under the condition of N ≫ M, I refer to the index N - M as neuronal redundancy. Although greater neuronal redundancy makes the credit-assignment problem more severe, I demonstrate that a greater degree of neuronal redundancy facilitates learning speed. Thus, in an apparent contradiction of the credit-assignment problem, I propose the hypothesis that a functional role of a huge number of neurons or a huge degree of neuronal redundancy is to facilitate learning speed.
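A toy version of the setup, assuming a REINFORCE-style node-perturbation rule and a single output (so neuronal redundancy is N - 1); the paper's model and analysis are more detailed than this sketch:

    import numpy as np

    rng = np.random.default_rng(8)
    N, target = 100, 1.0            # N redundant units drive one (M = 1) output
    eta, sigma = 1.0, 0.1
    w = np.zeros(N)

    for _ in range(500):
        noise = sigma * rng.normal(size=N)      # perturb every unit
        reward = -((w + noise).mean() - target) ** 2
        baseline = -(w.mean() - target) ** 2    # unperturbed reward
        w += (eta / sigma ** 2) * (reward - baseline) * noise

    print(abs(w.mean() - target))   # output error shrinks over training

Only the scalar reward carries learning information, yet all N weights are updated: the credit-assignment problem the abstract describes, with the expected update still following the reward gradient.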
NASA Astrophysics Data System (ADS)
Heuzé, Thomas
2017-10-01
We present in this work two finite volume methods for the simulation of unidimensional impact problems, both for bars and plane waves, on elastic-plastic solid media within the small strain framework. First, an extension of Lax-Wendroff to elastic-plastic constitutive models with linear and nonlinear hardenings is presented. Second, a high order TVD method based on flux-difference splitting [1] and the Superbee flux limiter [2] is coupled with an approximate elastic-plastic Riemann solver for nonlinear hardenings, and follows that of Fogarty [3] for linear ones. Thermomechanical coupling is accounted for through dissipation heating and thermal softening, and adiabatic conditions are assumed. This paper essentially focuses on one-dimensional problems since analytical solutions exist or can easily be developed. Accordingly, these two numerical methods are compared to analytical solutions and to the explicit finite element method on test cases involving discontinuous and continuous solutions. This allows us to study in more detail their respective performance during the loading, unloading and reloading stages. Particular attention is also paid to the accuracy of the computed plastic strains, some differences being found according to the numerical method used. A Lax-Wendroff two-dimensional discretization of a one-dimensional problem is also appended at the end to demonstrate the extensibility of such a numerical scheme to multidimensional problems.
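For reference, the Lax-Wendroff update is easiest to state on the linear advection equation u_t + a u_x = 0. The sketch below is this textbook special case, not the paper's elastic-plastic solver; grid and CFL choices are illustrative:

    import numpy as np

    a, L, nx = 1.0, 1.0, 200
    dx = L / nx
    dt = 0.8 * dx / a                      # CFL number c = 0.8
    c = a * dt / dx
    x = np.linspace(0.0, L, nx, endpoint=False)
    u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

    for _ in range(100):
        up = np.roll(u, -1)                # u_{j+1} (periodic domain)
        um = np.roll(u, +1)                # u_{j-1}
        # Lax-Wendroff: second-order in space and time.
        u = u - 0.5 * c * (up - um) + 0.5 * c ** 2 * (up - 2 * u + um)

The second-difference term is what raises the scheme to second order; the TVD/Superbee variant in the paper limits exactly this term near discontinuities to suppress the oscillations plain Lax-Wendroff produces.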
Urben, Sébastien; Habersaat, Stéphanie; Suter, Maya; Pihet, Sandrine; De Ridder, Jill; Stéphan, Philippe
2016-12-01
The current study investigated gender differences in the main components of antisocial behavior in an at-risk versus an offender group of adolescents. One hundred and forty-three adolescents divided into two different risk groups [at risk (n = 54) and offenders (n = 89)] were compared according to gender (111 boys and 32 girls). Externalizing symptoms were assessed with the Delinquent and Aggressive subscales of the Youth Self-report Questionnaire, internalizing problems with the Beck Anxiety Inventory and the Beck Depressive Inventory and personality traits with the Barratt-Impulsiveness Scale as well as the Youth Psychopathic Traits Inventory. Results revealed a consistent interaction pattern, with girls presenting higher levels of externalizing symptoms, more motor impulsivity and a more arrogant and deceitful interpersonal style than boys in the at-risk group. In contrast, in the offenders' group, psychopathic traits were more present in boys than in girls. Regarding internalizing problems, girls showed more depression than boys, independently of the risk group. Among offending youths, girls present externalizing problems and problematic personality traits as severe as those of boys. At-risk girls have the highest rates of difficulties across the tested domains and should therefore be specifically targeted for prevention and intervention.
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high dimensional feature space. So, it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypothesis computed by Adaboost and Naïve Bayes.
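As a rough stand-in for DTB (scikit-learn's AdaBoost over shallow decision trees is a related but different ensemble), boosting on bag-of-words features reproduces the shape of the experiment; the corpus and parameters below are invented for illustration:

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.feature_extraction.text import CountVectorizer

    # Hypothetical miniature corpus; real experiments use a spam benchmark.
    texts = ["win money now", "meeting at noon", "cheap pills online",
             "lunch tomorrow?", "free prize claim", "project status update"]
    labels = [1, 0, 1, 0, 1, 0]           # 1 = spam

    vec = CountVectorizer()
    X = vec.fit_transform(texts)          # high-dimensional sparse features
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # base_estimator= on sklearn < 1.2
        n_estimators=50).fit(X, labels)

    print(clf.predict(vec.transform(["free money"])))   # expected: [1]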
Stable Sparse Classifiers Identify qEEG Signatures that Predict Learning Disabilities (NOS) Severity
Bosch-Bayard, Jorge; Galán-García, Lídice; Fernandez, Thalia; Lirio, Rolando B.; Bringas-Vega, Maria L.; Roca-Stappung, Milene; Ricardo-Garcell, Josefina; Harmony, Thalía; Valdes-Sosa, Pedro A.
2018-01-01
In this paper, we present a novel methodology to solve the classification problem, based on sparse (data-driven) regressions, combined with techniques for ensuring stability, especially useful for high-dimensional datasets and small sample sizes. The sensitivity and specificity of the classifiers are assessed by a stable ROC procedure, which uses a non-parametric algorithm for estimating the area under the ROC curve. This method allows assessing the performance of the classification by the ROC technique, when more than two groups are involved in the classification problem, i.e., when the gold standard is not binary. We apply this methodology to the EEG spectral signatures to find biomarkers that allow discriminating between (and predicting pertinence to) different subgroups of children diagnosed as Not Otherwise Specified Learning Disabilities (LD-NOS) disorder. Children with LD-NOS have notable learning difficulties, which affect education but are not able to be put into some specific category as reading (Dyslexia), Mathematics (Dyscalculia), or Writing (Dysgraphia). By using the EEG spectra, we aim to identify EEG patterns that may be related to specific learning disabilities in an individual case. This could be useful to develop subject-based methods of therapy, based on information provided by the EEG. Here we study 85 LD-NOS children, divided into three subgroups previously selected by a clustering technique over the scores of cognitive tests. The classification equation produced stable marginal areas under the ROC of 0.71 for discrimination between Group 1 vs. Group 2; 0.91 for Group 1 vs. Group 3; and 0.75 for Group 2 vs. Group 3. A discussion of the EEG characteristics of each group related to the cognitive scores is also presented. PMID:29379411
NASA Astrophysics Data System (ADS)
Li, Chunfei; Bando, Yoshio; Nakamura, Masaki; Onoda, Mitsuko; Kimizuka, Noboru
1998-09-01
The modulated structures appearing in the homologous compounds InMO3(ZnO)m (M = In, Ga; m = integer) were observed by using a high-resolution transmission electron microscope and are described based on a four-dimensional superspace group. The electron diffraction patterns for compounds with m larger than 6 reveal extra spots, indicating the formation of a modulated structure. The subcell structures for m = odd and even numbers are assigned to be either monoclinic or orthorhombic, respectively. On the other hand, the extra spots can be indexed by a one-dimensional modulated structure. The possible space groups for the subcell structure are Cm, C2, and C2/m for m = odd numbers, while those for m = even numbers are Ccm21 and Ccmm, respectively. The corresponding possible superspace groups are assigned to be PC2s, PCm1̄, and PC2/ms1̄ for odd m numbers, and PCcm21 11̄1̄ and PCcmm 11̄1 for even m numbers. Based on the superspace group determination, a structure model for a one-dimensional modulated structure is proposed.
A Dimensionally Reduced Clustering Methodology for Heterogeneous Occupational Medicine Data Mining.
Saâdaoui, Foued; Bertrand, Pierre R; Boudet, Gil; Rouffiac, Karine; Dutheil, Frédéric; Chamoux, Alain
2015-10-01
Clustering is a set of statistical learning techniques aimed at partitioning heterogeneous data into homogeneous groups called clusters. There are several fields in which clustering has been successfully applied, such as medicine, biology, finance, and economics. In this paper, we introduce the notion of clustering in multifactorial data analysis problems. A case study is conducted for an occupational medicine problem with the purpose of analyzing patterns in a population of 813 individuals. To reduce the data set dimensionality, we base our approach on Principal Component Analysis (PCA), the statistical tool most commonly used in factorial analysis. However, problems in nature, especially in medicine, are often based on heterogeneous-type qualitative-quantitative measurements, whereas PCA only processes quantitative ones. Besides, qualitative data are originally unobservable quantitative responses that are usually binary-coded. Hence, we propose a new set of strategies allowing quantitative and qualitative data to be handled simultaneously. The principle of this approach is to perform a projection of the qualitative variables on the subspaces spanned by quantitative ones. Subsequently, an optimal model is allocated to the resulting PCA-regressed subspaces.
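The projection strategy can be sketched as follows, with hypothetical data and our own choice of estimators (linear regression of the binary-coded qualitative variables onto the PCA scores, followed by clustering):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    n = 300
    X_quant = rng.normal(size=(n, 6))                  # quantitative block
    X_qual = (rng.random((n, 3)) < 0.5).astype(float)  # binary-coded qualitative block

    scores = PCA(n_components=3).fit_transform(X_quant)
    proj = LinearRegression().fit(scores, X_qual).predict(scores)  # qualitatives
                                                       # projected on the PCA subspace
    features = np.hstack([scores, proj])
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)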
NASA Astrophysics Data System (ADS)
Jin, L.; Zoback, M. D.
2017-10-01
We formulate the problem of fully coupled transient fluid flow and quasi-static poroelasticity in arbitrarily fractured, deformable porous media saturated with a single-phase compressible fluid. The fractures we consider are hydraulically highly conductive, allowing discontinuous fluid flux across them; mechanically, they act as finite-thickness shear deformation zones prior to failure (i.e., nonslipping and nonpropagating), leading to "apparent discontinuity" in strain and stress across them. Local nonlinearity arising from pressure-dependent permeability of fractures is also included. Taking advantage of typically high aspect ratio of a fracture, we do not resolve transversal variations and instead assume uniform flow velocity and simple shear strain within each fracture, rendering the coupled problem numerically more tractable. Fractures are discretized as lower dimensional zero-thickness elements tangentially conforming to unstructured matrix elements. A hybrid-dimensional, equal-low-order, two-field mixed finite element method is developed, which is free from stability issues for a drained coupled system. The fully implicit backward Euler scheme is employed for advancing the fully coupled solution in time, and the Newton-Raphson scheme is implemented for linearization. We show that the fully discretized system retains a canonical form of a fracture-free poromechanical problem; the effect of fractures is translated to the modification of some existing terms as well as the addition of several terms to the capacity, conductivity, and stiffness matrices therefore allowing the development of independent subroutines for treating fractures within a standard computational framework. Our computational model provides more realistic inputs for some fracture-dominated poromechanical problems like fluid-induced seismicity.
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
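A toy numpy illustration of the motivating idea, with hypothetical data: strip the estimated common factor by PCA, then compare autoregressive fits with and without the candidate driver. The paper's factor-model estimator is considerably more elaborate than this sketch:

    import numpy as np

    rng = np.random.default_rng(7)
    T, N = 500, 20
    factor = rng.normal(size=(T, 1))                   # common co-movement
    X = factor @ rng.normal(size=(1, N)) + rng.normal(size=(T, N))

    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    resid = Xc - np.outer(U[:, 0] * s[0], Vt[0])       # strip leading factor
    x, y = resid[:, 0], resid[:, 1]

    def ar_rss(target, driver=None, lags=2):
        """RSS of an AR(lags) fit for `target`, optionally adding lags of
        `driver`; a clear RSS drop suggests Granger causality."""
        T = len(target)
        cols = [np.ones(T - lags)]
        for k in range(1, lags + 1):
            cols.append(target[lags - k:T - k])
            if driver is not None:
                cols.append(driver[lags - k:T - k])
        Z = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(Z, target[lags:], rcond=None)
        return np.sum((target[lags:] - Z @ beta) ** 2)

    gain = ar_rss(y) - ar_rss(y, x)   # ~0 here: no causal link once the factor is removed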
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Wang, Lan; Kim, Yongdai; Li, Runze
2013-10-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
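The tuning step can be illustrated on a convex stand-in: compute a solution path, then score each point with a high-dimensional BIC of the general form log(RSS/n) + df * C_n * log(p)/n. The lasso path and the choice of C_n below are our simplifications, not the calibrated CCCP path itself:

    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(5)
    n, p = 100, 400
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:4] = 1.5
    y = X @ beta + rng.normal(size=n)          # centered by construction

    alphas, coefs, _ = lasso_path(X, y, n_alphas=50)   # coefs: (p, n_alphas)
    C_n = np.log(np.log(n))                    # slowly diverging constant
    hbic = []
    for k in range(len(alphas)):
        b = coefs[:, k]
        rss = np.sum((y - X @ b) ** 2)
        df = np.count_nonzero(b)
        hbic.append(np.log(rss / n) + df * C_n * np.log(p) / n)
    best = coefs[:, int(np.argmin(hbic))]      # tuning parameter chosen by HBIC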
Propagation in and scattering from a matched metamaterial having a zero index of refraction.
Ziolkowski, Richard W
2004-10-01
Planar metamaterials that exhibit a zero index of refraction have been realized experimentally by several research groups. Their existence stimulated the present investigation, which details the properties of a passive, dispersive metamaterial that is matched to free space and has an index of refraction equal to zero. Thus, unlike previous zero-index investigations, both the permittivity and permeability are zero here at a specified frequency. One-, two-, and three-dimensional source problems are treated analytically. The one- and two-dimensional source problem results are confirmed numerically with finite difference time domain (FDTD) simulations. The FDTD simulator is also used to treat the corresponding one- and two-dimensional scattering problems. It is shown that in both the source and scattering configurations the electromagnetic fields in a matched zero-index medium take on a static character in space, yet remain dynamic in time, in such a manner that the underlying physics remains associated with propagating fields. Zero phase variation at various points in the zero-index medium is demonstrated once steady-state conditions are obtained. These behaviors are used to illustrate why a zero-index metamaterial, such as a zero-index electromagnetic band-gap structured medium, significantly narrows the far-field pattern associated with an antenna located within it. They are also used to show how a matched zero-index slab could be used to transform curved wave fronts into planar ones.
Modal ring method for the scattering of sound
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1993-01-01
The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
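The analytic benchmark quoted above (a plane wave scattered by a rigid circular cylinder) is itself an eigenfunction expansion and fits in a few lines; truncation order and parameters below are illustrative:

    import numpy as np
    from scipy.special import jvp, h1vp, hankel1

    def scattered_pressure(ka, r_over_a, theta, n_max=60):
        """Eigenfunction series for a plane wave on a rigid circular cylinder.
        The rigid-wall (Neumann) condition gives A_n = -J_n'(ka) / H_n^(1)'(ka)."""
        kr = ka * r_over_a
        n = np.arange(n_max + 1)
        eps = np.where(n == 0, 1.0, 2.0)               # Neumann factor
        A = -jvp(n, ka) / h1vp(n, ka)
        terms = eps * (1j ** n) * A * hankel1(n, kr) * np.cos(n * theta)
        return terms.sum()

    p_sc = scattered_pressure(ka=10.0, r_over_a=5.0, theta=np.pi)  # backscatter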
NASA Astrophysics Data System (ADS)
Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.
2016-06-01
High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3 + 1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
The Scaling Group of the 1-D Inviscid Euler Equations
NASA Astrophysics Data System (ADS)
Schmidt, Emma; Ramsey, Scott; Boyd, Zachary; Baty, Roy
2017-11-01
The one dimensional (1-D) compressible Euler equations in non-ideal media support scale invariant solutions under a variety of initial conditions. Famous scale invariant solutions include the Noh, Sedov, Guderley, and collapsing cavity hydrodynamic test problems. We unify many classical scale invariant solutions under a single scaling group analysis. The scaling symmetry group generator provides a framework for determining all scale invariant solutions emitted by the 1-D Euler equations for arbitrary geometry, initial conditions, and equation of state. We approach the Euler equations from a geometric standpoint, and conduct scaling analyses for a broad class of materials.
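As a minimal worked instance of such an analysis (planar, ideal-gas case; the paper treats arbitrary geometry and equations of state), the equations and one admissible stretching group can be written as:

    \rho_t + (\rho u)_x = 0, \qquad
    u_t + u\,u_x + \tfrac{1}{\rho}\,p_x = 0, \qquad
    \left(p/\rho^\gamma\right)_t + u\left(p/\rho^\gamma\right)_x = 0,

    \tilde{x} = \lambda x, \quad \tilde{t} = \mu t, \quad
    \tilde{u} = (\lambda/\mu)\,u, \quad \tilde{\rho} = \rho, \quad
    \tilde{p} = (\lambda/\mu)^{2}\, p .

Direct substitution confirms that all three equations are invariant; particular one-parameter subgroups (tying mu to a power of lambda) generate the self-similar variables underlying solutions such as Sedov's.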
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
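For orientation, the full-order LQG baseline on a finite-dimensional approximation reduces to two algebraic Riccati equations. The discretization, weights, and sensor/actuator placement below are illustrative assumptions, not those of the paper:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # n-point finite-difference model of the 1-D heat equation on (0, 1).
    n = 20
    h = 1.0 / (n + 1)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h ** 2
    B = np.zeros((n, 1)); B[n // 2, 0] = 1.0        # single interior actuator
    C = np.zeros((1, n)); C[0, n // 4] = 1.0        # single point sensor
    Q, R = np.eye(n), np.eye(1)                     # state/control weights
    V, W = np.eye(n), np.eye(1)                     # process/measurement noise

    P = solve_continuous_are(A, B, Q, R)            # regulator Riccati equation
    S = solve_continuous_are(A.T, C.T, V, W)        # filter Riccati equation
    K_reg = np.linalg.solve(R, B.T @ P)             # feedback gain
    L_est = S @ C.T @ np.linalg.inv(W)              # estimator gain

The optimal projection approach in the paper replaces this full nth-order compensator with a fixed low-order one; the gains above are the full-order reference its performance is measured against.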
The dimension split element-free Galerkin method for three-dimensional potential problems
NASA Astrophysics Data System (ADS)
Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.
2018-06-01
This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and efficiency than the improved element-free Galerkin (IEFG) method.
Tachyonic instabilities in 2 + 1 dimensional Yang-Mills theory and its connection to number theory
NASA Astrophysics Data System (ADS)
Chamizo, Fernando; González-Arroyo, Antonio
2017-06-01
We consider the 2 + 1 dimensional Yang-Mills theory with gauge group {{SU}}(N) on a flat 2-torus under twisted boundary conditions. We study the possibility of phase transitions (tachyonic instabilities) when N and the volume vary and certain chromomagnetic flux associated to the topology of the bundle can be adjusted. Under natural assumptions about how to match the perturbative regime and the expected confinement, we prove that the absence of tachyonic instabilities is related to some problems in number theory, namely the Diophantine approximation of irreducible fractions by other fractions of smaller denominator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, T.; Rabitz, H.
1996-02-01
A general interpolation method for constructing smooth molecular potential energy surfaces (PES's) from ab initio data is proposed within the framework of the reproducing kernel Hilbert space and the inverse problem theory. The general expression for an a posteriori error bound of the constructed PES is derived. It is shown that the method yields globally smooth potential energy surfaces that are continuous and possess derivatives up to second order or higher. Moreover, the method is amenable to correct symmetry properties and asymptotic behavior of the molecular system. Finally, the method is generic and can be easily extended from low-dimensional problems involving two and three atoms to high-dimensional problems involving four or more atoms. Basic properties of the method are illustrated by the construction of a one-dimensional potential energy curve of the He-He van der Waals dimer using the exact quantum Monte Carlo calculations of Anderson et al. [J. Chem. Phys. 99, 345 (1993)], a two-dimensional potential energy surface of the HeCO van der Waals molecule using recent ab initio calculations by Tao et al. [J. Chem. Phys. 101, 8680 (1994)], and a three-dimensional potential energy surface of the H3+ molecular ion using highly accurate ab initio calculations of Röhse et al. [J. Chem. Phys. 101, 2231 (1994)]. In the first two cases the constructed potentials clearly exhibit the correct asymptotic forms, while in the last case the constructed potential energy surface is in excellent agreement with that constructed by Röhse et al. using a low-order polynomial fitting procedure. © 1996 American Institute of Physics.
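The skeleton of kernel-based PES interpolation, assuming a generic Gaussian reproducing kernel and Lennard-Jones stand-in energies (the paper constructs kernels with the correct physical symmetry and asymptotics, which this sketch does not):

    import numpy as np

    def fit_rkhs(r_train, v_train, width=0.5, reg=1e-10):
        """Interpolate V(r) as a kernel expansion: solve (K + reg*I) c = v."""
        K = np.exp(-(r_train[:, None] - r_train[None, :]) ** 2 / (2 * width ** 2))
        coef = np.linalg.solve(K + reg * np.eye(len(r_train)), v_train)

        def V(r):
            k = np.exp(-(np.asarray(r)[:, None] - r_train[None, :]) ** 2
                       / (2 * width ** 2))
            return k @ coef
        return V

    r_ab = np.linspace(2.0, 8.0, 12)                 # hypothetical ab initio geometries
    v_ab = 4.0 * ((3.0 / r_ab) ** 12 - (3.0 / r_ab) ** 6)  # stand-in energies (LJ form)
    V = fit_rkhs(r_ab, v_ab)
    print(V(np.array([3.0, 5.0])))                   # smooth values off the grid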
High-resolution three-dimensional imaging radar
NASA Technical Reports Server (NTRS)
Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)
2010-01-01
A three-dimensional imaging radar operating at high frequency, e.g., 670 GHz, is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak finding algorithm may be used in processing for each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point source calibration target and applied to received signals from active targets prior to FFT-based range compression to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.
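The FMCW ranging mentioned here rests on the standard relation f_b = 2BR/(cT): the de-chirped beat frequency is proportional to range. A toy sketch with illustrative parameters, not the instrument's:

```python
import numpy as np

c = 3e8
B, T = 30e9, 1e-3             # chirp bandwidth (Hz) and duration (s), illustrative
R_true = 4.0                  # target range in metres
fs = 10e6                     # sample rate of the de-chirped signal
t = np.arange(0, T, 1 / fs)

f_beat = 2 * B * R_true / (c * T)        # beat frequency for this range
sig = np.cos(2 * np.pi * f_beat * t)     # ideal de-chirped (mixed-down) return

# FFT-based "range compression": the spectral peak encodes the range.
spec = np.abs(np.fft.rfft(sig))
f_est = np.fft.rfftfreq(len(sig), 1 / fs)[np.argmax(spec)]
print("estimated range:", c * T * f_est / (2 * B), "m")   # ~4.0
```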
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
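A simplified numerical stand-in for the estimator just described: remove the top principal components as a proxy for the common factors, then threshold the residual covariance. A universal hard threshold stands in here for the adaptive entry-wise thresholds of Cai and Liu (2011), and the data panel, factor count, and threshold level are placeholders:

```python
import numpy as np

def factor_thresh_cov(X, n_factors, tau):
    """Covariance estimate: low-rank factor part + thresholded residual part."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    idx = np.argsort(w)[::-1][:n_factors]
    low_rank = (V[:, idx] * w[idx]) @ V[:, idx].T  # common-factor component
    R = S - low_rank                               # idiosyncratic residual
    R_t = np.where(np.abs(R) >= tau, R, 0.0)       # hard-threshold small entries
    np.fill_diagonal(R_t, np.diag(R))              # never threshold variances
    return low_rank + R_t

X = np.random.default_rng(0).normal(size=(200, 50))  # placeholder returns panel
Sigma_hat = factor_thresh_cov(X, n_factors=3, tau=0.05)
print(Sigma_hat.shape)
```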
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
Physical Simulation for Probabilistic Motion Tracking
2008-01-01
learn a low-dimensional embedding of the high-dimensional kinematic data and then attempt to solve the problem in this more manageable low... rotations and foot skate). Such artifacts can be attributed to the general lack of physically plausible priors [2] (that can account for static and/or... temporal priors of the form p(x_{f+1} | x_f) = N(x_f + γ_f, Σ) (where γ_f is a scaled velocity, learned or inferred), have also been proposed [13] and shown to
Music Taste Groups and Problem Behavior.
Mulder, Juul; Bogt, Tom Ter; Raaijmakers, Quinten; Vollebergh, Wilma
2007-04-01
Internalizing and externalizing problems differ by musical tastes. A high school-based sample of 4159 adolescents, representative of Dutch youth aged 12 to 16, reported on their personal and social characteristics, music preferences and social-psychological functioning, measured with the Youth Self-Report (YSR). Cluster analysis on their music preferences revealed six taste groups: Middle-of-the-road (MOR) listeners, Urban fans, Exclusive Rock fans, Rock-Pop fans, Elitists, and Omnivores. A seventh group of musically Low-Involved youth was added. Multivariate analyses revealed that when gender, age, parenting, school, and peer variables were controlled, Omnivores and fans within the Exclusive Rock groups showed relatively high scores on internalizing YSR measures, and social, thought and attention problems. Omnivores, Exclusive Rock, Rock-Pop and Urban fans reported more externalizing problem behavior. Belonging to the MOR group that highly appreciates the most popular, chart-based pop music appears to buffer problem behavior. Music taste group membership uniquely explains variance in both internalizing and externalizing problem behavior.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Van den Akker, Alithe L; Prinzie, Peter; Deković, Maja; De Haan, Amaranta D; Asscher, Jessica J; Widiger, Thomas
2013-12-01
This study investigated the development of personality extremity (deviation from the midpoint of all five personality dimensions considered together) across childhood and adolescence, as well as relations between personality extremity and adjustment problems. For 598 children (mean age at Time 1 = 7.5 years), mothers and fathers reported the Big Five personality dimensions 4 times across 8 years. Children's vector length in a 5-dimensional configuration of the Big Five dimensions represented personality extremity. Mothers, fathers, and teachers reported children's internalizing and externalizing problems at the first and final measurements. In a cohort-sequential design, we modeled personality extremity in children and adolescents from ages 6 to 17 years. Growth mixture modeling revealed a similar solution for both mother and father reports: a large group with relatively short vectors that were stable over time (mother reports: 80.3%; father reports: 84.7%) and 2 smaller groups with relatively long vectors (i.e., extreme personality configurations). One group started out relatively extreme and decreased over time (mother reports: 13.2%; father reports: 10.4%), whereas the other group started out only slightly higher than the short-vector group but increased across time (mother reports: 6.5%; father reports: 4.9%). Children who belonged to the increasingly extreme class experienced more internalizing and externalizing problems in late adolescence, controlling for previous levels of adjustment problems and the Big Five personality dimensions. Personality extremity may be important to consider when identifying children at risk for adjustment problems. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xu; Zhu, Shanan; He, Bin
2009-05-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulae describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to those of biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for the model calibration and evaluation of the corresponding acoustic field.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
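For concreteness, the two ingredients the comparison rests on — a tridiagonal (Thomas) solve wrapped in Picard iterations — can be sketched for a generic 1D nonlinear diffusion equation u_t = (D(u) u_x)_x. The grid, time step, and diffusivity below are placeholders, not the Richards'-equation constitutive relations:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(u_old, dt, dx, D, n_picard=5):
    """Backward-Euler step of u_t = (D(u) u_x)_x with Picard-lagged D(u);
    Dirichlet values held fixed at both ends."""
    u = u_old.copy()
    r = dt / dx**2
    for _ in range(n_picard):
        Df = D(0.5 * (u[1:] + u[:-1]))   # diffusivity at cell faces, lagged
        a = np.r_[0.0, -r * Df]          # sub-diagonal
        c = np.r_[-r * Df, 0.0]          # super-diagonal
        b = 1.0 - a - c                  # main diagonal
        a[-1] = c[0] = 0.0               # Dirichlet rows at the ends
        b[0] = b[-1] = 1.0
        u = thomas(a, b, c, u_old.copy())
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.where(x < 0.5, 1.0, 0.0)         # sharp front, Richards-like
u1 = implicit_step(u0, dt=1e-3, dx=x[1] - x[0], D=lambda u: 0.1 + u**2)
print(u1.min(), u1.max())
```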
NASA Astrophysics Data System (ADS)
Sudicky, E. A.; Unger, A. J. A.; Lacombe, S.
1995-02-01
A noniterative algorithm for handling prescribed well bore boundary conditions while pumping or injecting fluid in a three-dimensional heterogeneous aquifer is described. The algorithm is formulated by superimposing conductive one-dimensional line elements representing the well screen onto the three-dimensional matrix elements representing the aquifer. Storage in the well casing is also naturally accommodated by the superposition of the line elements. The numerical algorithm is verified by comparison with results obtained from the solution of Papadopulos and Cooper (1967). A large-scale example problem involving groundwater extraction from a partially penetrating pumping well located in a highly heterogeneous confined aquifer is presented to demonstrate the utility of the approach.
[Advances in the research of application of collagen in three-dimensional bioprinting].
Li, H H; Luo, P F; Sheng, J J; Liu, G C; Zhu, S H
2016-10-20
As a new industrial technology with characteristics of high precision and accuracy, three-dimensional bioprinting technology is increasingly widely applied in the field of medical research. Collagen is one of the most common ingredients in tissue, and it has good properties as a biological material. There are many reports of using collagen as the main component of the "ink" in three-dimensional bioprinting. However, the collagen applied is mainly from heterogeneous sources, which may cause some problems in application. Recombinant human-source collagen can be obtained from microorganism fermentation by transgenic technology, but more research should be done to confirm its properties. This article reviews the advances in the research of collagen and its biological application in three-dimensional bioprinting.
Solution of the two-dimensional spectral factorization problem
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1985-01-01
An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.
NASA Technical Reports Server (NTRS)
Jameson, Antony
1994-01-01
The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
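A concrete one-dimensional instance of the LED/TVD idea the abstract refers to — not the paper's SLIP or CUSP construction — is the minmod-limited upwind scheme for linear advection u_t + a u_x = 0 (a > 0, periodic boundaries); the grid and step counts are arbitrary:

```python
import numpy as np

def minmod(p, q):
    """Zero at extrema, otherwise the smaller-magnitude one-sided slope."""
    return np.where(p * q > 0.0, np.where(np.abs(p) < np.abs(q), p, q), 0.0)

def advect_step(u, nu):
    """One limited upwind step; nu = a*dt/dx must lie in (0, 1]."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    u_face = u + 0.5 * (1.0 - nu) * s                   # value at right face
    flux = nu * u_face                                   # upwind flux (a > 0)
    return u - (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)           # square pulse
for _ in range(100):
    u = advect_step(u, nu=0.5)
print(u.min(), u.max())   # stays within [0, 1]: no new maxima or minima
```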
Are strategies in physics discrete? A remote controlled investigation
NASA Astrophysics Data System (ADS)
Heck, Robert; Sherson, Jacob F.; www.scienceathome.org Team; Players Team
2017-04-01
In science, strategies are formulated based on observations, calculations, or physical insight. For any given physical process, often several distinct strategies are identified. Are these truly distinct or simply low dimensional representations of a high dimensional continuum of solutions? Our online citizen science platform www.scienceathome.org, used by more than 150,000 people, recently enabled finding solutions to fast, 1D single atom transport [Nature2016]. Surprisingly, player trajectories bunched into discrete solution strategies (clans) yielding clear, distinct physical insight. Introducing the multi-dimensional vector in the direction of other local maxima, we locate narrow, high-yield "bridges" connecting the clans. This demonstrates for this problem that a continuum of solutions with no clear physical interpretation does in fact exist. Next, four distinct strategies for creating Bose-Einstein condensates were investigated experimentally: hybrid and crossed dipole trap configurations in combination with either large volume or dimple loading from a magnetic trap. We find that although each conventional strategy appears locally optimal, "bridges" can be identified. In a novel approach, the problem was gamified, allowing 750 citizen scientists to contribute to the experimental optimization and yielding nearly a factor of two improvement in atom number.
Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark
2015-01-01
Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) for identification of genetic variants that have relatively big effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS with selecting informative SNPs and building accurate prediction models. In this paper, we propose to use a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection for GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNPs group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account. The feature subspaces always contain highly informative SNPs when used to split a node of a tree. This approach enables one to generate more accurate trees with a lower prediction error, while possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed most existing state-of-the-art random forests. The top 25 SNPs in the Parkinson data set identified by the proposed model include four interesting genes associated with neurological disorders. The presented approach has been shown to be effective in selecting informative sub-groups of SNPs potentially associated with diseases that traditional statistical approaches might fail to identify. The new RF works well for data where the number of case-control objects is much smaller than the number of SNPs, which is a typical problem in gene data and GWAS. Experimental results demonstrated the effectiveness of the proposed RF model, which outperformed state-of-the-art RFs, including Breiman's RF, GRRF and the wsRF method.
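A schematic of the two-stage, quality-based subspace sampling described above: p-values from a per-SNP chi-square test split the SNPs into highly and weakly informative groups, and each node-level feature subspace mixes draws from both. The data, cut-offs, and mixing fraction below are invented placeholders, not the authors' settings:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n, p = 100, 500
genotypes = rng.integers(0, 3, size=(n, p))   # 0/1/2-coded SNPs (placeholder)
labels = rng.integers(0, 2, size=n)           # case/control (placeholder)

pvals = np.empty(p)
for j in range(p):
    table = np.zeros((2, 3))
    for g, y in zip(genotypes[:, j], labels):
        table[y, g] += 1
    pvals[j] = chi2_contingency(table + 1e-9)[1]  # jitter avoids empty cells

informative = np.where(pvals < 0.05)[0]           # stage 1: drop irrelevant SNPs
strong = informative[pvals[informative] < 0.01]   # stage 2: strong vs weak
weak = np.setdiff1d(informative, strong)

def sample_subspace(mtry, frac_strong=0.5):
    """Feature subspace for one tree node: always mixes in strong SNPs."""
    k = min(int(mtry * frac_strong), len(strong))
    return np.r_[rng.choice(strong, size=k, replace=False),
                 rng.choice(weak, size=min(mtry - k, len(weak)), replace=False)]

print(sample_subspace(10))
```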
Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S.
2010-01-01
A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes. PMID:20454468
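The exponential mapping from the Lie algebra that the approach relies on has a closed form for SO(3), the Rodrigues formula; a small self-contained sketch:

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues formula: exp(hat(w)) is a rotation in SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = expm_so3(np.array([0.1, 0.2, 0.3]))
print(np.allclose(R @ R.T, np.eye(3)))  # True: R is orthogonal
```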
Children's Strategies for Solving Two- and Three-Dimensional Combinatorial Problems.
ERIC Educational Resources Information Center
English, Lyn D.
1993-01-01
Investigated strategies that 7- to 12-year-old children (n=96) spontaneously applied in solving novel combinatorial problems. With experience in solving two-dimensional problems, children were able to refine their strategies and adapt them to three dimensions. Results on some problems indicated significant effects of age. (Contains 32 references.)
Concave 1-norm group selection
Jiang, Dingfeng; Huang, Jian
2015-01-01
Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave ℓ1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the ℓ1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206
A multi-dimensional model of groupwork for adolescent girls who have been sexually abused.
Lindon, J; Nourse, C A
1994-04-01
This paper describes a treatment approach for sexually abused adolescent girls using a group work model. The model incorporates three treatment modalities: a skills component, a psychotherapeutic component, and an educative component. The group ran for 16 sessions over a 6-month period, and each girl was assessed prior to joining the group. The girls were assessed again at the end of treatment and at a 6-month follow-up; all of them showed improvement on self-statements (outcome) and on behavioral measures assessed by others (follow-up). Girls who had been sexually abused demonstrated difficulties in many areas of their lives following abuse. These problems related to their feelings of guilt and helplessness in relation to both themselves and their abuser. Sexually abused children often have poor knowledge of sexual matters and demonstrate confusion over their own body image. Using a multidimensional model, the problems following abuse can be addressed.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods mainly decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
Node-Based Learning of Multiple Gaussian Graphical Models
Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In
2014-01-01
We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137
NASA Astrophysics Data System (ADS)
Giovanis, D. G.; Shields, M. D.
2018-07-01
This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
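The pairwise distances on the Grassmann manifold that drive the refinement can be computed from principal angles between subspaces; a minimal sketch using the geodesic (arc-length) metric, one of several metrics the paper could mean, with random matrices standing in for the projected full-field solutions:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between span(A) and span(B); A, B of shape (n, k)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(sigma, -1.0, 1.0))  # principal angles
    return np.linalg.norm(angles)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))   # e.g., a reduced basis of one vertex solution
B = rng.normal(size=(100, 5))
print(grassmann_distance(A, B))
```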
Zhang, Miaomiao; Wells, William M; Golland, Polina
2016-10-01
Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
NASA Astrophysics Data System (ADS)
Hernawati, Kuswari; Insani, Nur; Bambang S. H., M.; Nur Hadi, W.; Sahid
2017-08-01
This research aims to map the 33 (thirty-three) provinces in Indonesia, based on data on air, water and soil pollution as well as social, demographic and geographic data, into a clustered model. The method used in this study was an unsupervised method based on the concept of Kohonen self-organizing feature maps (SOFM). The method works by providing the design parameters for the model based on data related directly or indirectly to pollution, namely demographic and social data, pollution levels of air, water and soil, and the geographical situation of each province. The parameters used consist of 19 features/characteristics, including the human development index, the number of vehicles, and the availability of plant water absorption and flood prevention, as well as the geographic and demographic situation. The data used were secondary data from the Central Statistics Agency (BPS), Indonesia. The data are mapped by the SOFM from a high-dimensional vector space into a two-dimensional vector space according to closeness of location in terms of Euclidean distance. The resulting outputs are represented as clustered groupings. The thirty-three provinces are grouped into five clusters, where each cluster has different features/characteristics and a different level of pollution. The result can be used to help efforts at the prevention and resolution of pollution problems in each cluster in an effective and efficient way.
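A minimal Kohonen SOFM training loop of the kind described, with random data standing in for the 19 province-level features; the grid size, rates, and epoch count are illustrative, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(33, 19))           # 33 provinces x 19 features (placeholder)
grid_h, grid_w = 5, 5
W = rng.normal(size=(grid_h, grid_w, 19))  # codebook vectors on a 2D map

ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
for epoch in range(200):
    lr = 0.5 * np.exp(-epoch / 100)        # decaying learning rate
    radius = 2.0 * np.exp(-epoch / 100)    # decaying neighborhood radius
    for x in rng.permutation(data):
        d = np.linalg.norm(W - x, axis=2)                 # distance to every unit
        bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * radius ** 2))
        W += lr * h[:, :, None] * (x - W)  # pull the neighborhood toward x

# Cluster assignment: each province maps to its best-matching unit.
bmu = [np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=2)), (grid_h, grid_w))
       for x in data]
print(bmu[:5])
```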
A deep learning framework for causal shape transformation.
Lore, Kin Gwn; Stoecklein, Daniel; Davies, Michael; Ganapathysubramanian, Baskar; Sarkar, Soumik
2018-02-01
Recurrent neural network (RNN) and Long Short-Term Memory (LSTM) networks are the common go-to architectures for exploiting sequential information where the output is dependent on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain, which may not be suitable for applications involving the prediction of a step-wise transformation sequence that is dependent on the previous states only in the visible domain, with a known terminal state. We propose a hybrid architecture of convolutional neural networks (CNN) and stacked autoencoders (SAE) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support, and demonstrate its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map is very seldom explored. This work serves as a fruitful use-case for applied scientists and engineers in how deep learning can be beneficial as a solution for high-dimensional physical problems, potentially opening doors to impactful advances in fields such as materials science and medical biology, where multistep topological transformations are a key element. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pierce, S. A.
2014-12-01
Geosciences are becoming increasingly data intensive, particularly in relation to sustainability problems, which are multi-dimensional, weakly structured and characterized by high levels of uncertainty. In the case of complex resource management problems, the challenge is to extract meaningful information from data and make sense of it. Simultaneously, scientific knowledge alone is insufficient to change practice. Creating tools, and group decision support processes for end users to interact with data are key challenges to transforming science-based information into actionable knowledge. The ENCOMPASS project began as a multi-year case study in the Atacama Desert of Chile to design and implement a knowledge transfer model for energy-water-mining conflicts in the region. ENCOMPASS combines the use of cyberinfrastructure (CI), automated data collection, interactive interfaces for dynamic decision support, and participatory modelling to support social learning. A pilot version of the ENCOMPASS CI uses open source systems and serves as a structure to integrate and store multiple forms of data and knowledge, such as DEM, meteorological, water quality, geomicrobiological, energy demand, and groundwater models. In the case study, informatics and data fusion needs related to scientific uncertainty around deep groundwater flowpaths and energy-water connections. Users may upload data from field sites with handheld devices or desktops. Once uploaded, data assets are accessible for a variety of uses. To address multi-attributed decision problems in the Atacama region a standalone application with touch-enabled interfaces was created to improve real-time interactions with datasets by groups. The tool was used to merge datasets from the ENCOMPASS CI to support exploration among alternatives and build shared understanding among stakeholders. To date, the project has increased technical capacity among stakeholders, resulted in the creation of both for-profit and non-profit entities, enabled cross-sector collaboration with mining-indigenous stakeholders, and produced an interactive application for group decision support. ENCOMPASS leverages advances in computational tools to deliver data and models for group decision support applied to sustainability science problems.
A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu
We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004), which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.
Application of neural networks to group technology
NASA Astrophysics Data System (ADS)
Caudell, Thomas P.; Smith, Scott D. G.; Johnson, G. C.; Wunsch, Donald C., II
1991-08-01
Adaptive resonance theory (ART) neural networks are being developed for application to the industrial engineering problem of group technology--the reuse of engineering designs. Two- and three-dimensional representations of engineering designs are input to ART-1 neural networks to produce groups or families of similar parts. These representations, in their basic form, amount to bit maps of the part, and can become very large when the part is represented in high resolution. This paper describes an enhancement to an algorithmic form of ART-1 that allows it to operate directly on compressed input representations and to generate compressed memory templates. The performance of this compressed algorithm is compared to that of the regular algorithm on real engineering designs, and a significant savings in memory storage as well as a speedup in execution is observed. In addition, a "neural database" system under development is described. This system demonstrates the feasibility of training an ART-1 network to first cluster designs into families, and then to recall the family when presented with a similar design. This application is of large practical value to industry, making it possible to avoid duplication of design efforts.
Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.
Venturi, D; Karniadakis, G E
2014-06-08
Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.
Metal Oxide Gas Sensor Drift Compensation Using a Two-Dimensional Classifier Ensemble
Liu, Hang; Chu, Renzhi; Tang, Zhenan
2015-01-01
Sensor drift is the most challenging problem in gas sensing at present. We propose a novel two-dimensional classifier ensemble strategy to solve the gas discrimination problem, regardless of the gas concentration, with high accuracy over extended periods of time. This strategy is appropriate for multi-class classifiers that consist of combinations of pairwise classifiers, such as support vector machines. We compare the performance of the strategy with those of competing methods in an experiment based on a public dataset that was compiled over a period of three years. The experimental results demonstrate that the two-dimensional ensemble outperforms the other methods considered. Furthermore, we propose a pre-aging process inspired by that applied to the sensors to improve the stability of the classifier ensemble. The experimental results demonstrate that the weight of each multi-class classifier model in the ensemble remains fairly static before and after the addition of new classifier models to the ensemble, when a pre-aging procedure is applied. PMID:25942640
Calculation of flow about posts and powerhead model. [space shuttle main engine
NASA Technical Reports Server (NTRS)
Anderson, P. G.; Farmer, R. C.
1985-01-01
A three-dimensional analysis of the non-uniform flow around the liquid oxygen (LOX) posts in the Space Shuttle Main Engine (SSME) powerhead was performed to determine possible factors contributing to the failure of the posts. Also performed was a three-dimensional numerical fluid flow analysis of the high pressure fuel turbopump (HPFTP) exhaust system, consisting of the turnaround duct (TAD), the two-duct hot gas manifold (HGM), and the Version B transfer ducts. The analysis was conducted in the following manner: (1) modeling the flow around a single post and small clusters (2 to 10) of posts; (2) modeling the velocity field in the cross plane; and (3) modeling the entire flow region with a three-dimensional network type model. Shear stress functions which permit viscous analysis without requiring excessive numbers of computational grid points were developed. These wall functions, laminar and turbulent, have been compared to standard Blasius solutions and are directly applicable to the cylinder-in-cross-flow class of problems to which the LOX post problem belongs.
Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAER,THOMAS A.; SACKINGER,PHILIP A.; SUBIA,SAMUEL R.
1999-10-14
Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to distribute computational work equally across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giorda, Paolo; Zanardi, Paolo; Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
We analyze the dynamical-algebraic approach to universal quantum control introduced in P. Zanardi and S. Lloyd, e-print quant-ph/0305013. The quantum state space H encoding information decomposes into irreducible sectors and subsystems associated with the group of available evolutions. If this group coincides with the unitary part of the group algebra CK of some group K then universal control is achievable over the K-irreducible components of H. This general strategy is applied to different kinds of bosonic systems. We first consider massive bosons in a double well and show how to achieve universal control over all finite-dimensional Fock sectors. We then discuss a multimode massless case giving the conditions for generating the whole infinite-dimensional multimode Heisenberg-Weyl enveloping algebra. Finally we show how to use an auxiliary bosonic mode coupled to finite-dimensional systems to generate high-order nonlinearities needed for universal control.
Computational Studies of Strongly Correlated Quantum Matter
NASA Astrophysics Data System (ADS)
Shi, Hao
The study of strongly correlated quantum many-body systems is an outstanding challenge. Highly accurate results are needed for the understanding of practical and fundamental problems in condensed-matter physics, high energy physics, material science, quantum chemistry and so on. Our familiar mean-field or perturbative methods tend to be ineffective. Numerical simulations provide a promising approach for studying such systems. The fundamental difficulty of numerical simulation is that the dimension of the Hilbert space needed to describe interacting systems increases exponentially with the system size. Quantum Monte Carlo (QMC) methods are one of the best approaches to tackle the problem of the enormous Hilbert space. They have been highly successful for boson systems and unfrustrated spin models. For systems with fermions, the exchange symmetry in general causes the infamous sign problem, making the statistical noise in the computed results grow exponentially with the system size. This hinders our understanding of interesting physics such as high-temperature superconductivity and the metal-insulator phase transition. In this thesis, we present a variety of new developments in the auxiliary-field quantum Monte Carlo (AFQMC) methods, including the incorporation of symmetry in both the trial wave function and the projector, the development of a constraint release method, the use of the force bias to drastically improve the efficiency in the Metropolis framework, the identification and solution of the infinite variance problem, and the sampling of Hartree-Fock-Bogoliubov wave functions. With these developments, some of the most challenging many-electron problems are now under control. We obtain an exact numerical solution of the two-dimensional strongly interacting Fermi atomic gas, determine the ground state properties of the 2D Fermi gas with Rashba spin-orbit coupling, provide benchmark results for the ground state of the two-dimensional Hubbard model, and establish that the Hubbard model has a stripe order in the underdoped region.
Grid-converged solution and analysis of the unsteady viscous flow in a two-dimensional shock tube
NASA Astrophysics Data System (ADS)
Zhou, Guangzhao; Xu, Kun; Liu, Feng
2018-01-01
The flow in a shock tube is extremely complex with dynamic multi-scale structures of sharp fronts, flow separation, and vortices due to the interaction of the shock wave, the contact surface, and the boundary layer over the side wall of the tube. Prediction and understanding of the complex fluid dynamics are of theoretical and practical importance. It is also an extremely challenging problem for numerical simulation, especially at relatively high Reynolds numbers. Daru and Tenaud ["Evaluation of TVD high resolution schemes for unsteady viscous shocked flows," Comput. Fluids 30, 89-113 (2001)] proposed a two-dimensional model problem as a numerical test case for high-resolution schemes to simulate the flow field in a square closed shock tube. Though many researchers attempted this problem using a variety of computational methods, there is not yet an agreed-upon grid-converged solution of the problem at the Reynolds number of 1000. This paper presents a rigorous grid-convergence study and the resulting grid-converged solutions for this problem by using a newly developed, efficient, and high-order gas-kinetic scheme. Critical data extracted from the converged solutions are documented as benchmark data. The complex fluid dynamics of the flow at Re = 1000 are discussed and analyzed in detail. Major phenomena revealed by the numerical computations include the downward concentration of the fluid through the curved shock, the formation of the vortices, the mechanism of the shock wave bifurcation, the structure of the jet along the bottom wall, and the Kelvin-Helmholtz instability near the contact surface. Presentation and analysis of those flow processes provide important physical insight into the complex flow physics occurring in a shock tube.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajian, Amir; Alvarez, Marcelo A.; Bond, J. Richard, E-mail: ahajian@cita.utoronto.ca, E-mail: malvarez@cita.utoronto.ca, E-mail: bond@cita.utoronto.ca
Making mock simulated catalogs is an important component of astrophysical data analysis. Selection criteria for observed astronomical objects are often too complicated to be derived from first principles. However, the existence of an observed group of objects is a well-suited problem for machine learning classification. In this paper we use one-class classifiers to learn the properties of an observed catalog of clusters of galaxies from ROSAT and to pick clusters from mock simulations that resemble the observed ROSAT catalog. We show how this method can be used to study the cross-correlations of thermal Sunyaev-Zel'dovich signals with number density maps of X-ray-selected cluster catalogs. The method reduces the bias due to hand-tuning the selection function and is readily scalable to large catalogs with a high-dimensional space of astrophysical features.
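A schematic of the one-class selection step using scikit-learn's OneClassSVM as one possible classifier (the paper does not prescribe this implementation); the two-dimensional feature vectors and all parameters below are invented placeholders:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
obs = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(300, 2))    # observed clusters
mock = rng.normal(loc=[0.8, 0.4], scale=0.8, size=(5000, 2))  # simulated clusters

# Learn the support of the observed catalog in feature space...
scaler = StandardScaler().fit(obs)
clf = OneClassSVM(kernel="rbf", nu=0.1).fit(scaler.transform(obs))

# ...then keep only the mock clusters that resemble it.
selected = mock[clf.predict(scaler.transform(mock)) == 1]
print(len(selected), "of", len(mock), "mock clusters selected")
```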
Teaching the Falling Ball Problem with Dimensional Analysis
ERIC Educational Resources Information Center
Sznitman, Josué; Stone, Howard A.; Smits, Alexander J.; Grotberg, James B.
2013-01-01
Dimensional analysis is often a subject reserved for students of fluid mechanics. However, the principles of scaling and dimensional analysis are applicable to various physical problems, many of which can be introduced early on in a university physics curriculum. Here, we revisit one of the best-known examples from a first course in classic…
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; ...
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
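The payoff of the hyperspherical transformation is that each sparse-grid point only requires a one-dimensional root search along a ray from the origin; the discontinuity hypersurface is the radius at which an indicator function jumps. A schematic of that 1-D subproblem only (a bisection sketch, not the adaptive sparse grid itself):

    import numpy as np

    def jump_radius(indicator, direction, r_max=2.0, tol=1e-10):
        # Bisect along a ray from the origin for the radius where `indicator` flips
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        lo, hi = 0.0, r_max
        inside = indicator(0.0 * d)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if indicator(mid * d) == inside:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Example: the discontinuity lies on the unit sphere in N dimensions
    f = lambda x: float(np.dot(x, x)) < 1.0
    print(jump_radius(f, direction=[1.0, 1.0, 1.0]))   # approximately 1.0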
A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.
This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
Fast generation of Fresnel holograms based on multirate filtering.
Tsang, Peter; Liu, Jung-Ping; Cheung, Wai-Keung; Poon, Ting-Chung
2009-12-01
One of the major problems in computer-generated holography is the high computation cost involved in the calculation of fringe patterns. Recently, the problem has been addressed by imposing a horizontal-parallax-only constraint whereby the process can be simplified to the computation of one-dimensional sublines, each representing a scan plane of the object scene. Subsequently the sublines can be expanded to a two-dimensional hologram through multiplication with a reference signal. Furthermore, economical hardware is available with which sublines can be generated in a computationally free manner with a high throughput of approximately 100 M pixels/second. Apart from decreasing the computation loading, the sublines can be treated as intermediate data that can be compressed by simply downsampling the number of sublines. Despite these favorable features, the method is suitable only for the generation of white light (rainbow) holograms, and the resolution of the reconstructed image is inferior to that of the classical Fresnel hologram. We propose to generate holograms from one-dimensional sublines so that the above-mentioned problems can be alleviated. However, such an approach also leads to a substantial increase in computation loading. To overcome this problem we encapsulated the conversion of sublines to holograms as a multirate filtering process and implemented the latter by use of a fast Fourier transform. Evaluation reveals that, for holograms of moderate size, our method is capable of operating 40,000 times faster than the calculation of Fresnel holograms based on the precomputed table lookup method. Although there is no relative vertical parallax between object points at different distance planes, a global vertical parallax is preserved for the object scene as a whole, and the reconstructed image can be observed easily.
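The key to the reported speedup is doing the subline-to-hologram conversion in the frequency domain, where a length-N convolution costs O(N log N) rather than O(N^2). A generic FFT-convolution sketch (placeholder subline and chirp-like reference, not the authors' hologram kernels):

    import numpy as np
    from scipy.signal import fftconvolve

    n = 4096
    subline = np.random.default_rng(1).standard_normal(n)   # placeholder 1-D subline
    x = np.arange(n)
    reference = np.cos(np.pi * x**2 / n)                    # placeholder chirp-like reference signal

    # FFT-based convolution: O(n log n) versus O(n^2) for direct evaluation
    hologram_row = fftconvolve(subline, reference, mode="same")
    print(hologram_row.shape)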
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
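Since the proposed index is computed much like Sobol' first-order indices, a pick-freeze estimator conveys the flavor of the calculation. A minimal sketch with a toy function (this is the classical Sobol' estimator under our notation, not the authors' heteroscedasticity-based index):

    import numpy as np

    def first_order_indices(func, n_vars, n_samples=8192, seed=0):
        # Saltelli-style pick-freeze estimator of Sobol' first-order indices
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(n_samples, n_vars))
        B = rng.uniform(size=(n_samples, n_vars))
        yA, yB = func(A), func(B)
        var = np.var(np.concatenate([yA, yB]))
        out = []
        for i in range(n_vars):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # freeze every input except x_i
            out.append(np.mean(yB * (func(ABi) - yA)) / var)
        return out

    # Toy function: x0 and x1 interact, x2 enters additively
    f = lambda X: X[:, 0] * X[:, 1] + X[:, 2]
    print(first_order_indices(f, 3))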
Health Information Retrieval Tool (HIRT)
Nyun, Mra Thinzar; Ogunyemi, Omolola; Zeng, Qing
2002-01-01
The World Wide Web (WWW) is a powerful way to deliver on-line health information, but one major problem limits its value to consumers: content is highly distributed, while relevant, high-quality information is often difficult to find. To address this issue, we experimented with an approach that utilizes three-dimensional anatomic models in conjunction with free-text search.
Comment on "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit".
Carrillo-Bernal, M A; Núñez-Yépez, H N; Salas-Brito, A L; Solis, Didier A
2015-02-01
In the referred paper, the authors use a numerical method for solving ordinary differential equations and a softened Coulomb potential -1/√(x² + β²) to study the one-dimensional Coulomb problem by letting the parameter β approach zero. We note that even though their numerical findings in the soft potential scenario are correct, their conclusions do not extend to the one-dimensional Coulomb problem (β = 0). Their claims regarding the possible existence of an even ground state with energy -∞ with a Dirac-δ eigenfunction and of well-defined parity eigenfunctions in the one-dimensional hydrogen atom are questioned.
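The softened potential at issue is easy to explore numerically: discretize -(1/2) d²/dx² - 1/√(x² + β²) on a grid and diagonalize. A finite-difference sketch in atomic units (our grid parameters), showing the ground state deepening as β decreases:

    import numpy as np

    def ground_state_energy(beta, L=40.0, n=1500):
        # Lowest eigenvalue of -(1/2) u'' - u / sqrt(x^2 + beta^2) on [-L, L]
        x = np.linspace(-L, L, n)
        h = x[1] - x[0]
        main = 1.0 / h**2 - 1.0 / np.sqrt(x**2 + beta**2)   # diagonal: kinetic + potential
        off = -0.5 / h**2 * np.ones(n - 1)                  # off-diagonal kinetic terms
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)[0]

    for beta in (1.0, 0.5, 0.1):
        print(beta, ground_state_energy(beta))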
Two-dimensional supersonic nonlinear Schrödinger flow past an extended obstacle
NASA Astrophysics Data System (ADS)
El, G. A.; Kamchatnov, A. M.; Khodorovskii, V. V.; Annibale, E. S.; Gammal, A.
2009-10-01
Supersonic flow of a superfluid past a slender impenetrable macroscopic obstacle is studied in the framework of the two-dimensional (2D) defocusing nonlinear Schrödinger (NLS) equation. This problem is of fundamental importance as a dispersive analog of the corresponding classical gas-dynamics problem. Assuming the oncoming flow speed is sufficiently high, we asymptotically reduce the original boundary-value problem for a steady flow past a slender body to the one-dimensional dispersive piston problem described by the nonstationary NLS equation, in which the role of time is played by the stretched x coordinate and the piston motion curve is defined by the spatial body profile. Two steady oblique spatial dispersive shock waves (DSWs) spreading from the pointed ends of the body are generated in both half planes. These are described analytically by constructing appropriate exact solutions of the Whitham modulation equations for the front DSW and by using a generalized Bohr-Sommerfeld quantization rule for the oblique dark soliton fan in the rear DSW. We propose an extension of the traditional modulation description of DSWs to include the linear “ship-wave” pattern forming outside the nonlinear modulation region of the front DSW. Our analytic results are supported by direct 2D unsteady numerical simulations and are relevant to recent experiments on Bose-Einstein condensates freely expanding past obstacles.
Benchmark problems in computational aeroacoustics
NASA Technical Reports Server (NTRS)
Porter-Locklear, Freda
1994-01-01
A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code developed by Dr. Harold Atkins for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period being the time required for the sound wave to traverse the nozzle from one end to the other).
NASA Astrophysics Data System (ADS)
Magee, Daniel J.; Niemeyer, Kyle E.
2018-03-01
The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution that is difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× over a range of problem sizes compared with simple GPU versions, and of 7-300× compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
Several parameters of certain three-dimensional semiconductor devices including diodes, transistors, and solar cells can be determined without solving the actual boundary-value problem. The recombination current, transit time, and open-circuit voltage of planar diodes are emphasized here. The resulting analytical expressions enable determination of the surface recombination velocity of shallow planar diodes. The method involves introducing corresponding one-dimensional models having the same values of these parameters.
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require the explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to implement the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory in research.
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.
High-pressure synthesis and characterization of the first cerium fluoride borate CeB{sub 2}O{sub 4}F
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinteregger, Ernst; Wurst, Klaus; Tribus, Martina
2013-08-15
CeB₂O₄F is the first cerium fluoride borate, which is exclusively built up of one-dimensional, infinite chains of condensed trigonal-planar [BO₃]³⁻ groups. This new cerium fluoride borate was synthesized under high-pressure/high-temperature conditions of 0.9 GPa and 1450 °C in a Walker-type multianvil apparatus. The compound crystallizes in the orthorhombic space group Pbca (No. 61) with eight formula units and the lattice parameters a=821.63(5), b=1257.50(9), c=726.71(6) pm, V=750.84(9) Å³, R₁=0.0698, and wR₂=0.0682 (all data). The structure exhibits a 9+1-coordinated cerium ion, one three-fold-coordinated fluoride ion, and a one-dimensional chain of [BO₃]³⁻ groups. Furthermore, IR spectroscopy, electron microprobe analysis, and temperature-dependent X-ray powder diffraction measurements were performed. - Graphical abstract: A new rare-earth fluoride borate, CeB₂O₄F, could be synthesized under high-pressure/high-temperature conditions of 0.9 GPa and 1450 °C in a Walker-type multianvil apparatus. The crystal structure represents a new structure type in the class of rare-earth fluoride borates. The structure exhibits a 9+1-coordinated cerium ion, one three-fold-coordinated fluoride ion, and a one-dimensional chain of [BO₃]³⁻ groups. A closer view of the ac-plane shows an interesting wave-like modulation of the borate chains. Highlights: • CeB₂O₄F is the first fluoride borate exclusively built up of one-dimensional, infinite chains of condensed trigonal-planar [BO₃]³⁻ groups. • CeB₂O₄F is the first cerium fluoride borate. • High-pressure conditions were necessary to synthesize CeB₂O₄F.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
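The FDTD reference algorithm mentioned here is the standard Yee staggered-grid update, which fits in a few lines in one dimension. A minimal free-space sketch in normalized units (our pulse parameters, perfectly conducting ends):

    import numpy as np

    n, steps, c = 400, 600, 0.5      # grid points, time steps, Courant number
    ex = np.zeros(n)                 # electric field at integer nodes
    hy = np.zeros(n - 1)             # magnetic field at half nodes

    for t in range(steps):
        hy += c * (ex[1:] - ex[:-1])                    # H update, staggered half step
        ex[1:-1] += c * (hy[1:] - hy[:-1])              # E update (ends fixed at zero)
        ex[n // 4] += np.exp(-((t - 40.0) / 12.0)**2)   # soft Gaussian source

    print(float(np.max(np.abs(ex))))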
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
NASA Astrophysics Data System (ADS)
Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.
2018-04-01
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of ⁴⁸Ca in long-running Chimera simulations.
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Benson, T. J.
1983-01-01
A supersonic three-dimensional viscous forward-marching computer design code called PEPSIS is used to obtain a numerical solution of the three-dimensional problem of the interaction of a glancing sidewall oblique shock wave and a turbulent boundary layer. Very good results are obtained for a test case that was run to investigate the use of the wall-function boundary-condition approximation for a highly complex three-dimensional shock-boundary layer interaction. Two additional test cases (coarse mesh and medium mesh) are run to examine the question of near-wall resolution when no-slip boundary conditions are applied. A comparison with experimental data shows that the PEPSIS code gives excellent results in general and is practical for three-dimensional supersonic inlet calculations.
NASA Astrophysics Data System (ADS)
Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.
1992-09-01
In stereotactic treatment planning the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time it is now possible to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.
Determination of the temperature field of shell structures
NASA Astrophysics Data System (ADS)
Rodionov, N. G.
1986-10-01
A stationary heat conduction problem is formulated for the case of shell structures, such as those found in gas-turbine and jet engines. A two-dimensional elliptic differential equation of stationary heat conduction is obtained which allows, in an approximate manner, for temperature changes along a third variable, i.e., the shell thickness. The two-dimensional problem is reduced to a series of one-dimensional problems which are then solved using efficient difference schemes. The approach proposed here is illustrated by a specific example.
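Each of the resulting one-dimensional problems leads to a tridiagonal linear system, for which the Thomas algorithm is the standard efficient difference-scheme workhorse. A sketch for a 1-D steady conduction problem with fixed end temperatures (our discretization, purely illustrative):

    import numpy as np

    def thomas(a, b, c, d):
        # Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = dp.copy()
        for i in range(n - 2, -1, -1):
            x[i] -= cp[i] * x[i + 1]
        return x

    # -T'' = 0 on (0, 1) with T(0) = 300 and T(1) = 400: interior unknowns only
    n = 9
    a, b, c = -np.ones(n), 2.0 * np.ones(n), -np.ones(n)
    d = np.zeros(n)
    d[0], d[-1] = 300.0, 400.0                  # boundary values moved to the right-hand side
    print(thomas(a, b, c, d))                   # linear profile between the wall temperatures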
Yiannoutsos, Constantin T.; Nakas, Christos T.; Navia, Bradford A.
2013-01-01
We present the multi-dimensional Receiver Operating Characteristic (ROC) surface, a plot of the true classification rates of tests based on levels of biological markers, for multi-group discrimination, as an extension of the ROC curve commonly used in two-group diagnostic testing. The volume under this surface (VUS) is a global accuracy measure of a test to classify subjects into multiple groups and is useful for detecting trends in marker measurements. We used three-dimensional ROC surfaces, and the associated VUS, to discriminate between HIV-negative (NEG), HIV-positive neurologically asymptomatic (NAS) subjects and patients with AIDS dementia complex (ADC), using brain metabolites measured by proton MRS. These were ratios of markers of inflammation, choline (Cho) and myoinositol (MI), and of brain injury, N-acetyl aspartate (NAA), divided by creatine (Cr), measured in the basal ganglia and the frontal white matter. Statistically significant trends were observed in the three groups with respect to MI/Cr (VUS=0.43; 95% confidence interval (CI) 0.33-0.53) and Cho/Cr (0.36; 0.27-0.45) in the basal ganglia and NAA/Cr in the frontal white matter (FWM) (0.29; 0.20-0.38), suggesting a continuum of injury during the neurologically asymptomatic stage of HIV infection, particularly with respect to brain inflammation. Adjusting for age increased the combined classification accuracy of age and NAA/Cr (p=0.053). Pairwise comparisons suggested that the neuronal damage associated with NAA/Cr decreases was mainly observed in individuals with ADC, raising issues of synergism between HIV infection and age and possible acceleration of neurological deterioration in an aging HIV-positive population. The three-dimensional ROC surface and its associated VUS are useful for assessing marker accuracy, detecting data trends and offering insight into disease processes affecting multiple groups. PMID:18191586
Thermally induced rarefied gas flow in a three-dimensional enclosure with square cross-section
NASA Astrophysics Data System (ADS)
Zhu, Lianhua; Yang, Xiaofan; Guo, Zhaoli
2017-12-01
Rarefied gas flow in a three-dimensional enclosure induced by nonuniform temperature distribution is numerically investigated. The enclosure has a square channel-like geometry with alternatively heated closed ends and lateral walls with a linear temperature distribution. A recently proposed implicit discrete velocity method with a memory reduction technique is used to numerically simulate the problem based on the nonlinear Shakhov kinetic equation. The Knudsen number dependencies of the vortices pattern, slip velocity at the planar walls and edges, and heat transfer are investigated. The influences of the temperature ratio imposed at the ends of the enclosure and the geometric aspect ratio are also evaluated. The overall flow pattern shows similarities with those observed in two-dimensional configurations in literature. However, features due to the three-dimensionality are observed with vortices that are not identified in previous studies on similar two-dimensional enclosures at high Knudsen and small aspect ratios.
Critical examination of quantum oscillations in SmB6
NASA Astrophysics Data System (ADS)
Riseborough, Peter S.; Fisk, Z.
2017-11-01
We critically review the results of magnetic torque measurements on SmB6 that show quantum oscillations. Similar studies have been given two different interpretations. One interpretation is based on the existence of metallic surface states, while the second interpretation is in terms of a three-dimensional Fermi surface involving neutral fermionic excitations. We suggest that the low-field oscillations that are seen by both groups for B fields as small as 6 T might be due to metallic surface states. The high-field three-dimensional oscillations are only seen by one group, for fields B > 18 T. The phenomenon of magnetic breakthrough occurs at high fields and involves the formation of Landau orbits that produces a direction-dependent suppression of Bragg scattering. We argue that the measurements performed under higher-field conditions are fully consistent with expectations based on a three-dimensional semiconducting state with magnetic breakthrough.
Mathematical modeling of heat transfer problems in the permafrost
NASA Astrophysics Data System (ADS)
Gornov, V. F.; Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.
2014-11-01
In this work we present results of numerical simulation of three-dimensional temperature fields in soils for various applied problems: a railway line under permafrost conditions with different geometries, a horizontal underground storage tunnel, and greenhouses of various designs in the Far North. The mathematical model of the process is described by a nonstationary heat equation with phase transitions of pore water. The numerical realization of the problem is based on the finite element method using the FEniCS scientific computing library. For the numerical calculations we use high-performance computing systems.
Abdelrahman, M; Belramman, A; Salem, R; Patel, B
2018-05-01
To compare the performance of novices in laparoscopic peg transfer and intra-corporeal suturing tasks in two-dimensional (2D), three-dimensional (3D) and ultra-high definition (4K) vision systems. Twenty-four novices were randomly assigned to 2D, 3D and 4K groups, eight in each group. All participants performed the two tasks on a box trainer until reaching proficiency. Their performance was assessed based on completion time, number of errors and number of repetitions using the validated FLS proficiency criteria. Eight candidates in each group completed the training curriculum. The mean performance time (in minutes) for the 2D group was 558.3, more than that of the 3D and 4K groups at 316.7 and 310.4 min respectively (P < 0.0001). The mean number of repetitions was lower for the 3D and 4K groups versus the 2D group: 125.9 and 127.4 respectively versus 152.1 (P < 0.0001). The mean number of errors was lower for the 4K group versus the 3D and 2D groups: 1.2 versus 26.1 and 50.2 respectively (P < 0.0001). The 4K vision system improved accuracy in acquiring laparoscopic skills for novices in complex tasks, as shown by a significant reduction in the number of errors compared to the 3D and 2D vision systems. The 3D and 4K vision systems significantly improved speed and accuracy when compared to the 2D vision system, based on shorter performance time, fewer errors and fewer repetitions. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso in the present setup is also analyzed with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
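The long-memory setting of the second chapter is easy to probe empirically: generate moving-average errors whose coefficients decay like k^(d-1) and check which coefficients Lasso selects. A synthetic sketch (our parameter choices; recovery depends on the penalty level):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, d = 200, 500, 0.3

    # Long-memory MA errors: coefficients a_k ~ k**(d - 1) decay slowly
    a = np.arange(1, 1001) ** (d - 1.0)
    z = rng.standard_normal(n + a.size)
    eps = np.array([a @ z[i:i + a.size][::-1] for i in range(n)])

    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = 2.0                               # sparse truth: five active coefficients
    y = X @ beta + eps

    fit = Lasso(alpha=0.5).fit(X, y)
    print("recovered support:", np.flatnonzero(fit.coef_))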
Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model.
Bi, Size; Liang, Xiao; Huang, Ting-Lei
2016-01-01
Word embedding, a lexical vector representation generated via a neural linguistic model (NLM), has been empirically demonstrated to be appropriate for improving the performance of traditional language models. However, the very high dimensionality inherent in NLMs leads to problems with hyperparameters and long training times in modeling. Here, we propose a force-directed method to alleviate these problems and simplify the generation of word embeddings. In this framework, each word is treated as a point in the real world; thus it can approximately simulate physical movement following certain mechanics. To simulate the variation of meaning in phrases, we use fracture mechanics to model the formation and breakdown of the meaning combined by a 2-gram word group. With experiments on the natural language tasks of part-of-speech tagging, named entity recognition, and semantic role labeling, the results demonstrate that the 2-dimensional word embedding can rival the word embeddings generated by classic NLMs in terms of accuracy, recall, and text visualization.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The multi-robot task allocation problem is to allocate tasks among robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, an approach based on a two-dimensional artificial fish swarm algorithm is proposed in this paper. In this approach, the conventional artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector; thus, each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
Flowing toward Correct Contributions during Group Problem Solving: A Statistical Discourse Analysis
ERIC Educational Resources Information Center
Chiu, Ming Ming
2008-01-01
Groups that created more correct ideas (correct contributions or CCs) might be more likely to solve a problem, and students' recent actions (micro-time context) might aid CC creation. 80 high school students worked in groups of 4 on an algebra problem. Groups with higher mathematics grades or more CCs were more likely to solve the problem. Dynamic…
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
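When closed-form approximations like these are unavailable, the same power question can always be answered, more slowly, by simulation. A generic Monte Carlo power sketch for an overall two-group comparison of correlated high-dimensional outcomes (a toy sum statistic under compound symmetry, not the paper's univariate repeated-measures approximation):

    import numpy as np

    def mc_power(n_per_group=15, p=50, effect=0.5, rho=0.3, alpha=0.05, reps=2000, seed=0):
        rng = np.random.default_rng(seed)
        cov = rho * np.ones((p, p)) + (1.0 - rho) * np.eye(p)   # compound-symmetry correlation
        L = np.linalg.cholesky(cov)

        def diff_of_means(shift):
            g1 = rng.standard_normal((n_per_group, p)) @ L.T
            g2 = rng.standard_normal((n_per_group, p)) @ L.T + shift
            return abs(g1.mean() - g2.mean())

        null = np.sort([diff_of_means(0.0) for _ in range(reps)])
        crit = null[int((1.0 - alpha) * reps)]                  # simulated critical value
        return float(np.mean([diff_of_means(effect) > crit for _ in range(reps)]))

    print(mc_power())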
Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations
Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul
2015-01-01
The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid excessive numerical diffusion. The second order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems is carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
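The essential ingredient of the scheme, local propagation speeds entering the numerical flux, is easiest to see on a scalar law. A first-order central-upwind (local Lax-Friedrichs) sketch for Burgers' equation, a drastic simplification of the SRHD system solved in the paper:

    import numpy as np

    def step(u, dx, dt):
        # One forward-Euler step of a central-upwind scheme for u_t + (u^2/2)_x = 0
        f = 0.5 * u**2
        uL, uR = u[:-1], u[1:]
        a = np.maximum(np.abs(uL), np.abs(uR))              # local propagation speeds
        flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (uR - uL)
        un = u.copy()
        un[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        return un

    x = np.linspace(0.0, 1.0, 401)
    u = np.where(x < 0.5, 1.0, 0.0)                         # Riemann data: right-moving shock
    for _ in range(200):
        u = step(u, dx=x[1] - x[0], dt=0.001)
    print(x[np.argmin(np.abs(u - 0.5))])                    # shock near x = 0.5 + 0.5 * 0.2 = 0.6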
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Ying-jun; Jia, Zhen-yuan; Zhang, Jun; Qian, Min
2011-01-01
In the working process of huge heavy-load manipulators, such as free forging machines, hydraulic die-forging presses, forging manipulators, heavy grasping manipulators, and large-displacement manipulators, measurement of the six-dimensional heavy force/torque and real-time force feedback at the operation interface are the basis for realizing coordinated operation control and force compliance control. It is also an effective way to raise control accuracy and achieve highly efficient manufacturing. To solve the dynamic measurement problem of six-dimensional, time-varying heavy loads in extreme manufacturing processes, a novel principle of parallel load sharing for six-dimensional heavy force/torque is put forward. The measuring principle of the six-dimensional force sensor is analyzed, and the spatial model is built and decoupled. The load sharing ratios in the vertical and horizontal directions are analyzed and calculated. The mapping relationship between the six-dimensional heavy force/torque value to be measured and the output force value is built. The finite element model of the parallel piezoelectric six-dimensional heavy force/torque sensor is set up, and its static characteristics are analyzed with the ANSYS software. The main parameters that affect the load sharing ratio are analyzed. Experiments on load sharing with different diameters of the parallel axis are designed. The results show that the six-dimensional heavy force/torque sensor has good linearity, with non-linearity errors of less than 1%. The parallel axis achieves a good load sharing effect: the larger the diameter, the better the load sharing. The experimental results are in accordance with the FEM analysis. The sensor has the advantages of a large measuring range, good linearity, high natural frequency, and high rigidity. It can be widely used in extreme environments for real-time, accurate measurement of six-dimensional, time-varying huge loads on manipulators.
Two fast approximate wavelet algorithms for image processing, classification, and recognition
NASA Astrophysics Data System (ADS)
Wickerhauser, Mladen V.
1994-07-01
We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements is more than 1000 or so, the classical O(N³) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method that has complexity O(N² log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.
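For contrast, the classical POD that the paper accelerates is just a singular value decomposition of the mean-centered snapshot matrix. A dense-SVD baseline sketch on placeholder data (this is the costly classical route, not the O(N² log N) best-basis method):

    import numpy as np

    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((1024, 64))       # N measurements x M snapshots (placeholder)

    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.95)) + 1        # modes capturing 95% of the variance
    modes = U[:, :r]
    print(r, modes.shape)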
Numerical applications of the advective-diffusive codes for the inner magnetosphere
NASA Astrophysics Data System (ADS)
Aseev, N. A.; Shprits, Y. Y.; Drozdov, A. Y.; Kellerman, A. C.
2016-11-01
In this study we present analytical solutions for convection and diffusion equations. We gather here the analytical solutions for the one-dimensional convection equation, the two-dimensional convection problem, and the one- and two-dimensional diffusion equations. Using the obtained analytical solutions, we test the four-dimensional Versatile Electron Radiation Belt code (the VERB-4D code), which solves the modified Fokker-Planck equation with additional convection terms. The ninth-order upwind numerical scheme for the one-dimensional convection equation shows much more accurate results than those obtained with the third-order scheme. The universal limiter eliminates unphysical oscillations generated by high-order linear upwind schemes. Decreasing the space step leads to convergence of the numerical solution of the two-dimensional diffusion equation with mixed terms to the analytical solution. We compare the results of the third- and ninth-order schemes applied to magnetospheric convection modeling. The results show significant differences in electron fluxes near geostationary orbit when different numerical schemes are used.
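The kind of convection test described is simple to reproduce at first order: advect a profile with the upwind scheme and compare against the exact translated solution. A sketch for u_t + c u_x = 0 (our toy setup; the paper's third- and ninth-order schemes sharpen exactly this comparison):

    import numpy as np

    nx, c, dt, steps = 400, 1.0, 0.001, 250
    dx = 1.0 / nx
    x = np.arange(nx) * dx
    u = np.exp(-200.0 * (x - 0.3)**2)                    # initial Gaussian profile

    for _ in range(steps):
        u[1:] -= c * dt / dx * (u[1:] - u[:-1])          # first-order upwind, stable for c*dt/dx <= 1

    exact = np.exp(-200.0 * (x - 0.3 - c * dt * steps)**2)
    print("max error:", float(np.max(np.abs(u - exact))))   # first order is visibly dissipative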
Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.
Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk
2015-01-01
Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, severely degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that, in terms of system throughput performance, the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring changes in the objective function.
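The correlation-grouping step described above amounts to finding connected components of a mutual-interference graph and handing each component to its own sub-optimizer. A small union-find sketch (placeholder interference matrix and threshold, not the paper's LTE model):

    import numpy as np

    def correlation_groups(interference, threshold):
        # Group cells into connected components of the thresholded interference graph
        n = len(interference)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                if interference[i][j] > threshold:
                    pi, pj = find(i), find(j)
                    if pi != pj:
                        parent[pi] = pj
        groups = {}
        for i in range(n):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    rng = np.random.default_rng(0)
    M = rng.random((8, 8))
    M = (M + M.T) / 2.0                                  # symmetric placeholder interference
    print(correlation_groups(M, threshold=0.7))          # each group gets its own sub-EA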
The Griffiss Institute Summer Faculty Program
2013-05-01
can inherit the advantages of the static approach while overcoming its drawbacks. Our solution is centered on the following: (i) application-layer web...inverted pendulum balancing problem. In these challenging environments we show that our algorithm not only allows NEAT to scale to high-dimensional spaces
NASA Astrophysics Data System (ADS)
Wan, Yuanxin; Sha, Ye; Luo, Shaochuan; Deng, Weijia; Wang, Xiaoliang; Xue, Gi; Zhou, Dongshan
2015-11-01
Tin dioxide (SnO2) is an attractive material for anodes in energy storage devices, because it has four times the theoretical capacity of the prevalent anode material (graphite). The main obstacle hampering the practical application of SnO2 is the pulverization problem caused by the drastic volume change (∼300%) during lithium-ion insertion and extraction, which leads to loss of electrical conductivity, unstable solid-electrolyte interphase (SEI) formation, and consequently severe capacity fading during cycling. Here, we anchored SnO2 nanocrystals into a three-dimensional graphene gel network to tackle this problem. As a result of the three-dimensional (3-D) architecture, the huge volume change during cycling was tolerated by the large free space in this 3-D construction, resulting in a high capacity of 1090 mAh g⁻¹ even after 200 cycles. Moreover, at a higher current density of 5 A g⁻¹, a reversible capacity of about 491 mAh g⁻¹ was achieved with this electrode.
Probabilistic classifiers with high-dimensional data
Kim, Kyung In; Simon, Richard
2011-01-01
For medical classification problems, it is often desirable to have a probability associated with each class. Probabilistic classifiers have received relatively little attention for small-n, large-p classification problems despite their importance in medical decision making. In this paper, we introduce 2 criteria for assessment of probabilistic classifiers, well-calibratedness and refinement, and develop corresponding evaluation measures. We evaluated several published high-dimensional probabilistic classifiers and developed 2 extensions of the Bayesian compound covariate classifier. Based on simulation studies and analysis of gene expression microarray data, we found that proper probabilistic classification is more difficult than deterministic classification. It is important to ensure that a probabilistic classifier is well calibrated, or at least not “anticonservative”, using the methods developed here. We provide this evaluation for several probabilistic classifiers and also evaluate their refinement as a function of sample size under weak and strong signal conditions. We also present a cross-validation method for evaluating the calibration and refinement of any probabilistic classifier on any data set. PMID:21087946
Global analysis of an impulsive delayed Lotka-Volterra competition system
NASA Astrophysics Data System (ADS)
Xia, Yonghui
2011-03-01
In this paper, a retarded impulsive n-species Lotka-Volterra competition system with feedback controls is studied. Some sufficient conditions are obtained to guarantee the global exponential stability (GES) and global asymptotic stability (GAS) of a unique equilibrium for such a high-dimensional biological system. The problem considered in this paper is in many aspects more general and incorporates as special cases various problems that have been extensively studied in the literature. Moreover, applying the obtained results to some special cases, I derive some new criteria that generalize and greatly improve some well-known results. A method is proposed to investigate biological systems subjected to the effect of both impulses and delays. The method is based on Banach fixed point theory and matrix spectral theory, as well as Lyapunov functions. Moreover, some novel analytic techniques are employed to study GAS and GES. It is believed that the method can be extended to other high-dimensional biological systems and complex neural networks. Finally, two examples show the feasibility of the results.
The Position Control of the Surface Motor with the Poles Distribution of Triangular Lattice
NASA Astrophysics Data System (ADS)
Watada, Masaya; Katsuyama, Norikazu; Ebihara, Daiki
Recently, high performance and accuracy have been demanded of machine tools and industrial robots. Generally, when drive with many degrees of freedom is required in machine tools or industrial robots, it has been realized by using two or more motors. For example, two-dimensional positioning stages such as the X-Y plotter or the X-Y stage enable two-dimensional drive by using one motor for each of the x and y directions. Using plural motors, however, has the problems that the equipment becomes large and the control system complicated. Because of such problems, the Surface Motor (SFM), which can drive in two directions with only one motor, has been researched. The authors have proposed an SFM designed for wide-range movement and for application to a curved surface. In this paper, the characteristics of micro-step drive under open-loop control are shown. Moreover, the introduction of closed-loop control for highly accurate positioning is examined. The drive characteristics under each control are compared.
NASA Technical Reports Server (NTRS)
Kennedy, Ronald; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element solution strategy is developed to handle traveling load problems in rolling, moving and rotating structure. The main thrust of this section consists of the development of three-dimensional and shell type moving elements. In conjunction with this work, a compatible three-dimensional contact strategy is also developed. Based on these modeling capabilities, extensive analytical and experimental benchmarking is presented. Such testing includes traveling loads in rotating structure as well as low- and high-speed rolling contact involving standing wave-type response behavior. These point to the excellent modeling capabilities of moving element strategies.
An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.
Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco
2017-04-01
In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
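The paper's formulation inverts a trained network under a concave surrogate of the zero-norm; as a much-reduced illustration of that surrogate idea, the sketch below ranks the features of a linear model by fitting with the penalty sum(1 - exp(-alpha*|w|)) (toy data and penalty parameters of our choosing, not the authors' neural-network inversion):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, p = 100, 10
    X = rng.standard_normal((n, p))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)   # features 0 and 3 matter

    alpha, lam = 5.0, 0.1

    def objective(w):
        # Least squares plus a concave approximation of the zero-norm of w
        return np.mean((X @ w - y) ** 2) + lam * np.sum(1.0 - np.exp(-alpha * np.abs(w)))

    w = minimize(objective, np.zeros(p), method="Powell").x
    print(np.argsort(-np.abs(w)))   # features ranked by |w|; 0 and 3 should lead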
Modeling change from large-scale high-dimensional spatio-temporal array data
NASA Astrophysics Data System (ADS)
Lu, Meng; Pebesma, Edzer
2014-05-01
The massive data that come from Earth observation satellites and other sensors provide significant information for modeling global change. At the same time, the high dimensionality of the data has brought challenges in data acquisition, management, effective querying, and processing. In addition, the output of Earth system modeling tends to be data intensive and needs methodologies for storage, validation, analysis, and visualization, e.g. as maps. An important proportion of Earth system observations and simulated data can be represented as multi-dimensional array data, which has received increasing attention in big data management and spatio-temporal analysis. Case studies will be developed in the natural sciences, such as climate change, hydrological modeling, and sediment dynamics, in which addressing big data problems is necessary. Multi-dimensional array-based database management and analytics systems such as Rasdaman, SciDB, and R will be applied to these cases. From these studies we hope to learn the strengths and weaknesses of these systems, how they might work together, and how the semantics of array operations differ, through addressing the problems associated with big data. Research questions include:
• How can we reduce dimensions spatially and temporally, or thematically?
• How can we extend existing GIS functions to work on multidimensional arrays?
• How can we combine data sets of different dimensionality or different resolutions?
• Can map algebra be extended to an intelligible array algebra?
• What are effective semantics for array programming of dynamic data driven applications?
• In which sense are space and time special, as dimensions, compared to other properties?
• How can we make the analysis of multi-spectral, multi-temporal and multi-sensor earth observation data easy?
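The first research question, reducing dimensions spatially, temporally, or thematically, maps directly onto array reduction operations. A small numpy sketch over a synthetic (time, lat, lon, band) cube (stand-in dimensions and band arithmetic, not a real data product):

    import numpy as np

    rng = np.random.default_rng(0)
    cube = rng.random((120, 180, 360, 4))   # (months, lat, lon, band), synthetic

    annual = cube.reshape(10, 12, 180, 360, 4).mean(axis=1)   # temporal: monthly -> yearly
    zonal = cube.mean(axis=2)                                 # spatial: average over longitude
    ratio = (cube[..., 3] - cube[..., 0]) / (cube[..., 3] + cube[..., 0])   # thematic: band index

    print(annual.shape, zonal.shape, ratio.shape)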
Thermal History and Mantle Dynamics of Venus
NASA Technical Reports Server (NTRS)
Hsui, Albert T.
1997-01-01
One objective of this research proposal is to develop a 3-D thermal history model for Venus. The basis of our study is a finite-element computer model to simulate thermal convection of fluids with highly temperature- and pressure-dependent viscosities in a three-dimensional spherical shell. A three-dimensional model for thermal history studies is necessary for the following reasons. To study planetary thermal evolution, one needs to consider the global heat budget of a planet throughout its evolution history; hence, three-dimensional models are necessary. This is in contrast to studies of some local phenomena or local structures, where models of lower dimensions may be sufficient. There are different approaches to treating three-dimensional thermal convection problems. Each approach has its own advantages and disadvantages; therefore, the choice among the various approaches is subjective and depends on the problem addressed. In our case, we are interested in the effects of viscosities that are highly temperature dependent and whose magnitudes within the computing domain can vary over many orders of magnitude. In order to resolve the rapid change of viscosities, small grid spacings are often necessary. To optimize the amount of computing, variable grids become desirable. Thus, the finite-element numerical approach is chosen for its ability to place grid elements of different sizes over the complete computational domain. For this research proposal, we did not start from scratch and develop the finite element codes from the beginning. Instead, we adopted a finite-element model developed by Baumgardner, a collaborator on this research proposal, for three-dimensional thermal convection with constant viscosity. Over the duration supported by this research proposal, significant advances have been accomplished.
Dixon-Gordon, Katherine L; Chapman, Alexander L; Lovasz, Nathalie; Walters, Kris
2011-10-01
Borderline personality disorder (BPD) is associated with poor social problem solving and problems with emotion regulation. In this study, the social problem-solving performance of undergraduates with high (n = 26), mid (n = 32), or low (n = 29) levels of BPD features was assessed with the Social Problem-Solving Inventory-Revised and using the means-ends problem-solving procedure before and after a social rejection stressor. The high-BP group, but not the low-BP group, showed a significant reduction in relevant solutions to social problems and more inappropriate solutions following the negative emotion induction. Increases in self-reported negative emotions during the emotion induction mediated the relationship between BP features and reductions in social problem-solving performance. In addition, the high-BP group demonstrated trait deficits in social problem solving on the Social Problem-Solving Inventory-Revised. These findings suggest that future research must examine social problem solving under differing emotional conditions, and that clinical interventions to improve social problem solving among persons with BP features should focus on responses to emotional contexts.
Doebling, Scott William
2016-10-22
This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
A CFD analysis of blade row interactions within a high-speed axial compressor
NASA Astrophysics Data System (ADS)
Richman, Michael Scott
Aircraft engine design presents many technical and financial hurdles. In an effort to streamline the design process, save money, and improve reliability and performance, many manufacturers are relying on computational fluid dynamics simulations. An overarching goal of the design process for military aircraft engines is to reduce size and weight while maintaining (or improving) reliability. Designers often turn to the compression system to accomplish this goal. As pressure ratios increase and the number of compression stages decreases, many problems arise; for example, stability and high-cycle fatigue (HCF) become significant as individual stage loading is increased. CFD simulations have recently been employed to assist in the understanding of these aeroelastic problems. For accurate multistage blade-row HCF prediction, it is imperative that advanced three-dimensional blade-row unsteady aerodynamic interaction codes be validated with appropriate benchmark data. This research addresses this required validation process for TURBO, an advanced three-dimensional multi-blade-row turbomachinery CFD code. The solution/prediction accuracy is characterized, identifying key flow-field parameters driving the inlet guide vane (IGV) and stator response to the rotor-generated forcing functions. The result is a quantified evaluation of the ability of TURBO to predict not only the fundamental flow-field characteristics but also the three-dimensional blade loading.
Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef
2010-01-01
In this paper, we address dynamic clustering in high-dimensional data or feature spaces as an optimization problem in which multi-dimensional particle swarm optimization (MD PSO) is used to find the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is difficult and cumbersome, if not impossible. The proposed system therefore helps professionals to quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database in which the beats of each Holter register had been manually labeled by cardiologists. The selection of the right master key-beats is the key factor in achieving a highly accurate classification: the proposed systematic approach produced results consistent with the manual labels with 99.5% average accuracy, demonstrating the efficiency of the system.
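The MD PSO/FGBF machinery is beyond a short excerpt, but the core idea, treating the number of clusters as an unknown to be optimized against a validity criterion, can be sketched with a plain scan and scikit-learn's KMeans as a stand-in for the swarm search (all data and parameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Stand-in for beat feature vectors extracted from a Holter register.
beats = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 8))
                   for c in (0.0, 2.0, 4.0)])

# MD PSO explores the cluster-count dimension stochastically; here we
# simply scan candidate counts and score each partition.
best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(beats)
    score = silhouette_score(beats, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"estimated number of clusters: {best_k} (silhouette={best_score:.2f})")
```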
CAFE: A New Relativistic MHD Code
NASA Astrophysics Data System (ADS)
Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.
2015-06-01
We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.
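For reference, the HLLE flux formula named above has a compact closed form. The sketch below applies it to the one-dimensional non-relativistic Euler equations with simple wave-speed bounds; it illustrates the formula only and is not an excerpt from CAFE, which solves the relativistic MHD system with high-order reconstruction:

```python
import numpy as np

GAMMA = 1.4

def euler_flux(U):
    """Physical flux for the 1D Euler equations; U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def wave_speeds(U):
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    c = np.sqrt(GAMMA * p / rho)
    return u - c, u + c

def hlle_flux(UL, UR):
    """HLLE flux with simple Davis-type bounds on the signal speeds."""
    sLm, sLp = wave_speeds(UL)
    sRm, sRp = wave_speeds(UR)
    SL, SR = min(sLm, sRm), max(sLp, sRp)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if SL >= 0.0:
        return FL
    if SR <= 0.0:
        return FR
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Sod-type interface states (rho, rho*u, E).
UL = np.array([1.0, 0.0, 2.5])
UR = np.array([0.125, 0.0, 0.25])
print(hlle_flux(UL, UR))
```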
Penalized Gaussian process regression and classification for high-dimensional nonlinear data.
Yi, G; Shi, J Q; Choi, T
2011-12-01
The model based on a Gaussian process (GP) prior and a kernel covariance function can be used to fit nonlinear data with multidimensional covariates. It has been used as a flexible nonparametric approach for curve fitting, classification, clustering, and other statistical problems, and has been widely applied to complex nonlinear systems in many different areas, particularly in machine learning. However, applying the model to large-scale, high-dimensional data is challenging; for example, the meat data discussed in this article have 100 highly correlated covariates. For such data, the model suffers from large variance of parameter estimation and high predictive errors, and numerically it suffers from unstable computation. In this article, a penalized likelihood framework will be applied to the GP-based model. Different penalties will be investigated, and their suitability for the characteristics of GP models will be discussed. The asymptotic properties will also be discussed, with the relevant proofs. Several applications to real biomechanical and bioinformatics data sets will be reported. © 2011, The International Biometric Society. No claim to original US government works.
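One plausible reading of the penalized-likelihood idea is an L1 penalty on the ARD inverse lengthscales of a squared-exponential kernel, which shrinks irrelevant covariates toward zero. The sketch below is a minimal illustration under that assumption; the penalty choice, data, and constants are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, d = 60, 10                                  # many covariates, few relevant
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)  # only covariate 0 matters

def neg_penalized_loglik(log_theta, lam=1.0, noise=0.01):
    """Negative GP log marginal likelihood plus an L1 penalty on the
    positive ARD weights theta, shrinking irrelevant covariates."""
    theta = np.exp(log_theta)                  # per-dimension inverse lengthscales
    diff = X[:, None, :] - X[None, :, :]
    K = np.exp(-0.5 * np.einsum('ijk,k->ij', diff ** 2, theta))
    K += noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    loglik = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
              - 0.5 * n * np.log(2.0 * np.pi))
    return -loglik + lam * theta.sum()         # theta > 0, so sum == L1 norm

res = minimize(neg_penalized_loglik, x0=np.zeros(d), method='L-BFGS-B')
print(np.round(np.exp(res.x), 3))   # the weight for covariate 0 should dominate
```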
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data are also high-dimensional. Skewed data often appear in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to improve the balance between accuracy and diversity of the base classifiers in asBagging. In view of its strong generalization capability, we adopt the support vector machine (SVM) as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of accuracy, F-measure, G-mean, and AUC evaluation criteria; it can thus be regarded as an effective and efficient tool for high-dimensional and imbalanced biomedical data.
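A minimal sketch of the asBagging-with-feature-subspace idea: each bag keeps all minority samples, bootstraps an equally sized majority subset, and trains an SVM on a random feature subset; predictions are majority votes. The subsampling details and parameters are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def asbagging_fss_fit(X, y, n_bags=15, n_features=20):
    """Asymmetric bagging with feature subspaces (illustrative sketch)."""
    minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
    models = []
    for _ in range(n_bags):
        maj = rng.choice(majority, size=len(minority), replace=True)
        idx = np.concatenate([minority, maj])          # balanced bag
        feats = rng.choice(X.shape[1], size=n_features, replace=False)
        clf = SVC(kernel='rbf').fit(X[np.ix_(idx, feats)], y[idx])
        models.append((clf, feats))
    return models

def asbagging_fss_predict(models, X):
    votes = np.mean([m.predict(X[:, f]) for m, f in models], axis=0)
    return (votes >= 0.5).astype(int)

# Synthetic skewed, high-dimensional data: 500 majority vs 50 minority.
X = rng.normal(size=(550, 200))
y = np.array([0] * 500 + [1] * 50)
X[y == 1, :5] += 1.5                # minority signal in a few features
models = asbagging_fss_fit(X, y)
print(asbagging_fss_predict(models, X[:10]))
```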
Mixing Regimes in a Spatially Confined, Two-Dimensional, Supersonic Shear Layer
1992-07-31
[Report table-of-contents fragments omitted: The Model; The Model Problems; The One-Dimensional Problem.] ...the effects of the numerical diffusion on the spectrum. Guirguis et al. and Farouk et al. have studied spatially evolving mixing layers for equal... approximations. Physical and Numerical Model, General Formulation: we solve the time-dependent, two-dimensional, compressible Navier-Stokes equations for a
Finite-dimensional integrable systems: A collection of research problems
NASA Astrophysics Data System (ADS)
Bolsinov, A. V.; Izosimov, A. M.; Tsonev, D. M.
2017-05-01
This article suggests a series of problems related to various algebraic and geometric aspects of integrability. They reflect some recent developments in the theory of finite-dimensional integrable systems such as bi-Poisson linear algebra, Jordan-Kronecker invariants of finite dimensional Lie algebras, the interplay between singularities of Lagrangian fibrations and compatible Poisson brackets, and new techniques in projective geometry.
Merz, Emily C.; Landry, Susan H.; Johnson, Ursula Y.; Williams, Jeffrey M.; Jung, Kwanghee
2016-01-01
Caregiver responsiveness has been theorized and found to support children’s early executive function (EF) development. This study examined the effects of an intervention that targeted family child care provider responsiveness on children’s EF. Family child care providers were randomly assigned to one of two intervention groups or a control group. An intervention group that received a responsiveness-focused online professional development course and another intervention group that received this online course plus weekly mentoring were collapsed into one group because they did not differ on any of the outcome variables. Children (N = 141) ranged in age from 2.5 to 5 years (mean age = 3.58 years; 52% female). At pretest and posttest, children completed delay inhibition tasks (gift delay-wrap, gift delay-bow) and conflict EF tasks (bear/dragon, dimensional change card sort), and parents reported on the children’s level of attention problems. Although there were no main effects of the intervention on children’s EF, there were significant interactions between intervention status and child age for delay inhibition and attention problems. The youngest children improved in delay inhibition and attention problems if they were in the intervention rather than the control group, whereas older children did not. These results suggest that improving family child care provider responsive behaviors may facilitate the development of certain EF skills in young preschool-age children. PMID:26941476
Thiele, Maja; Detlefsen, Sönke; Sevelsted Møller, Linda; Madsen, Bjørn Stæhr; Fuglsang Hansen, Janne; Fialla, Annette Dam; Trebicka, Jonel; Krag, Aleksander
2016-01-01
Alcohol abuse causes half of all deaths from cirrhosis in the West, but few tools are available for noninvasive diagnosis of alcoholic liver disease. We evaluated 2 elastography techniques for diagnosis of alcoholic fibrosis and cirrhosis; liver biopsy with Ishak score and collagen-proportionate area were used as reference. We performed a prospective study of 199 consecutive patients with ongoing or prior alcohol abuse, but without known liver disease. One group of patients had a high pretest probability of cirrhosis because they were identified at hospital liver clinics (in Southern Denmark). The second, lower-risk group, was recruited from municipal alcohol rehabilitation centers and the Danish national public health portal. All subjects underwent same-day transient elastography (FibroScan), 2-dimensional shear wave elastography (Supersonic Aixplorer), and liver biopsy after an overnight fast. Transient elastography and 2-dimensional shear wave elastography identified subjects in each group with significant fibrosis (Ishak score ≥3) and cirrhosis (Ishak score ≥5) with high accuracy (area under the curve ≥0.92). There was no difference in diagnostic accuracy between techniques. The cutoff values for optimal identification of significant fibrosis by transient elastography and 2-dimensional shear wave elastography were 9.6 kPa and 10.2 kPa, and for cirrhosis 19.7 kPa and 16.4 kPa. Negative predictive values were high for both groups, but the positive predictive value for cirrhosis was >66% in the high-risk group vs approximately 50% in the low-risk group. Evidence of alcohol-induced damage to cholangiocytes, but not ongoing alcohol abuse, affected liver stiffness. The collagen-proportionate area correlated with Ishak grades and accurately identified individuals with significant fibrosis and cirrhosis. In a prospective study of individuals at risk for liver fibrosis due to alcohol consumption, we found elastography to be an excellent tool for diagnosing liver fibrosis and for excluding (ruling out rather than ruling in) cirrhosis. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
Status of a standard for neutron skyshine calculation and measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westfall, R.M.; Wright, R.Q.; Greenborg, J.
1990-01-01
An effort has been under way for several years to prepare a draft standard, ANS-6.6.2, Calculation and Measurement of Direct and Scattered Neutron Radiation from Contained Sources Due to Nuclear Power Operations. At the outset, the work group adopted a three-phase study involving one-dimensional analyses, a measurements program, and multi-dimensional analyses. Of particular interest are the neutron radiation levels associated with dry-fuel storage at reactor sites. The need for dry storage has been investigated for various scenarios of repository and monitored retrievable storage (MRS) facilities availability with the waste stream analysis model. The concern is with long-term integrated, low-level doses at long distances from a multiplicity of sources. To evaluate the conservatism associated with one-dimensional analyses, the work group has specified a series of simple problems. Sources as a function of fuel exposure were determined for a Westinghouse 17 x 17 pressurized water reactor assembly with the ORIGEN-S module of the SCALE system. The energy degradation of the 35 GWd/ton U sources was determined for two generic designs of dry-fuel storage casks.
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine-grained parallelism and execution requirements that are either not predictable or too costly to predict. The main issues in mapping such a problem onto medium-scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared-memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs between balanced workload and communication/synchronization costs are studied. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
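A minimal sketch of the parameterized-aggregation idea: fine-grained units are grouped into contiguous aggregates of size g, and aggregates are dealt cyclically to processors so that local hot spots spread across the machine; g is the tunable granularity trading balance against communication. The workload and numbers below are illustrative:

```python
import numpy as np

def map_aggregates(n_tasks, n_procs, g):
    """Group n_tasks fine-grained units into contiguous aggregates of
    size g, then assign aggregates to processors cyclically."""
    owner = np.empty(n_tasks, dtype=int)
    n_agg = (n_tasks + g - 1) // g
    for a in range(n_agg):
        owner[a * g:(a + 1) * g] = a % n_procs
    return owner

# Workload with a hot region: a smaller g balances it better, but
# implies more boundary communication between neighboring aggregates.
work = np.ones(1000)
work[200:400] = 5.0
for g in (200, 50, 10):
    owner = map_aggregates(1000, 4, g)
    loads = [work[owner == p].sum() for p in range(4)]
    print(f"g={g:3d}  per-processor load: {np.round(loads, 0)}")
```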
Direct solution of the H(1s)-H + long-range interaction problem in momentum space
NASA Astrophysics Data System (ADS)
Koga, Toshikatsu
1985-02-01
Perturbation equations for the H(1s)-H+ long-range interaction are solved directly in momentum space up to the fourth order with respect to the reciprocal of the internuclear distance. As in the hydrogen atom problem, the Fock transformation is used which projects the momentum vector of an electron from the three-dimensional hyperplane onto the four-dimensional hypersphere. Solutions are given as linear combinations of several four-dimensional spherical harmonics. The present results add an example to the momentum-space solution of the nonspherical potential problem.
REVIEWS OF TOPICAL PROBLEMS: Global phase-stable radiointerferometric systems
NASA Astrophysics Data System (ADS)
Dravskikh, A. F.; Korol'kov, Dimitrii V.; Pariĭskiĭ, Yu N.; Stotskiĭ, A. A.; Finkel'steĭn, A. M.; Fridman, P. A.
1981-12-01
We discuss from a unified standpoint the possibility of building a phase-stable interferometric system with very long baselines that operates around the clock with real-time data processing. The various problems involved in realizing this idea are discussed: methods for suppressing instrumental and tropospheric phase fluctuations, methods for constructing two-dimensional images and determining the coordinates of radio sources with high angular resolution, and the problem of the optimal structure of the interferometric system. We review in detail the scientific problems from various branches of natural science (astrophysics, cosmology, geophysics, geodynamics, astrometry, etc.) whose solution requires superhigh angular resolution.
A Variational Nodal Approach to 2D/1D Pin Resolved Neutron Transport for Pressurized Water Reactors
Zhang, Tengfei; Lewis, E. E.; Smith, M. A.; ...
2017-04-18
A two-dimensional/one-dimensional (2D/1D) variational nodal approach is presented for pressurized water reactor core calculations without fuel-moderator homogenization. A 2D/1D approximation to the within-group neutron transport equation is derived and converted to an even-parity form. The corresponding nodal functional is presented and discretized to obtain response matrix equations. Within the nodes, finite elements in the x-y plane and orthogonal functions in z are used to approximate the spatial flux distribution. On the radial interfaces, orthogonal polynomials are employed; on the axial interfaces, piecewise constants corresponding to the finite elements eliminate the interface homogenization that has been a challenge for method of characteristics (MOC)-based 2D/1D approximations. The angular discretization utilizes an even-parity integral method within the nodes, and low-order spherical harmonics (P_N) on the axial interfaces. The x-y surfaces are treated with high-order P_N combined with quasi-reflected interface conditions. Furthermore, the method is applied to the C5G7 benchmark problems and compared to Monte Carlo reference calculations.
On irregular singularity wave functions and superconformal indices
NASA Astrophysics Data System (ADS)
Buican, Matthew; Nishinaka, Takahiro
2017-09-01
We generalize, in a manifestly Weyl-invariant way, our previous expressions for irregular singularity wave functions in two-dimensional SU(2) q-deformed Yang-Mills theory to SU( N). As an application, we give closed-form expressions for the Schur indices of all ( A N - 1 , A N ( n - 1)-1) Argyres-Douglas (AD) superconformal field theories (SCFTs), thus completing the computation of these quantities for the ( A N , A M ) SCFTs. With minimal effort, our wave functions also give new Schur indices of various infinite sets of "Type IV" AD theories. We explore the discrete symmetries of these indices and also show how highly intricate renormalization group (RG) flows from isolated theories and conformal manifolds in the ultraviolet to isolated theories and (products of) conformal manifolds in the infrared are encoded in these indices. We compare our flows with dimensionally reduced flows via a simple "monopole vev RG" formalism. Finally, since our expressions are given in terms of concise Lie algebra data, we speculate on extensions of our results that might be useful for probing the existence of hypothetical SCFTs based on other Lie algebras. We conclude with a discussion of some open problems.
Symmetry blockade and its breakdown in energy equipartition of square graphene resonators
NASA Astrophysics Data System (ADS)
Wang, Yisen; Zhu, Zhigang; Zhang, Yong; Huang, Liang
2018-03-01
The interaction between flexural modes due to nonlinear potentials is critical to the heat conductivity and mechanical vibration of two-dimensional materials such as graphene. Much effort has been devoted to understanding the underlying mechanism. In this paper, we examine solely the out-of-plane flexural modes and identify their energy flow pathway during the equipartition process. In particular, the modes are grouped into four classes by their distinct symmetries. The couplings are significantly larger within a class than between classes, forming symmetry blockades. As a result, the energy first flows to the modes in the same symmetry class. Breakdown of the symmetry blockade, i.e., inter-class energy flow, starts when the displacement profile becomes complex and the inter-class couplings take on non-negligible values. The equipartition time follows the stretched exponential law and survives in the thermodynamic limit. These results provide fundamental understanding of the Fermi-Pasta-Ulam problem in two-dimensional systems with complex potentials and clearly reveal the physical picture of dynamical interactions between the flexural modes, which is crucial to understanding their contribution to high thermal conductivity and the mechanism of energy dissipation that may intrinsically limit the quality factor of the resonator.
Stability of Dirac Liquids with Strong Coulomb Interaction.
Tupitsyn, Igor S; Prokof'ev, Nikolay V
2017-01-13
We develop and apply the diagrammatic Monte Carlo technique to address the problem of the stability of the Dirac liquid state (in a graphene-type system) against the strong long-range part of the Coulomb interaction. So far, all attempts to deal with this problem in the field-theoretical framework were limited either to perturbative or random phase approximation and functional renormalization group treatments, with diametrically opposite conclusions. Our calculations aim at the approximation-free solution with controlled accuracy by computing vertex corrections from higher-order skeleton diagrams and establishing the renormalization group flow of the effective Coulomb coupling constant. We unambiguously show that with increasing the system size L (up to ln(L)∼40), the coupling constant always flows towards zero; i.e., the two-dimensional Dirac liquid is an asymptotically free T=0 state with divergent Fermi velocity.
On the theory of oscillating airfoils of finite span in subsonic compressible flow
NASA Technical Reports Server (NTRS)
Reissner, Eric
1950-01-01
The problem of oscillating lifting surface of finite span in subsonic compressible flow is reduced to an integral equation. The kernel of the integral equation is approximated by a simpler expression, on the basis of the assumption of sufficiently large aspect ratio. With this approximation the double integral occurring in the formulation of the problem is reduced to two single integrals, one of which is taken over the chord and the other over the span of the lifting surface. On the basis of this reduction the three-dimensional problem appears separated into two two-dimensional problems, one of them being effectively the problem of two-dimensional flow and the other being the problem of spanwise circulation distribution. Earlier results concerning the oscillating lifting surface of finite span in incompressible flow are contained in the present more general results.
Extension of modified power method to two-dimensional problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk
2016-09-01
In this study, the generalized modified power method was extended to two-dimensional problems. A direct application of the method to two-dimensional problems was shown to be unstable when the number of requested eigenmodes is larger than a certain problem-dependent number. The root cause of this instability has been identified as the degeneracy of the transfer matrix. In order to resolve this instability, the number of sub-regions for the transfer matrix was increased to be larger than the number of requested eigenmodes, and a new transfer matrix was introduced accordingly, which can be calculated by the least squares method. The stability of the new method has been successfully demonstrated with a neutron diffusion eigenvalue problem and the 2D C5G7 benchmark problem.
Large-angle slewing maneuvers for flexible spacecraft
NASA Technical Reports Server (NTRS)
Chun, Hon M.; Turner, James D.
1988-01-01
A new class of closed-form solutions for finite-time linear-quadratic optimal control problems is presented. The solutions involve Potter's solution for the differential matrix Riccati equation, which assumes the form of a steady-state plus transient term. Illustrative examples are presented which show that the new solutions are more computationally efficient than alternative solutions based on the state transition matrix. As an application of the closed-form solutions, the neighboring extremal path problem is presented for a spacecraft retargeting maneuver where a perturbed plant with off-nominal boundary conditions now follows a neighboring optimal trajectory. The perturbation feedback approach is further applied to three-dimensional slewing maneuvers of large flexible spacecraft. For this problem, the nominal solution is the optimal three-dimensional rigid body slew. The perturbation feedback then limits the deviations from this nominal solution due to the flexible body effects. The use of frequency shaping in both the nominal and perturbation feedback formulations reduces the excitation of high-frequency unmodeled modes. A modified Kalman filter is presented for estimating the plant states.
Artificial intelligence in robot control systems
NASA Astrophysics Data System (ADS)
Korikov, A.
2018-05-01
This paper analyzes modern concepts of artificial intelligence and known definitions of the term "level of intelligence". In robotics, an artificial intelligence system is defined as a system that works intelligently and optimally. The author proposes to use optimization methods for the design of intelligent robot control systems. The article formalizes the problems of robot control system design as a class of extremum problems with constraints. Solving these problems is rather complicated due to their high dimensionality, polymodality, and a priori uncertainty. Decomposition of the extremum problems by the method suggested by the author reduces them to a sequence of simpler problems that can be successfully solved by modern computing technology. Several possible approaches to solving such problems are considered in the article.
NASA Astrophysics Data System (ADS)
Jeon, Kyungmoon; Huffman, Douglas; Noh, Taehee
2005-10-01
This study investigated the effects of a thinking-aloud pair problem solving (TAPPS) approach on students' chemistry problem-solving performance and verbal interactions. A total of 85 eleventh-grade students from three classes in a Korean high school were randomly assigned to one of three groups: individually using a problem-solving strategy, using a problem-solving strategy with TAPPS, or the control group. After instruction, students' problem-solving performance was examined. The results showed that students in both the individual and TAPPS groups performed better than those in the control group on recalling the related law and on mathematical execution, while students in the TAPPS group performed better than those in the other groups on conceptual knowledge. To investigate the verbal behaviors during TAPPS, the verbal behaviors of solvers and listeners were classified into eight categories. Listeners' verbal behaviors of "agreeing" and "pointing out", and solvers' verbal behavior of "modifying", were positively related to listeners' problem-solving performance. There was, however, a negative correlation between listeners' use of "pointing out" and solvers' problem-solving performance. The educational implications of this study are discussed.
Time-delayed feedback technique for suppressing instabilities in time-periodic flow
NASA Astrophysics Data System (ADS)
Shaabani-Ardali, Léopold; Sipp, Denis; Lesshafft, Lutz
2017-11-01
A numerical method is presented that allows to compute time-periodic flow states, even in the presence of hydrodynamic instabilities. The method is based on filtering nonharmonic components by way of delayed feedback control, as introduced by Pyragas [Phys. Lett. A 170, 421 (1992), 10.1016/0375-9601(92)90745-8]. Its use in flow problems is demonstrated here for the case of a periodically forced laminar jet, subject to a subharmonic instability that gives rise to vortex pairing. The optimal choice of the filter gain, which is a free parameter in the stabilization procedure, is investigated in the context of a low-dimensional model problem, and it is shown that this model predicts well the filter performance in the high-dimensional flow system. Vortex pairing in the jet is efficiently suppressed, so that the unstable periodic flow state in response to harmonic forcing is accurately retrieved. The procedure is straightforward to implement inside any standard flow solver. Memory requirements for the delayed feedback control can be significantly reduced by means of time interpolation between checkpoints. Finally, the method is extended for the treatment of periodic problems where the frequency is not known a priori. This procedure is demonstrated for a three-dimensional cubic lid-driven cavity in supercritical conditions.
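The essence of the Pyragas filter is the feedback term -λ(q(t) - q(t - T)), which vanishes on any T-periodic state and damps nonharmonic components. The sketch below applies it to a toy forced oscillator rather than the jet flow; the model, gain, and time step are illustrative assumptions:

```python
import numpy as np

# Toy forced oscillator: the Pyragas term u = -lam*(x(t) - x(t-T))
# vanishes on any T-periodic state and damps nonharmonic components,
# so the periodic response is reached much faster than by the weak
# natural damping alone.  Model and gain are illustrative.
T = 2.0 * np.pi                  # forcing period
dt = 1.0e-3
periods = 100
steps = int(periods * T / dt)
delay = int(round(T / dt))
lam = 0.5                        # filter gain, a free parameter as in the paper

x, v = 1.0, 0.0
hist = np.zeros(delay)           # circular buffer holding x over one period
for n in range(steps):
    t = n * dt
    x_delayed = hist[n % delay]
    u = -lam * (x - x_delayed) if n >= delay else 0.0
    a = -0.02 * v - x + 0.5 * np.cos(t) + u
    hist[n % delay] = x          # store x(t) for reuse at t + T
    v += dt * a                  # semi-implicit Euler step
    x += dt * v

print(f"nonharmonic residual |x(t) - x(t-T)| after {periods} periods: "
      f"{abs(x - hist[steps % delay]):.2e}")
```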
A novel framework to alleviate the sparsity problem in context-aware recommender systems
NASA Astrophysics Data System (ADS)
Yu, Penghua; Lin, Lanfen; Wang, Jing
2017-04-01
Recommender systems have become indispensable for services in the era of big data. To improve accuracy and satisfaction, context-aware recommender systems (CARSs) attempt to incorporate contextual information into recommendations. Typically, valid and influential contexts are determined in advance by domain experts or feature selection approaches. Most studies have focused on utilizing the unitary context due to the differences between various contexts. Meanwhile, multi-dimensional contexts will aggravate the sparsity problem, which means that the user preference matrix would become extremely sparse. Consequently, there are not enough or even no preferences in most multi-dimensional conditions. In this paper, we propose a novel framework to alleviate the sparsity issue for CARSs, especially when multi-dimensional contextual variables are adopted. Motivated by the intuition that the overall preferences tend to show similarities among specific groups of users and conditions, we first explore to construct one contextual profile for each contextual condition. In order to further identify those user and context subgroups automatically and simultaneously, we apply a co-clustering algorithm. Furthermore, we expand user preferences in a given contextual condition with the identified user and context clusters. Finally, we perform recommendations based on expanded preferences. Extensive experiments demonstrate the effectiveness of the proposed framework.
One-dimensional high-order compact method for solving Euler's equations
NASA Astrophysics Data System (ADS)
Mohamad, M. A. H.; Basri, S.; Basuno, B.
2012-06-01
In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most famous and relevant are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the shock-tube problems of receding flow and shock waves, were not investigated in that work. The objective of this study is therefore to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. Discretization of the convective flux terms uses a hybrid flux-vector splitting known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting and the robustness of flux-vector splitting. The AUSM scheme combined with the third-order compact approximation of the finite difference equations was then analyzed in detail. For the first-order schemes in one dimension, an explicit time-integration method is adopted. In addition, the development and modification of the source code for one-dimensional flow is validated with four test cases: an unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results also served to verify that the Riemann problem is correctly identified. Further analysis compared the behavior of the AUSM scheme against experimental results obtained in previous work and against computational results generated by the van Leer, KFVS, and AUSMPW schemes. There is a remarkable improvement with the extension of the AUSM scheme from first-order to third-order accuracy in terms of shocks, contact discontinuities, and rarefaction waves.
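For reference, the basic (first-order) AUSM interface flux splits the interface Mach number and pressure separately; the sketch below follows the standard Liou-Steffen splittings for the one-dimensional Euler equations. The third-order compact reconstruction used in the paper is omitted:

```python
import numpy as np

GAMMA = 1.4

def split_mach(M, sign):
    """Van Leer-type Mach-number splittings M+ (sign=+1) and M- (sign=-1)."""
    if abs(M) <= 1.0:
        return sign * 0.25 * (M + sign) ** 2
    return 0.5 * (M + sign * abs(M))

def split_pressure(M, p, sign):
    """Corresponding pressure splittings P+ and P-."""
    if abs(M) <= 1.0:
        return 0.25 * p * (M + sign) ** 2 * (2.0 - sign * M)
    return 0.5 * p * (M + sign * abs(M)) / M

def ausm_flux(UL, UR):
    """First-order AUSM interface flux, U = (rho, rho*u, E)."""
    def primitives(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
        c = np.sqrt(GAMMA * p / rho)
        H = (E + p) / rho
        return rho, u, p, c, H
    rhoL, uL, pL, cL, HL = primitives(UL)
    rhoR, uR, pR, cR, HR = primitives(UR)
    m = split_mach(uL / cL, +1.0) + split_mach(uR / cR, -1.0)  # interface Mach
    phi = (np.array([rhoL * cL, rhoL * cL * uL, rhoL * cL * HL]) if m >= 0.0
           else np.array([rhoR * cR, rhoR * cR * uR, rhoR * cR * HR]))
    p_half = split_pressure(uL / cL, pL, +1.0) + split_pressure(uR / cR, pR, -1.0)
    return m * phi + np.array([0.0, p_half, 0.0])

UL = np.array([1.0, 0.0, 2.5])      # Sod left state
UR = np.array([0.125, 0.0, 0.25])   # Sod right state
print(ausm_flux(UL, UR))
```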
External Boundary Conditions for Three-Dimensional Problems of Computational Aerodynamics
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon V.
1997-01-01
We consider an unbounded steady-state flow of viscous fluid over a three-dimensional finite body or configuration of bodies. For the purpose of solving this flow problem numerically, we discretize the governing (Navier-Stokes) equations on a finite-difference grid. The grid obviously cannot stretch from the body to infinity, because the number of discrete variables would then not be finite. Therefore, prior to the discretization we truncate the original unbounded flow domain by introducing an artificial computational boundary at a finite distance from the body. Typically, the artificial boundary is introduced in a natural way as the external boundary of the domain covered by the grid. The flow problem formulated only on the finite computational domain rather than on the original infinite domain is clearly subdefinite unless some artificial boundary conditions (ABCs) are specified at the external computational boundary. Similarly, the discretized flow problem is subdefinite (i.e., lacks equations with respect to unknowns) unless a special closing procedure is implemented at this artificial boundary. The closing procedure in the discrete case is called the ABCs as well. In this paper, we present an innovative approach to constructing highly accurate ABCs for three-dimensional flow computations. The approach extends our previous technique developed for the two-dimensional case; it employs the finite-difference counterparts to Calderon's pseudodifferential boundary projections calculated in the framework of the difference potentials method (DPM) of Ryaben'kii. The resulting ABCs are spatially nonlocal but particularly easy to implement along with existing solvers. The new boundary conditions have been successfully combined with the NASA-developed production code TLNS3D and used for the analysis of wing-shaped configurations in subsonic (including the incompressible limit) and transonic flow regimes. As demonstrated by computational experiments and comparisons with standard (local) methods, the DPM-based ABCs allow one to greatly reduce the size of the computational domain while maintaining high accuracy of the numerical solution. Moreover, they may provide a noticeable increase in the convergence rate of multigrid iterations.
Zheng, X; Xue, Q; Mittal, R; Beilamowicz, S
2010-11-01
A new flow-structure interaction method is presented, which couples a sharp-interface immersed boundary method flow solver with a finite-element-method-based solid dynamics solver. The coupled method provides robust and high-fidelity solutions for complex flow-structure interaction (FSI) problems, such as those involving three-dimensional flow and viscoelastic solids. The FSI solver is used to simulate flow-induced vibrations of the vocal folds during phonation. Both two- and three-dimensional models have been examined, and qualitative as well as quantitative comparisons have been made with established results in order to validate the solver. The solver is used to study the onset of phonation in a two-dimensional laryngeal model and the dynamics of the glottal jet in a three-dimensional model, and results from these studies are also presented.
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Harris, James Austin; Hix, William Raphael
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
ERIC Educational Resources Information Center
Chiu, Ming Ming
2008-01-01
The micro-time context of group processes (such as argumentation) can affect a group's micro-creativity (new ideas). Eighty high school students worked in groups of four on an algebra problem. Groups with higher mathematics grades showed greater micro-creativity, and both were linked to better problem solving outcomes. Dynamic multilevel analyses…
Interior radiances in optically deep absorbing media. I - Exact solutions for one-dimensional model.
NASA Technical Reports Server (NTRS)
Kattawar, G. W.; Plass, G. N.
1973-01-01
An exact analytic solution to the one-dimensional scattering problem with arbitrary single scattering albedo and arbitrary surface albedo is presented. Expressions are given for the emergent flux from a homogeneous layer, the internal flux within the layer, and the radiative heating. A comparison of these results with the values calculated from the matrix operator theory indicates an exceedingly high accuracy. A detailed study is made of the error in the matrix operator results and its dependence on the accuracy of the starting value.
Two-dimensional Anderson-Hubbard model in the DMFT + Σ approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuchinskii, E. Z.; Kuleeva, N. A.; Nekrasov, I. A.
The density of states, the dynamic (optical) conductivity, and the phase diagram of the paramagnetic two-dimensional Anderson-Hubbard model with strong correlations and disorder are analyzed within the generalized dynamical mean field theory (DMFT + Σ approximation). Strong correlations are accounted for by the DMFT, while disorder is taken into account via the appropriate generalization of the self-consistent theory of localization. We consider the two-dimensional system with the rectangular 'bare' density of states (DOS). The DMFT effective single-impurity problem is solved by the numerical renormalization group (NRG). The 'correlated metal,' Mott insulator, and correlated Anderson insulator phases are identified from the evolution of the density of states, optical conductivity, and localization length, demonstrating both Mott-Hubbard and Anderson metal-insulator transitions in two-dimensional systems of finite size, allowing us to construct the complete zero-temperature phase diagram of the paramagnetic Anderson-Hubbard model. The localization length in our approximation is practically independent of the strength of Hubbard correlations. But the divergence of the localization length in a finite-size two-dimensional system at small disorder signifies the existence of an effective Anderson transition.
Wang, Gang; Xiong, Xunhui; Lin, Zhihua; Zheng, Jie; Fenghua, Zheng; Li, Youpeng; Liu, Yanzhen; Yang, Chenghao; Tang, Yiwei; Liu, Meilin
2018-05-31
Lithium metal anodes are considered to be the most promising anode material for next-generation advanced energy storage devices due to their high reversible capacity and extremely low anode potential. Nevertheless, the formation of dendritic Li, induced by the repeated breaking and repairing of solid electrolyte interphase layers, always causes poor cycling performance and low coulombic efficiency, as well as serious safety problems, which have hindered the practical application of Li anodes for a long time. Herein, we design an electrode by covering a polyvinyl alcohol layer with a three-dimensional nanofiber network structure through an electrospinning technique. The polar functional groups on the surface of the polymer nanofibers can restrict the deposition of Li along the fibers and regulate the deposition of Li uniformly in the voids between the nanofibers. Owing to the structural features of the polymer, the modified Li|Cu electrode displays excellent cycle stability, with a high coulombic efficiency of 98.6% after 200 cycles at a current density of 1 mA cm-2 under a deposition capacity of 1 mA h cm-2, whilst the symmetric cell using the polymer modified Li anode shows stable cycling with a low hysteresis voltage of ∼80 mV over 600 h at a current density of 5 mA cm-2.
Compton imaging tomography technique for NDE of large nonuniform structures
NASA Astrophysics Data System (ADS)
Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz
2011-09-01
In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.
MCDU-8: A Computer Code for One-Dimensional Blast Wave Problems
1975-07-01
The medium surrounding the explosion is assumed to be air obeying an ideal gas equation of state with a constant specific heat ratio, γ2, of 1.4. [Report-form OCR residue omitted. Keywords: method of characteristics, explosive blast, Pentolite spheres. Section titles recovered from the table of contents: A Sample Problem Involving the Sudden Release of a Highly Compressed Air Sphere; A Sample Problem Involving a Blast Wave Resulting from the Detonation of a ...]
Mai, Kien T; Ball, Christopher G; Kos, Zuzana; Belanger, Eric C; Islam, Shahidul; Sekhon, Harman
2014-07-01
Cystoscopic urine obtained before the resection of low-grade urothelial carcinoma (LGUC), with adequate cytological sampling of the tumor, frequently revealed the presence of three-dimensional cell groups with disordered nuclei and cellular discohesion (3DDD). 936 cystoscopic urine specimens were categorized into five groups: Group 1 (80 specimens) with biopsy-proven LGUC within 6 months of cytologic examination, Group 2 (23 specimens) with biopsy proven LGUC within 6 to 36 months of cytologic examination, Group 3 (527 specimens) with a history of LGUC but no tumor for a period of greater than 3 years, Group 4 (300 specimens) with no association with LGUC, and Group 5 (6 specimens) with urinary lithiasis. Specimens with scant cellularity accounted for 20% of those in Group 1. For 3DDD in detecting LGUC in adequate cystoscopic urine, the sensitivity was 70%, specificity was 94%. Two- or three-dimensional cell groups with ordered nuclei and/or cellular non-discohesion were often seen in specimens from Groups 4 or 5. The 3DDD was present in a significant number of cases with concurrent negative cystoscopic findings but also positive LGUC in ensuing follow-up. In these cases, 3DDD with or without tumor identified at concurrent cystoscopy were found to be morphologically similar. Furthermore, the presence of 3DDD in 8% of Group 3 likely represents urothelial dysplasia that is not cystoscopically detectable. The high specificity and sensitivity of 3DDD is demonstrated. These findings are consistent with the decreased cell adhesion and disordered nuclear arrangement of low grade urothelial neoplasia. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Teodorovich, E. V.
2018-03-01
In order to find the shape of the energy spectrum within the framework of the model of stationary homogeneous isotropic turbulence, the renormalization-group equations, which reflect the Markovian nature of the mechanism of energy transfer along the wavenumber spectrum, are used in addition to dimensional considerations and the energy balance equation. The resulting formula for the spectrum depends on three parameters: the wavenumber that determines the upper boundary of the range of turbulent energy production, the spectral flux through this boundary, and the fluid kinematic viscosity.
Toda-Lattice Solitons in α-Helical Proteins
NASA Astrophysics Data System (ADS)
Yomosa, Shigeo
1984-10-01
We propose a theory of Toda-lattice solitons in α-helical proteins which enables us to elucidate the molecular dynamics of muscle contraction. The one-dimensional chain of peptide groups joined together by H-bonds, which stabilizes the α-helical structure of proteins, can be regarded as a Toda lattice in which the potential of the H-bonding interaction between peptide groups has a remarkable nonlinearity. By using the results of theoretical studies of Toda-lattice solitons and of the initial value problem, we can describe the molecular mechanism of the transformation of chemical energy into mechanical work in the process of muscle contraction.
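A minimal numerical illustration of the mechanism: integrating a Toda chain (exponential nearest-neighbor interaction) after a localized kick produces a compression pulse that propagates without dispersing. The parameters are illustrative, not fitted to an α-helix:

```python
import numpy as np

# Toda chain standing in for the H-bonded chain of peptide groups; the
# parameters a, b, m are illustrative, not fitted to a real alpha-helix.
a, b, m = 1.0, 1.0, 1.0
N, dt, steps = 128, 0.01, 5000

x = np.zeros(N)                  # longitudinal displacements
v = np.zeros(N)
v[1] = 1.0                       # localized kick launches a pulse

def accel(x):
    r = np.diff(x)               # bond elongations r_n = x_{n+1} - x_n
    f = a * np.exp(-b * r)       # nonlinear (exponential) bond force
    acc = np.zeros_like(x)
    acc[1:-1] = (f[:-1] - f[1:]) / m   # interior sites; ends held fixed
    return acc

acc = accel(x)
for s in range(1, steps + 1):    # velocity-Verlet integration
    x += dt * v + 0.5 * dt * dt * acc
    new_acc = accel(x)
    v += 0.5 * dt * (acc + new_acc)
    acc = new_acc
    if s % 1250 == 0:
        # The velocity peak travels as a coherent, non-dispersing pulse.
        print(f"t = {s * dt:5.1f}: pulse centered near site {np.argmax(np.abs(v))}")
```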
Fang, Chen; Li, Chunfei; Cabrerizo, Mercedes; Barreto, Armando; Andrian, Jean; Rishe, Naphtali; Loewenstein, David; Duara, Ranjan; Adjouadi, Malek
2018-04-12
Over the past few years, several approaches have been proposed to assist in the early diagnosis of Alzheimer's disease (AD) and its prodromal stage of mild cognitive impairment (MCI). Using multimodal biomarkers for this high-dimensional classification problem, the widely used algorithms include Support Vector Machines (SVM), Sparse Representation-based classification (SRC), Deep Belief Networks (DBN) and Random Forest (RF). These widely used algorithms continue to yield unsatisfactory performance for delineating the MCI participants from the cognitively normal control (CN) group. A novel Gaussian discriminant analysis-based algorithm is thus introduced to achieve a more effective and accurate classification performance than the aforementioned state-of-the-art algorithms. This study makes use of magnetic resonance imaging (MRI) data uniquely as input to two separate high-dimensional decision spaces that reflect the structural measures of the two brain hemispheres. The data used include 190 CN, 305 MCI and 133 AD subjects as part of the AD Big Data DREAM Challenge #1. Using 80% data for a 10-fold cross-validation, the proposed algorithm achieved an average F1 score of 95.89% and an accuracy of 96.54% for discriminating AD from CN; and more importantly, an average F1 score of 92.08% and an accuracy of 90.26% for discriminating MCI from CN. Then, a true test was implemented on the remaining 20% held-out test data. For discriminating MCI from CN, an accuracy of 80.61%, a sensitivity of 81.97% and a specificity of 78.38% were obtained. These results show significant improvement over existing algorithms for discriminating the subtle differences between MCI participants and the CN group.
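As a baseline sketch of Gaussian discriminant classification on high-dimensional structural features, the snippet below uses scikit-learn's quadratic discriminant analysis on synthetic data; it is a generic stand-in, and neither the paper's novel algorithm nor its two separate hemispheric decision spaces are reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
# Synthetic stand-ins for MRI structural measures (not the DREAM data).
n_cn, n_mci, d = 190, 305, 40
X_cn = rng.normal(0.0, 1.0, size=(n_cn, d))
X_mci = rng.normal(0.3, 1.1, size=(n_mci, d))   # subtle group difference
X = np.vstack([X_cn, X_mci])
y = np.array([0] * n_cn + [1] * n_mci)

# PCA guards the class-covariance estimates against high dimensionality
# before the Gaussian (quadratic) discriminant is fit.
model = make_pipeline(PCA(n_components=10), QuadraticDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=10, scoring='f1')
print(f"10-fold CV F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```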
NASA Technical Reports Server (NTRS)
Chung, T. J. (Editor); Karr, Gerald R. (Editor)
1989-01-01
Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.
Cosmology and the large-mass problem of the five-dimensional Kaluza-Klein theory
NASA Astrophysics Data System (ADS)
Lukács, B.; Pacher, T.
1985-12-01
It is shown that in five-dimensional Kaluza-Klein theories the large-mass problem leads to a circulus vitiosus: the huge present e^2/G value produces the large-mass problem, which restricts the ratio e^2/Gm^2 to the order of unity, in contradiction with the present value of 10^40 for elementary particles.
Femtosecond Pulse Characterization as Applied to One-Dimensional Photonic Band Edge Structures
NASA Technical Reports Server (NTRS)
Fork, Richard L.; Gamble, Lisa J.; Diffey, William M.
1999-01-01
The ability to control the group velocity and phase of an optical pulse is important to many currently active areas of research. Electronically addressable one-dimensional photonic crystals are an attractive candidate for achieving this control. This report details work done toward the characterization of photonic crystals and improvement of the characterization technique. As part of the work, the spectral dependence of the group delay imparted by a GaAs/AlAs photonic crystal was characterized. Also, a first-generation electrically addressable photonic crystal was tested for the ability to electronically control the group delay. The measurement technique, using 100-femtosecond continuum pulses, was improved to yield high spectral resolution (1.7 nanometers) concurrently with high temporal resolution (tens of femtoseconds). Conclusions and recommendations based upon the work done are also presented.
Unsteady, one-dimensional gas dynamics computations using a TVD type sequential solver
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1992-01-01
The efficacy of high-resolution convection schemes in resolving sharp gradients in unsteady, one-dimensional flows is examined using the TVD concept based on a sequential solution algorithm. Two unsteady flow problems are considered: the interaction of the various waves in a shock tube with closed reflecting ends, and the unsteady gas dynamics in a tube with closed ends subject to an initial pressure perturbation. It is concluded that high-accuracy convection schemes in a sequential solution framework are capable of resolving discontinuities in unsteady flows involving complex gas dynamics. However, a sufficient amount of dissipation is required to suppress oscillations near discontinuities in the sequential approach, which leads to smearing of the solution profiles.
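A minimal illustration of the TVD idea on a scalar problem: a minmod-limited MUSCL reconstruction for linear advection keeps a square pulse essentially free of spurious over- and undershoots. This generic sketch is not the paper's sequential Navier-Stokes solver:

```python
import numpy as np

def minmod(p, q):
    """Minmod limiter: zero at extrema, otherwise the smaller slope."""
    return np.where(p * q > 0.0, np.where(np.abs(p) < np.abs(q), p, q), 0.0)

# Linear advection u_t + a u_x = 0 of a square pulse on a periodic grid.
N, a, cfl = 200, 1.0, 0.4
dx = 1.0 / N
dt = cfl * dx / a
xc = (np.arange(N) + 0.5) * dx
u = np.where((xc > 0.3) & (xc < 0.5), 1.0, 0.0)

for _ in range(250):
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * slope            # upwind (a > 0) value at face i+1/2
    flux = a * u_face
    u = u - dt / dx * (flux - np.roll(flux, 1))

# The limited slopes keep the advected profile essentially within [0, 1].
print(f"min = {u.min():.4f}, max = {u.max():.4f}")
```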
Teaching Point-Group Symmetry with Three-Dimensional Models
ERIC Educational Resources Information Center
Flint, Edward B.
2011-01-01
Three tools for teaching symmetry in the context of an upper-level undergraduate or introductory graduate course on the chemical applications of group theory are presented. The first is a collection of objects that have the symmetries of all the low-symmetry and high-symmetry point groups and the point groups with rotational symmetries from 2-fold…
A fast numerical method for the valuation of American lookback put options
NASA Astrophysics Data System (ADS)
Song, Haiming; Zhang, Qi; Zhang, Ran
2015-10-01
A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it as a two-dimensional linear complementarity problem (LCP) on an unbounded domain. The numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. The variational inequality (VI) form corresponding to the one-dimensional bounded LCP is then derived. The resulting bounded VI is discretized by a finite element method. Meanwhile, the stability of the semi-discrete solution and the symmetric positive definiteness of the fully discrete matrix are established for the bounded VI. The discretized VI is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.
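For readers unfamiliar with how a discretized LCP of this kind is solved, here is a minimal sketch using projected SOR, a simpler classical relative of the projection and contraction method named in the abstract; the matrix and obstacle below are toy stand-ins, not the paper's lookback-option discretization:

```python
import numpy as np

def psor(A, b, g, omega=1.2, tol=1e-10, maxit=10000):
    """Projected SOR for the LCP: A u >= b, u >= g, (u - g)^T (A u - b) = 0.
    A should be symmetric positive definite, as for matrices arising from
    FEM/FD discretizations of variational inequalities."""
    u = np.maximum(g, 0.0).astype(float)
    for _ in range(maxit):
        u_old = u.copy()
        for i in range(len(b)):
            r = b[i] - A[i] @ u            # residual using latest values
            u[i] = max(g[i], u[i] + omega * r / A[i, i])
        if np.linalg.norm(u - u_old, np.inf) < tol:
            break
    return u

# toy obstacle problem: -u'' >= 0 against a parabolic obstacle g on [0, 1]
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1.0 - h, n)
g = 0.5 - 4.0 * (x - 0.5) ** 2
u = psor(A, np.zeros(n), g)
print("contact set size:", int(np.sum(np.isclose(u, g))))
```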
Ferrando, Albert; Zacarés, Mario; García-March, Miguel-Angel; Monsoriu, Juan A; de Córdoba, Pedro Fernández
2005-09-16
Using group theory arguments and numerical simulations, we demonstrate the possibility of changing the vorticity or topological charge of an individual vortex by means of the action of a system possessing a discrete rotational symmetry of finite order. We establish on theoretical grounds a "transmutation pass" determining the conditions for this phenomenon to occur and numerically analyze it in the context of two-dimensional optical lattices. An analogous approach is applicable to the problems of Bose-Einstein condensates in periodic potentials.
Energy Decay and Boundary Control for Distributed Parameter Systems with Viscoelastic Damping
1989-07-24
Topics addressed include the evolution variational inequality for a rigid visco-plastic (Bingham) fluid, numerical and theoretical analysis of a parabolic-elliptic interface problem (submitted to Quart. Appl. Math.), and exponential stabilization of Volterra integrodifferential equations (W. Desch and R. K. Miller).
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way and to view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and arrive at the desired parameter set more quickly.
Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores
NASA Astrophysics Data System (ADS)
Schindler, Jan-Torge; Green, Elizabeth M.; Arnett, W. David
2017-10-01
Asteroseismology of non-radial pulsations in hot B subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high-precision light curves allow for the analysis of pressure- and gravity-mode pulsations to probe the structure of sdB stars deep into the convective core. Such asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches to convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and of entrainment at the convective boundary. Unfortunately, no consistent algorithm for convective mixing exists to solve the problem, introducing uncertainties into estimates of stellar ages. Three-dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In the search for a consistent prescription of convection in one-dimensional stellar evolution models, guidance from three-dimensional simulations and asteroseismological results is indispensable.
Wave Phenomena in an Acoustic Resonant Chamber
ERIC Educational Resources Information Center
Smith, Mary E.; And Others
1974-01-01
Discusses the design and operation of a high Q acoustical resonant chamber which can be used to demonstrate wave phenomena such as three-dimensional normal modes, Q values, densities of states, changes in the speed of sound, Fourier decomposition, damped harmonic oscillations, sound-absorbing properties, and perturbation and scattering problems.…
Stanford automatic photogrammetry research
NASA Technical Reports Server (NTRS)
Quam, L. H.; Hannah, M. J.
1974-01-01
A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.
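A hedged sketch of the core technique, correlation matching of small areas, is given below: normalized cross-correlation of a patch along an epipolar row, with synthetic arrays standing in for the digitized stereo pairs:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_row(left, right, row, col, half=5, max_disp=30):
    """For a patch in the left image, find the best-matching column in the
    right image along the same row (rectified epipolar geometry assumed)."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_score, best_col = -1.0, col
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score

# synthetic test: the right image is the left image shifted by 7 pixels
rng = np.random.default_rng(0)
left = rng.random((100, 100))
right = np.roll(left, -7, axis=1)
col, score = match_along_row(left, right, row=50, col=60)
print("disparity:", 60 - col, "score:", round(score, 3))  # expect 7, ~1.0
```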
ERIC Educational Resources Information Center
Dunn, Tracie
2009-01-01
High-school students often tie the definition of art to a two-dimensional surface, obstructing possible solutions to visual problem-solving and restricting creative thinking. In this article, the author describes a project that inspired students to view arts as a social event: installation art. From a contemporary point of view, installation art…
Big Data Goes Personal: Privacy and Social Challenges
ERIC Educational Resources Information Center
Bonomi, Luca
2015-01-01
The Big Data phenomenon is posing new challenges in our modern society. In addition to requiring information systems to effectively manage high-dimensional and complex data, the privacy and social implications associated with the data collection, data analytics, and service requirements create new important research problems. First, the high…
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Lu, Dianchen; Wang, Jun
2017-07-01
In this paper, we extend the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional-order partial differential equations (PDEs) can be solved effectively. The most distinctive aspect of this method is that no prescribed assumptions are required, the computational effort is greatly reduced, and round-off errors are avoided. We apply the proposed scheme to some initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional-order problems occurring in engineering, physics, etc. Numerical results are obtained for verification and demonstration purposes using Mathematica software.
An analysis of random projection for changeable and privacy-preserving biometric verification.
Wang, Yongjin; Plataniotis, Konstantinos N
2010-10-01
Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
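A minimal sketch of the two key properties analyzed, similarity preservation and changeability, assuming only the i.i.d. Gaussian projection described in the abstract (the data and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

d, k, n = 4096, 256, 100               # original dim, projected dim, samples
x = rng.random((n, d))                 # stand-ins for vectorized face images

# random projection matrix with i.i.d. Gaussian entries; the 1/sqrt(k)
# scaling makes squared distances approximately preserved in expectation
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
y = x @ R

# changeability: a different seed (a new "key") yields an entirely
# different template from the same biometric vector
R2 = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
y2 = x @ R2

# similarity preservation: pairwise distances survive the projection
i, j = 3, 17
orig = np.linalg.norm(x[i] - x[j])
proj = np.linalg.norm(y[i] - y[j])
print(f"original distance {orig:.2f}, projected {proj:.2f}")
print("template correlation across keys:", np.corrcoef(y[0], y2[0])[0, 1])
```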
Understanding 3D human torso shape via manifold clustering
NASA Astrophysics Data System (ADS)
Li, Sheng; Li, Peng; Fu, Yun
2013-05-01
Discovering the variations in human torso shape plays a key role in many design-oriented applications, such as suit designing. With recent advances in 3D surface imaging technologies, people can obtain 3D human torso data that provide more information than traditional measurements. However, how to find different human shapes from 3D torso data is still an open problem. In this paper, we propose to use spectral clustering approach on torso manifold to address this problem. We first represent high-dimensional torso data in a low-dimensional space using manifold learning algorithm. Then the spectral clustering method is performed to get several disjoint clusters. Experimental results show that the clusters discovered by our approach can describe the discrepancies in both genders and human shapes, and our approach achieves better performance than the compared clustering method.
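A hedged sketch of the two-stage pipeline, manifold embedding followed by spectral clustering, using off-the-shelf scikit-learn components and synthetic stand-ins for the torso scans (the paper's specific manifold learning algorithm is not stated here, so Isomap is an assumption):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import Isomap
from sklearn.cluster import SpectralClustering

# stand-in for vectorized 3D torso scans: high-dimensional points that
# actually live near a low-dimensional structure with a few shape groups
X, true_labels = make_blobs(n_samples=300, n_features=500, centers=4,
                            cluster_std=5.0, random_state=0)

# step 1: embed the high-dimensional data in a low-dimensional space
Z = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# step 2: spectral clustering on the embedded points
labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            random_state=0).fit_predict(Z)
print("cluster sizes:", np.bincount(labels))
```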
Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery
NASA Astrophysics Data System (ADS)
Prasad, S.; Yu, Z.
The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimate of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF, which can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach, based on the use of a computer-generated hologram and projective measurements, to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem, with potential applications to astronomical imaging and 3D space-debris localization.
Caprioglio, Alberto; Siani, Lea; Caprioglio, Claudia
2007-01-01
The permanent maxillary canine has a high incidence of impaction. In the clinical treatment of impaction, the first problem is diagnosis and localization. The new diagnostic 3-dimensional systems shown in this article provide valid support in understanding anatomic connections and planning the movements needed for orthodontic correction. Thus, the clinician can reduce the incidence of iatrogenic damage of adjacent structures. This article reviews several biomedical systems for guided eruption of palatally impacted canines and discusses a new device for guided eruption of the surgically disimpacted tooth. This device, called Easy Cuspid, is designed to reduce recognized problems with reaction forces through a simple method. A clinical case of bilateral impaction of the permanent maxillary canines shows the application of the diagnostic method and the biomechanical system, Easy Cuspid.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
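As a much-simplified illustration of the idea, not the MVR package's actual procedure, the sketch below shrinks variable-wise variances toward a pooled value and builds t-like statistics from them; the similarity-based local pooling and joint moment regularization of the paper are omitted:

```python
import numpy as np

def shrink_variances(X, lam=0.5):
    """Shrink per-variable sample variances toward their pooled mean.
    lam in [0, 1] sets the intensity; lam=0 recovers the usual per-variable
    estimates, lam=1 a fully pooled estimate."""
    s2 = X.var(axis=0, ddof=1)
    return (1.0 - lam) * s2 + lam * s2.mean()

def regularized_t(Xa, Xb, lam=0.5):
    """t-like statistic with shrunken variance estimates for two groups."""
    va = shrink_variances(Xa, lam) / Xa.shape[0]
    vb = shrink_variances(Xb, lam) / Xb.shape[0]
    return (Xa.mean(axis=0) - Xb.mean(axis=0)) / np.sqrt(va + vb)

# p >> n: 5000 variables, 6 samples per group, 50 truly shifted variables
rng = np.random.default_rng(1)
Xa = rng.normal(size=(6, 5000))
Xb = rng.normal(size=(6, 5000))
Xb[:, :50] += 2.0
t = regularized_t(Xa, Xb)
print("hits among top 50 scores:", np.sum(np.argsort(np.abs(t))[-50:] < 50))
```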
Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi
2015-08-01
Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery is converting two-dimensional (2D) images into 3D images and rearranging depth perception. 3D imaging may remove the need for depth-perception rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcomes of 3D high-definition (HD) and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether LRP under HD-resolution 3D imaging is superior to that under HD-resolution 2D imaging in perioperative outcome, feasibility, and fatigue. One hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was time to perform vesicourethral anastomosis (VUA), which is technically demanding and involves a number of the technical difficulties encountered in laparoscopic surgery. VUA time was not significantly shorter in the 3D group (26.7 min, mean) compared with the 2D group (30.1 min, mean) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors of shorter VUA times (p = 0.000, p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversion from 3D to 2D or from LRP to open RP was observed. Fatigue was evaluated by a simulation sickness questionnaire and critical flicker frequency; results did not differ between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D imaging systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
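As a hedged illustration of bounded-variation regularization applied to denoising blocky images (using the standard Chambolle TV solver from scikit-image rather than the paper's primal-dual algorithm):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# blocky test image with very high noise, as in the paper's experiments
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                  # piecewise-constant "blocky" scene
noisy = img + rng.normal(0.0, 0.5, img.shape)

# total-variation (BV-seminorm) regularized denoising; a larger weight
# means stronger regularization and a flatter, more piecewise-constant result
clean = denoise_tv_chambolle(noisy, weight=0.3)

print("noisy    MSE:", np.mean((noisy - img) ** 2).round(4))
print("denoised MSE:", np.mean((clean - img) ** 2).round(4))
```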
A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems
NASA Astrophysics Data System (ADS)
Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix
2018-03-01
We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.
Photonics for aerospace sensors
NASA Astrophysics Data System (ADS)
Pellegrino, John; Adler, Eric D.; Filipov, Andree N.; Harrison, Lorna J.; van der Gracht, Joseph; Smith, Dale J.; Tayag, Tristan J.; Viveiros, Edward A.
1992-11-01
The maturation in the state-of-the-art of optical components is enabling increased applications for the technology. Most notable is the ever-expanding market for fiber optic data and communications links, familiar in both commercial and military markets. The inherent properties of optics and photonics, however, have suggested that components and processors may be designed that offer advantages over more commonly considered digital approaches for a variety of airborne sensor and signal processing applications. Various academic, industrial, and governmental research groups have been actively investigating and exploiting these properties of high bandwidth, large degree of parallelism in computation (e.g., processing in parallel over a two-dimensional field), and interconnectivity, and have succeeded in advancing the technology to the stage of systems demonstration. Such advantages as computational throughput and low operating power consumption are highly attractive for many computationally intensive problems. This review covers the key devices necessary for optical signal and image processors, some of the system application demonstration programs currently in progress, and active research directions for the implementation of next-generation architectures.
Bloch, Edward; Uddin, Nabil; Gannon, Laura; Rantell, Khadija; Jain, Saurabh
2015-01-01
Background Stereopsis is believed to be advantageous for surgical tasks that require precise hand-eye coordination. We investigated the effects of short-term and long-term absence of stereopsis on motor task performance in three-dimensional (3D) and two-dimensional (2D) viewing conditions. Methods 30 participants with normal stereopsis and 15 participants with absent stereopsis performed a simulated surgical task both in free space under direct vision (3D) and via a monitor (2D), with both eyes open and one eye covered in each condition. Results The stereo-normal group scored higher, on average, than the stereo-absent group with both eyes open under direct vision (p<0.001). Both groups performed comparably in monocular and binocular monitor viewing conditions (p=0.579). Conclusions High-grade stereopsis confers an advantage when performing a fine motor task under direct vision. However, stereopsis does not appear advantageous to task performance under 2D viewing conditions, such as in video-assisted surgery. PMID:25185439
State-of-charge estimation in lithium-ion batteries: A particle filter approach
NASA Astrophysics Data System (ADS)
Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.
2016-11-01
The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
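A minimal bootstrap particle filter on a scalar toy model illustrates the weighting and resampling machinery and the degeneracy issue the abstract refers to; the P2D model, the independent time/space sweeps, and the 'tether' particle of the paper are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

def step(x):                        # toy nonlinear state transition
    return 0.98 * x + 0.1 * np.sin(x)

def observe(x):                     # toy nonlinear measurement map
    return x + 0.05 * x ** 2

T, N, q, r = 50, 500, 0.02, 0.1     # steps, particles, process/obs noise
x_true, estimates = 1.0, []
particles = rng.normal(1.0, 0.5, N)
weights = np.full(N, 1.0 / N)

for t in range(T):
    x_true = step(x_true) + rng.normal(0.0, q)
    y = observe(x_true) + rng.normal(0.0, r)
    # propagate particles, then reweight by the measurement likelihood
    particles = step(particles) + rng.normal(0.0, q, N)
    weights *= np.exp(-0.5 * ((y - observe(particles)) / r) ** 2)
    weights /= weights.sum()
    # systematic resampling when the effective sample size collapses --
    # the standard cure for the degeneracy problem mentioned above
    if 1.0 / np.sum(weights ** 2) < N / 2:
        u = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
    estimates.append(np.sum(weights * particles))

print("final truth %.3f, estimate %.3f" % (x_true, estimates[-1]))
```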
Djordjevic, Ivan B
2011-08-15
In addition to capacity, future high-speed optical transport networks will also be constrained by energy consumption. In order to address the capacity and energy constraints simultaneously, in this paper we propose the use of energy-efficient hybrid D-dimensional signaling (D>4), employing all available degrees of freedom for conveying information over a single carrier, including amplitude, phase, polarization, and orbital angular momentum (OAM). Given that the OAM eigenstates, associated with the azimuthal phase dependence of the complex electric field, are orthogonal, they can be used as basis functions for multidimensional signaling. Since information capacity is a linear function of the number of dimensions, D-dimensional signal constellations can significantly improve the overall optical channel capacity. The energy-efficiency problem is solved, in this paper, by properly designing the D-dimensional signal constellation such that the mutual information is maximized while taking the energy constraint into account. We demonstrate the high potential of the proposed energy-efficient hybrid D-dimensional coded-modulation scheme by Monte Carlo simulations. © 2011 Optical Society of America
Applications of an exponential finite difference technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.; Keith, T.G. Jr.
1988-07-01
An exponential finite difference scheme first presented by Bhattacharya for one dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one dimensional cylindrical coordinates and was applied to two and three dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one and two dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available or to results obtained by other numerical methods.
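A commonly quoted form of the exponential scheme multiplies the nodal value by an exponential of the scaled discrete Laplacian; the sketch below assumes that form (a hedged illustration; the report itself should be consulted for the exact algorithm) and checks that a 1D rod relaxes to the expected linear steady state:

```python
import numpy as np

def exp_fd_step(u, r):
    """One time step of an exponential finite-difference scheme of the form
    u_i^{n+1} = u_i^n * exp(r * (u_{i-1} - 2 u_i + u_{i+1}) / u_i),
    with r = alpha*dt/dx**2; interior points only, fixed-value ends.
    Requires u > 0 (e.g., absolute temperatures). For small arguments the
    exponential linearizes to the classical explicit (FTCS) update."""
    lap = u[:-2] - 2.0 * u[1:-1] + u[2:]
    out = u.copy()
    out[1:-1] = u[1:-1] * np.exp(r * lap / u[1:-1])
    return out

# 1D rod, ends held at 300 K and 400 K, uniform 350 K initial condition
n, r = 51, 0.25
u = np.full(n, 350.0)
u[0], u[-1] = 300.0, 400.0
for _ in range(2000):
    u = exp_fd_step(u, r)

# the steady state should approach the linear profile between the end values
print("max deviation from linear:", np.abs(u - np.linspace(300, 400, n)).max())
```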
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered to improve the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed, with different classifier structures, and are evaluated by comparison with baseline methods that use sparse representations of features or no feature selection. Statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values obtained on the test patterns for each approach, demonstrates some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
Optical Properties and Wave Propagation in Semiconductor-Based Two-Dimensional Photonic Crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agio, Mario
2002-12-31
This work is a theoretical investigation of the physical properties of semiconductor-based two-dimensional photonic crystals, in particular systems embedded in planar dielectric waveguides (GaAs/AlGaAs and GaInAsP/InP heterostructures, and self-standing membranes) or based on macro-porous silicon. The photonic band structure of photonic crystals and photonic-crystal slabs is numerically computed and the associated light-line problem is discussed, which points to the issue of intrinsic out-of-plane diffraction losses for the photonic bands lying above the light line. The photonic states are then classified by the group-theory formalism: each mode is related to an irreducible representation of the corresponding small point group. The optical properties are investigated by means of the scattering-matrix method, which numerically implements a variable-angle-reflectance experiment; comparison with experiments is also provided. The analysis of surface reflectance proves the existence of selection rules for coupling an external wave to a certain photonic mode. Such rules can be derived directly from symmetry considerations. Lastly, the control of wave propagation in weak-index-contrast photonic-crystal slabs is tackled with a view to designing building blocks for photonic integrated circuits. The proposed designs are found to comply with the major requirements of low-loss propagation and high, single-mode transmission. These notions are then collected to model a photonic-crystal combiner for an integrated multi-wavelength-source laser.
WFIRST: Microlensing Analysis Data Challenge
NASA Astrophysics Data System (ADS)
Street, Rachel; WFIRST Microlensing Science Investigation Team
2018-01-01
WFIRST will produce thousands of high-cadence, high-photometric-precision lightcurves of microlensing events, from which a wealth of planetary and stellar systems will be discovered. However, the analysis of such lightcurves has historically been very time consuming and expensive in both labor and computing facilities. This poses a potential bottleneck to deriving the full science potential of the WFIRST mission. To address this problem, the WFIRST Microlensing Science Investigation Team is designing a series of data challenges to stimulate research on outstanding problems of microlensing analysis. These range from the classification and modeling of triple-lens events to methods to efficiently yet thoroughly search a high-dimensional parameter space for the best-fitting models.
High frequency vibration analysis by the complex envelope vectorization.
Giannini, O; Carcaterra, A; Sestieri, A
2007-06-01
The complex envelope displacement analysis (CEDA) is a procedure for solving high-frequency vibration and vibro-acoustic problems that provides the envelope of the physical solution. CEDA is based on a variable transformation mapping the high-frequency oscillations into signals of low-frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, highlighting the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation framework is developed for the finite- and infinite-time-horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite-dimensional Hilbert space. The schemes included in the framework yield finite-dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
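In the finite-dimensional approximations such a framework produces, the optimal feedback gains come from a standard backward Riccati recursion; a minimal sketch for a toy discrete-time system (the matrices are illustrative, not the paper's beam model):

```python
import numpy as np

def dlqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR: backward Riccati recursion giving
    the time-varying state-feedback gains K_t, with u_t = -K_t x_t."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P    # gains in forward time, plus cost-to-go matrix

# toy finite-dimensional system (e.g., a modal truncation of a flexible beam)
A = np.array([[1.0, 0.1], [-0.5, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
gains, P = dlqr_gains(A, B, Q, R, horizon=200)
print("near-stationary gain:", gains[0])  # approximates the infinite-horizon K
```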
NASA Astrophysics Data System (ADS)
Bodin, Jacques
2015-03-01
In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
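As a simplified one-dimensional illustration of the TDRW building block (not the paper's full multi-dimensional algorithms), the sketch below samples inverse-Gaussian first-passage times cell by cell through a piecewise-homogeneous medium:

```python
import numpy as np

rng = np.random.default_rng(3)

def tdrw_travel_times(dx, v, D, n):
    """Sample travel times for n particles crossing a homogeneous cell of
    length dx with advection v and dispersion D. The first-passage time of
    1D advection-dispersion is inverse-Gaussian with mean dx/v and shape
    dx**2 / (2*D) -- a common TDRW building block, used here as a hedged
    stand-in for the paper's algorithms."""
    return rng.wald(dx / v, dx**2 / (2.0 * D), size=n)

# piecewise-homogeneous medium: sum travel times across consecutive cells
cells = [dict(dx=1.0, v=0.5, D=0.01), dict(dx=2.0, v=0.5, D=0.1)]
n = 100_000
t = sum(tdrw_travel_times(c["dx"], c["v"], c["D"], n) for c in cells)
print("mean arrival %.2f (advective estimate %.2f)" %
      (t.mean(), sum(c["dx"] / c["v"] for c in cells)))
```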
NASA Astrophysics Data System (ADS)
Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.
1986-12-01
A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.
Zhong, Lei; Yang, Kai; Guan, Ruiteng; Wang, Liangbin; Wang, Shuanjin; Han, Dongmei; Xiao, Min; Meng, Yuezhong
2017-12-20
Rechargeable lithium-sulfur (Li-S) batteries are expected to serve as new-generation electrical energy storage, owing to their high theoretical energy density, cost effectiveness, and eco-friendliness. But Li-S batteries still have some problems for practical application, such as low sulfur utilization and unsatisfactory capacity retention. Herein, we designed and fabricated a foldable and compositionally heterogeneous three-dimensional sulfur cathode with an integrated sandwich structure. The electrical conductivity of the cathode is facilitated by carbons of three different dimensionalities, in which short-distance and long-distance pathways for electrons are provided by zero-dimensional ketjen black (KB), one-dimensional activated carbon fiber (ACF), and two-dimensional graphene (G). The resultant three-dimensional sulfur cathode (T-AKG/KB@S) with an areal sulfur loading of 2 mg cm⁻² exhibits a high initial specific capacity, superior rate performance, and a reversible discharge capacity of up to 726 mAh g⁻¹ at 3.6 mA cm⁻², with an inappreciable capacity fading rate of 0.0044% per cycle after 500 cycles. Moreover, the cathode with a high areal sulfur loading of 8 mg cm⁻² also delivers a reversible discharge capacity of 938 mAh g⁻¹ at 0.71 mA cm⁻², with a capacity fading rate of 0.15% per cycle and a Coulombic efficiency of almost 100% after 50 cycles.
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur both in the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite-dimensional spaces and an approximate problem posed on finite-dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from it are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second the Ginzburg-Landau equations of superconductivity, and the third the Navier-Stokes equations for incompressible, viscous flows.
Shen, Jiaqi; Zhou, Qiao; Liu, Yue; Luo, Runlan; Tan, Bijun; Li, Guangsen
2016-08-23
Iron-deficiency anemia (IDA) is a global health problem and a common medical condition seen in everyday clinical practice. Two-dimensional speckle-tracking echocardiography (2D-STE) has been reported to be very useful in evaluating left atrial (LA) function, as well as left ventricular (LV) function. The aim of our study is to evaluate LA function in patients with IDA by 2D-STE. Sixty-five patients with IDA were selected and divided into two groups according to hemoglobin level: group B (Hb > 90 g/L) and group C (Hb 60-90 g/L). Another 30 healthy people were selected as control group A. Conventional echocardiography parameters, such as left atrial diameter (LAD), peak E and A velocities of mitral inflow (E, A), E/A, end-diastolic thickness of the ventricular septum (IVSTd), end-diastolic thickness of the LV posterior wall (PWTd), and left ventricular end-diastolic dimension (LVDd), were obtained for the three groups. Left atrial minimum volume (LAVmin), left atrial pre-atrial-contraction volume (LAVp), and left atrial maximum volume (LAVmax) were measured by Simpson's rule, and left atrial active ejection fraction (LAAEF) and left atrial passive ejection fraction (LAPEF) were calculated from them. Two-dimensional images were acquired from the apical four-chamber and two-chamber views and stored for offline analysis. The global peak atrial longitudinal strain and strain rate during LV systole (GLSs, GLSRs), as well as the early and late LV diastolic strain rates (GLSRe, GLSRa), were acquired for each LA segment, from the basal to the top segment of the LA, by 2D-STE. There were no differences between group B and group A (all P > 0.05). The LAAEF and GLSRa were significantly higher in group C than in groups A and B (all P < 0.01). The LAPEF, GLSs, GLSRs, and GLSRe were significantly lower in group C than in groups A and B (all P < 0.01). 2D-STE can evaluate LA function in patients with IDA.
The High School Dropout Problem: Perspectives of Teachers and Principals
ERIC Educational Resources Information Center
Bridgeland, John M.; Dilulio, John J., Jr.; Balfanz, Robert
2009-01-01
To better understand the views of teachers and administrators on the high school dropout problem, focus groups and nationally representative surveys were conducted of high school teachers and principals. A focus group of superintendents and school board members was also included. To help interpret the results, the authors convened a colloquium…
2-dimensional implicit hydrodynamics on adaptive grids
NASA Astrophysics Data System (ADS)
Stökl, A.; Dorfi, E. A.
2007-12-01
We present a numerical scheme for two-dimensional hydrodynamics computations using a 2D adaptive grid together with an implicit discretization. The combination of these techniques has offered favorable numerical properties applicable to a variety of one-dimensional astrophysical problems which motivated us to generalize this approach for two-dimensional applications. Due to the different topological nature of 2D grids compared to 1D problems, grid adaptivity has to avoid severe grid distortions which necessitates additional smoothing parameters to be included into the formulation of a 2D adaptive grid. The concept of adaptivity is described in detail and several test computations demonstrate the effectivity of smoothing. The coupled solution of this grid equation together with the equations of hydrodynamics is illustrated by computation of a 2D shock tube problem.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
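A minimal real-number-encoded GA of the kind described, with tournament selection, blend crossover, and Gaussian mutation, applied to a simple multimodal test function (the specific operators are generic choices, not necessarily those of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

def real_ga(f, bounds, pop_size=40, gens=100, sigma=0.1, elite=2):
    """Minimal real-number-encoded GA minimizing f on a box: tournament
    selection, blend crossover, Gaussian mutation, and elitism."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([f(x) for x in pop])
        order = np.argsort(fit)
        new = [pop[i].copy() for i in order[:elite]]      # keep the best
        while len(new) < pop_size:
            # tournament selection: pick two random candidates, keep the fitter
            idx = rng.integers(pop_size, size=(2, 2))
            a = pop[idx[0][np.argmin(fit[idx[0]])]]
            b = pop[idx[1][np.argmin(fit[idx[1]])]]
            w = rng.random(len(lo))                       # blend crossover
            child = w * a + (1.0 - w) * b
            child += rng.normal(0.0, sigma, len(lo)) * (hi - lo)  # mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([f(x) for x in pop])
    return pop[fit.argmin()], fit.min()

# multimodal "hill climbing" test problem in 2D
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + np.sin(5 * x[0]) ** 2
best, val = real_ga(f, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
print("best point:", best.round(3), "value:", round(val, 4))
```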
High-Order Methods for Incompressible Fluid Flow
NASA Astrophysics Data System (ADS)
Deville, M. O.; Fischer, P. F.; Mund, E. H.
2002-08-01
High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.
NASA Astrophysics Data System (ADS)
Başkal, Sibel
2015-11-01
This book explains the Lorentz mathematical group in a language familiar to physicists. While the three-dimensional rotation group is one of the standard mathematical tools in physics, the Lorentz group of the four-dimensional Minkowski space is still very strange to most present-day physicists. It plays an essential role in understanding particles moving at close to light speed and is becoming the essential language for quantum optics, classical optics, and information science. The book is based on papers and books published by the authors on the representations of the Lorentz group based on harmonic oscillators and their applications to high-energy physics and to Wigner functions applicable to quantum optics. It also covers the two-by-two representations of the Lorentz group applicable to ray optics, including cavity, multilayer and lens optics, as well as representations of the Lorentz group applicable to Stokes parameters and the Poincaré sphere on polarization optics.
Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J
2017-01-01
Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.
Perceptual integration of kinematic components in the recognition of emotional facial expressions.
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
2018-04-01
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
Highly Parallel Alternating Directions Algorithm for Time Dependent Problems
NASA Astrophysics Data System (ADS)
Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.
2011-11-01
In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems, in which the second-order derivative with respect to each space variable is treated implicitly while the others are treated explicitly at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using standard MPI functions and tested on two modern parallel computer systems. The numerical tests performed demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
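The payoff of direction splitting is that each implicit half step reduces to batches of independent tridiagonal solves; below is a hedged sketch for the simpler 2D heat equation (Peaceman-Rachford ADI, standing in for the paper's Stokes scheme):

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_heat_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with zero
    Dirichlet boundaries; r = dt / (2*h**2). Each half step is implicit in
    a single direction, so the 2D solve collapses into batches of
    independent one-dimensional tridiagonal problems -- the structure that
    makes direction splitting so parallel-friendly."""
    n = u.shape[0] - 2
    ab = np.zeros((3, n))                     # banded form of (I - r * d2)
    ab[0, 1:], ab[1, :], ab[2, :-1] = -r, 1 + 2 * r, -r

    # half step 1: implicit in x (axis 0), explicit in y (axis 1)
    rhs = u[1:-1, 1:-1] + r * (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2])
    u[1:-1, 1:-1] = solve_banded((1, 1), ab, rhs)

    # half step 2: implicit in y, explicit in x (work on the transpose)
    rhs = u[1:-1, 1:-1] + r * (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1])
    u[1:-1, 1:-1] = solve_banded((1, 1), ab, rhs.T).T
    return u

# hot square spot diffusing on a 66x66 grid
n = 66
u = np.zeros((n, n))
u[24:40, 24:40] = 1.0
for _ in range(100):
    u = adi_heat_step(u, r=0.5)
print("peak temperature after diffusion: %.4f" % u.max())
```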
QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.
Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy
We propose a novel Rayleigh-quotient-based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of the predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
Unsupervised universal steganalyzer for high-dimensional steganalytic features
NASA Astrophysics Data System (ADS)
Hou, Xiaodan; Zhang, Tao
2016-11-01
The research in developing steganalytic features has been highly successful. These features are extremely powerful when applied to supervised binary classification problems. However, they are incompatible with unsupervised universal steganalysis because the unsupervised method cannot distinguish embedding distortion from varying levels of noises caused by cover variation. This study attempts to alleviate the problem by introducing similarity retrieval of image statistical properties (SRISP), with the specific aim of mitigating the effect of cover variation on the existing steganalytic features. First, cover images with some statistical properties similar to those of a given test image are searched from a retrieval cover database to establish an aided sample set. Then, unsupervised outlier detection is performed on a test set composed of the given test image and its aided sample set to determine the type (cover or stego) of the given test image. Our proposed framework, called SRISP-aided unsupervised outlier detection, requires no training. Thus, it does not suffer from model mismatch mess. Compared with prior unsupervised outlier detectors that do not consider SRISP, the proposed framework not only retains the universality but also exhibits superior performance when applied to high-dimensional steganalytic features.
NASA Astrophysics Data System (ADS)
Eric, L.; Vrugt, J. A.
2010-12-01
Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to enable efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008) and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time of each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for two synthetic mathematical case studies and two real-world examples involving from 10 to 240 parameters. Results for these cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
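The differential-evolution proposal at the heart of DREAM-type samplers is easy to sketch; below is a bare-bones DE-MC sampler on a toy posterior (no multiple-try step, archive sampling, or convergence diagnostics, so this illustrates the proposal mechanism only, not MT-DREAM itself):

```python
import numpy as np

rng = np.random.default_rng(11)

def log_post(x):
    """Toy high-dimensional posterior: a correlated Gaussian."""
    return -0.5 * np.sum(x ** 2) - 0.4 * np.sum(x[:-1] * x[1:])

def de_mc(log_p, d=10, n_chains=20, n_iter=5000):
    """Simplified differential-evolution MCMC: each chain proposes jumps
    built from the difference of two other chains, which automatically
    adapts the proposal scale and orientation to the posterior."""
    gamma = 2.38 / np.sqrt(2 * d)            # standard DE-MC jump factor
    X = rng.normal(size=(n_chains, d))
    logp = np.array([log_p(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2,
                              replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(0, 1e-6, d)
            lp = log_p(prop)
            if np.log(rng.random()) < lp - logp[i]:   # Metropolis accept
                X[i], logp[i] = prop, lp
    return X

samples = de_mc(log_post)
print("chain means ~ 0:", np.abs(samples.mean(axis=0)).max().round(2))
```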
Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem
NASA Astrophysics Data System (ADS)
Man, J.; Li, W.; Zeng, L.; Wu, L.
2015-12-01
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansions to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
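For reference, the EnKF analysis step that PCKF and RAPCKF aim to approximate more cheaply can be written in a few lines; the sketch below uses a linear observation operator and a toy three-state system:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step. X: (n_ens, n_state) forecast ensemble,
    y: observation vector, H: linear observation operator, R: obs covariance.
    The Kalman gain is built from ensemble sample covariances, which is why
    accuracy hinges on a sufficiently large ensemble."""
    n_ens = X.shape[0]
    Xm = X - X.mean(axis=0)
    Y = X @ H.T
    Ym = Y - Y.mean(axis=0)
    C_xy = Xm.T @ Ym / (n_ens - 1)
    C_yy = Ym.T @ Ym / (n_ens - 1) + R
    K = C_xy @ np.linalg.inv(C_yy)
    # perturbed observations keep the analysis ensemble spread consistent
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, n_ens)
    return X + (y_pert - Y) @ K.T

# 3-state toy problem observed through its first two components
n_ens, x_true = 100, np.array([1.0, -2.0, 0.5])
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
R = 0.05 * np.eye(2)
X = x_true + rng.normal(0, 1.0, (n_ens, 3))           # loose prior ensemble
y = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
Xa = enkf_update(X, y, H, R)
print("prior mean:", X.mean(axis=0).round(2),
      "posterior mean:", Xa.mean(axis=0).round(2))
```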
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
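The Expected Improvement acquisition at the core of EGO, TRIKE, and CYCLONE is a closed-form function of the Kriging predictive mean and standard deviation; a minimal sketch follows (in TRIKE, this function would simply be maximized only within the current trust region, whose radius is adapted by the ratio of actual to expected improvement):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Expected improvement of a Gaussian predictive distribution
    N(mu, sigma**2) over the current best observed value f_min
    (minimization convention)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# a high predicted mean with large uncertainty can beat a mediocre sure thing
print(expected_improvement(mu=1.2, sigma=0.01, f_min=1.0))  # ~0: no hope
print(expected_improvement(mu=1.2, sigma=1.00, f_min=1.0))  # sizable EI
```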
NASA Astrophysics Data System (ADS)
Lee, Hyunki; Kim, Min Young; Moon, Jeon Il
2017-12-01
Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation known as the correspondence problem, or 2π-ambiguity problem. Although a sensing method that combines well-known stereo vision and phase measuring profilometry (PMP) techniques has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method that acquires more reliable depth information in a relatively short time. The proposed method efficiently fuses phase and intensity information from two stereo sensors simultaneously, based on a newly defined cost function for the dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of the important hardware and software parameters on measurement performance and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
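A heavily simplified scanline dynamic program in the spirit of the proposed cost function is sketched below: the per-pixel cost fuses a wrapped-phase difference with an intensity difference, and a jump penalty couples neighboring pixels. The weights lam and smooth are illustrative assumptions, not the authors' calibrated cost, and the full backtracking pass is omitted.

    import numpy as np

    def scanline_dp(phase_l, phase_r, int_l, int_r, max_d=32, lam=0.5, smooth=0.1):
        """Rough per-pixel disparity on one scanline via DP cost accumulation."""
        n = len(phase_l)
        cost = np.full((n, max_d), np.inf)
        for d in range(max_d):
            dphi = np.angle(np.exp(1j * (phase_l[d:] - phase_r[:n - d])))  # wrapped diff
            cost[d:, d] = np.abs(dphi) + lam * np.abs(int_l[d:] - int_r[:n - d])
        acc = cost.copy()
        jump = smooth * np.abs(np.arange(max_d)[:, None] - np.arange(max_d)[None, :])
        for i in range(1, n):   # forward accumulation with a disparity-jump penalty
            acc[i] += (acc[i - 1][None, :] + jump).min(axis=1)
        return acc.argmin(axis=1)   # crude readout in place of full backtracking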
Comparative study of high-resolution shock-capturing schemes for a real gas
NASA Technical Reports Server (NTRS)
Montagne, J.-L.; Yee, H. C.; Vinokur, M.
1987-01-01
Recently developed second-order explicit shock-capturing methods, in conjunction with generalized flux-vector splittings and a generalized approximate Riemann solver for a real gas, are studied. The comparisons are made on different one-dimensional Riemann (shock-tube) problems for equilibrium air with various ranges of Mach numbers, densities and pressures. Six different Riemann problems are considered. These tests provide a check on the validity of the generalized formulas, since theoretical prediction of their properties appears to be difficult because of the non-analytical form of the state equation. The numerical results in the supersonic and low-hypersonic regimes indicate that these methods provide good shock-capturing capability and that the shock resolution is only slightly affected by the state equation of equilibrium air. The difference in shock resolution between the various methods varies slightly from one Riemann problem to the other, but the overall accuracy is very similar. For the one-dimensional case, the relative efficiency in terms of operation count for the different methods is within 30%. The main difference between the methods lies in their versatility in being extended to multidimensional problems with efficient implicit solution procedures.
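The generalized flux-vector splittings referenced here follow the standard pattern of splitting the flux by the sign of the characteristic speeds; in the classical ideal-gas (Steger-Warming) setting, where the flux is homogeneous of degree one in the conserved variables, this reads

    F = F^{+} + F^{-}, \qquad F^{\pm} = A^{\pm}U, \qquad
    A^{\pm} = R\,\Lambda^{\pm}R^{-1}, \qquad
    \lambda_i^{\pm} = \tfrac{1}{2}\,(\lambda_i \pm |\lambda_i|),

where R holds the right eigenvectors of the flux Jacobian A = \partial F/\partial U and \Lambda^{\pm} = \mathrm{diag}(\lambda_i^{\pm}); the real-gas generalization replaces the ideal-gas eigensystem with one built on the equilibrium-air state equation.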
NASA Astrophysics Data System (ADS)
Lazarowitz, Reuven; Naim, Raphael
2013-08-01
The cell topic was taught to 9th-grade students in three modes of instruction: (a) "hands-on" students, who constructed three-dimensional cell organelles and macromolecules during the learning process; (b) teacher demonstration of the three-dimensional model of the cell structures; and (c) teaching the cell topic with the regular learning material in an expository mode (which uses one- or two-dimensional cell structures as presented in charts, textbooks, and microscope slides). The sample included 669 9th-grade students from 25 classes taught by 22 Biology teachers. Students were randomly assigned to the three modes of instruction, and two tests of content knowledge in Biology were used. Data were treated with multiple analyses of variance. The results indicate that entry behavior in Biology was equal for all the study groups and types of schools. The "hands-on" group, who built three-dimensional models during the learning process, scored significantly higher than the other two groups on academic achievement and on both high- and low-level cognitive questions. The study indicates the advantages students may gain from being actively engaged in the learning process through the "hands-on" mode of instruction/learning.
Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence
NASA Astrophysics Data System (ADS)
Smith, Stephen Andrew
1997-11-01
Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally, hyperviscosities are applied to simulations with a fixed exponent that must be chosen arbitrarily; expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or using a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence, where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.
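A minimal sketch of applying such a hyperviscous damping term in Fourier space follows; in the thesis the magnitude nu and exponent p are functions of the large-scale advection velocity, whereas here they are plain constants for illustration.

    import numpy as np

    def hyperviscous_decay(f_hat, kx, ky, nu, p, dt):
        """Damp 2D Fourier coefficients f_hat by exp(-nu * k^(2p) * dt)."""
        k2 = kx[:, None] ** 2 + ky[None, :] ** 2   # squared wavenumber magnitude
        return f_hat * np.exp(-nu * k2 ** p * dt)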
Dynamical behavior for the three-dimensional generalized Hasegawa-Mima equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Ruifeng; Guo Boling; Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088
2007-01-15
The long-time behavior of solutions of the three-dimensional generalized Hasegawa-Mima [Phys. Fluids 21, 87 (1978)] equations with a dissipation term is considered. The global attractor problem for the three-dimensional generalized Hasegawa-Mima equations with periodic boundary conditions is studied. Applying the method of uniform a priori estimates, the existence of a global attractor for this problem is proven, and the dimension of the global attractor is estimated.
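For orientation, the classical two-dimensional Hasegawa-Mima equation can be written, in one common normalization, as

    \frac{\partial}{\partial t}\left(\nabla^{2}\varphi - \varphi\right)
    + \left[\varphi,\,\nabla^{2}\varphi\right]
    - \kappa\,\frac{\partial\varphi}{\partial y} = 0,
    \qquad
    [a,b] = a_x b_y - a_y b_x,

where \varphi is the electrostatic potential and \kappa the density-gradient parameter; the generalized three-dimensional equations studied above additionally carry the dissipation term that makes a global attractor possible.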
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio
1988-01-01
The identification of the geometrical structure of the system boundary for a two-dimensional diffusion system is reported. The domain identification problem treated here is converted into an optimization problem based on a fit-to-data criterion and theoretical convergence results for approximate identification techniques are discussed. Results of numerical experiments to demonstrate the efficacy of the theoretical ideas are reported.
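The fit-to-data criterion is of the usual output-least-squares form, schematically

    J(q) = \sum_{i=1}^{m} \left\| u(t_i; q) - z_i \right\|^{2},

minimized over admissible boundary geometries q, where z_i are the measurements and u(t_i; q) is the diffusion model output for geometry q.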
A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions
NASA Astrophysics Data System (ADS)
Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
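The Weyl formula used for this check gives, for a two-dimensional domain of area A and perimeter L, the cumulative level density

    N(k) \approx \frac{A}{4\pi}\,k^{2} \mp \frac{L}{4\pi}\,k + \dots,

with the minus sign for Dirichlet and the plus sign for Neumann boundary conditions.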
Artificial intelligence and robotics in high throughput post-genomics.
Laghaee, Aroosha; Malcolm, Chris; Hallam, John; Ghazal, Peter
2005-09-15
The shift of post-genomics towards a systems approach has offered an ever-increasing role for artificial intelligence (AI) and robotics. Many disciplines (e.g. engineering, robotics, computer science) bear on the problem of automating the different stages involved in post-genomic research with a view to developing quality-assured, high-dimensional data. We review some of the latest contributions of AI and robotics to this end and note the limitations arising from the current independent, exploratory way in which specific solutions are presented for specific problems without regard to how they could eventually be integrated into one comprehensive, intelligent system.
Cluster ensemble based on Random Forests for genetic data.
Alhusain, Luluah; Hafez, Alaaeldin M
2017-01-01
Clustering plays a crucial role in several application domains, such as bioinformatics, where it has been used extensively to detect interesting patterns in genetic data. One application is population structure analysis, which aims to group individuals into subpopulations based on shared genetic variations, such as single nucleotide polymorphisms. Advances in DNA sequencing technology have made it possible to obtain genetic datasets of exceptional size: such data usually contain hundreds of thousands of genetic markers genotyped for thousands of individuals, making an efficient means of handling them desirable. Random Forest (RF) has emerged as an efficient algorithm capable of handling high-dimensional data. RF provides a proximity measure that can capture different levels of co-occurring relationships between variables. RF has been widely considered a supervised learning method, although it can be converted into an unsupervised one; an RF-derived proximity measure combined with a clustering technique may therefore be well suited to determining the underlying structure of unlabeled data. This paper proposes RFcluE, a cluster ensemble approach for determining the underlying structure of genetic data based on RF. The approach comprises a cluster ensemble framework that combines multiple runs of RF clustering. Experiments were conducted on a high-dimensional, real genetic dataset to evaluate the proposed approach, including an examination of the impact of parameter changes, a comparison of RFcluE's performance against other clustering methods, and an assessment of the relationship between the diversity and quality of the ensemble and its effect on RFcluE's performance. The paper demonstrates the effectiveness of the approach for population structure analysis and illustrates that combining multiple RF clusterings in an ensemble produces more robust, higher-quality results, because the ensemble is fed diverse views of the high-dimensional genetic data obtained through bagging and random subspace selection, the two key features of the RF algorithm.
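The core idea, an RF-derived proximity matrix fed to a clustering step, can be sketched in a few lines of Python with recent scikit-learn, using the common trick of training RF to separate the data from a column-permuted copy so it runs unsupervised; names and parameter values are illustrative, and RFcluE's ensemble framework (combining many such clusterings) is not shown.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.cluster import AgglomerativeClustering

    def rf_proximity_clustering(X, n_clusters=3, n_trees=500, seed=0):
        """Cluster rows of X from a Random-Forest proximity matrix."""
        rng = np.random.default_rng(seed)
        # Synthetic "contrast" data: permute each column independently.
        X_fake = np.column_stack([rng.permutation(c) for c in X.T])
        Xa = np.vstack([X, X_fake])
        ya = np.r_[np.ones(len(X)), np.zeros(len(X_fake))]
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(Xa, ya)
        leaves = rf.apply(X)   # (n_samples, n_trees) leaf indices
        # Proximity: fraction of trees in which two samples share a leaf.
        prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
        return AgglomerativeClustering(n_clusters=n_clusters, metric="precomputed",
                                       linkage="average").fit_predict(1.0 - prox)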
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1988-01-01
The initial effort concentrated on developing the quasi-analytical approach for two-dimensional transonic flow. To keep the problem computationally efficient and straightforward, only two-dimensional flow was considered, and the problem was modeled using the transonic small perturbation equation.
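In one common form, the transonic small perturbation equation referred to here is

    \left[\,1 - M_\infty^{2} - (\gamma + 1)\,M_\infty^{2}\,\frac{\phi_x}{U_\infty}\right]\phi_{xx} + \phi_{yy} = 0,

where \phi is the perturbation velocity potential; the sign change of the \phi_{xx} coefficient across the sonic line is what makes sensitivity analysis in the transonic regime delicate.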
Plane Poiseuille flow of a rarefied gas in the presence of strong gravitation.
Doi, Toshiyuki
2011-02-01
Plane Poiseuille flow of a rarefied gas, which flows horizontally in the presence of strong gravitation, is studied based on the Boltzmann equation. Applying the asymptotic analysis for a small variation in the flow direction [Y. Sone, Molecular Gas Dynamics (Birkhäuser, 2007)], the two-dimensional problem is reduced to a one-dimensional problem, as in the case of a Poiseuille flow in the absence of gravitation, and the solution is obtained in a semianalytical form. The reduced one-dimensional problem is solved numerically for a hard sphere molecular gas over a wide range of the gas-rarefaction degree and the gravitational strength. The presence of gravitation reduces the mass flow rate, and the effect of gravitation is significant for large Knudsen numbers. To verify the validity of the asymptotic solution, a two-dimensional problem of a flow through a long channel is directly solved numerically, and the validity of the asymptotic solution is confirmed. ©2011 American Physical Society
NASA Technical Reports Server (NTRS)
Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.
1982-01-01
Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.
Research and implementation of group animation based on normal cloud model
NASA Astrophysics Data System (ADS)
Li, Min; Wei, Bin; Peng, Bao
2011-12-01
Group animation is a difficult problem that has long remained unsolved in computer animation; all current methods have their limitations. This paper puts forward a method: the motion coordinates and motion speeds of a real fish school are collected as sample data, and a reverse cloud generator is designed and run to obtain the expectation, entropy, and hyper-entropy, which are the quantitative values of the qualitative concept. Using these parameters as a basis, a forward cloud generator is designed and run to produce the motion coordinates and motion speeds of a two-dimensional animated fish school. Two mental state variables of the fish school, hunger and fear, are also designed. An experiment simulates the motion of the animated fish school as affected by the internal and external causes above, and shows that group animation designed by this method is highly realistic.
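A minimal sketch of the two generators described above, using the standard first-order backward cloud estimator; variable names are illustrative.

    import numpy as np

    def reverse_cloud(samples):
        """Estimate (Ex, En, He) of a normal cloud from sample data."""
        ex = samples.mean()
        en = np.sqrt(np.pi / 2.0) * np.abs(samples - ex).mean()
        he = np.sqrt(max(samples.var() - en ** 2, 0.0))
        return ex, en, he

    def forward_cloud(ex, en, he, n, rng=None):
        """Generate n cloud drops from (Ex, En, He)."""
        rng = rng or np.random.default_rng()
        en_prime = rng.normal(en, he, size=n)      # per-drop dispersion En'
        return rng.normal(ex, np.abs(en_prime))    # drop x ~ N(Ex, En'^2)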
Using sketch-map coordinates to analyze and bias molecular dynamics simulations
Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele
2012-01-01
When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a phase space of high-dimensionality. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation. PMID:22427357
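A toy version of the metadynamics analogy in a single collective variable: Gaussians deposited at previously visited CV values accumulate into a repulsive, history-dependent bias. The sketch-map method described above replaces these point values with whole probability distributions, which this sketch does not attempt; w and sigma are illustrative.

    import numpy as np

    def bias(s, centers, w=0.1, sigma=0.2):
        """History-dependent bias at CV value s from visited centers."""
        centers = np.asarray(centers)
        return w * np.exp(-((s - centers) ** 2) / (2.0 * sigma ** 2)).sum()

    # During sampling: every deposition stride, append the current CV value to
    # `centers`; the growing bias(s, centers) is added to the potential energy.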
NASA Astrophysics Data System (ADS)
Carmack, Gay Lynn Dickinson
2000-10-01
This two-part quasi-experimental repeated measures study examined whether computer-simulated experiments (CSE) have an effect on the problem solving skills of high school biology students in a school-within-a-school magnet program. Specifically, the study identified episodes in a simulation sequence where problem solving skills improved. In the Fall academic semester, experimental group students (n = 30) were exposed to two simulations: CaseIt! and EVOLVE!. Control group students participated in an internet research project and a paper Hardy-Weinberg activity. In the Spring academic semester, experimental group students were exposed to three simulations: Genetics Construction Kit, CaseIt!, and EVOLVE!. Spring control group students participated in a Drosophila lab, an internet research project, and Advanced Placement lab 8. Results indicate that the Fall and Spring experimental groups experienced significant gains in scientific problem solving after the second simulation in the sequence. These gains were independent of the simulation sequence and of the amount of time spent on the simulations, and they were significantly greater than control group scores in the Fall. The Spring control group significantly outscored all other study groups on both pretest measures; even so, the Spring experimental group's problem solving performance caught up to the Spring control group's after the third simulation. There were no significant differences between control and experimental groups on content achievement. Results indicate that CSE is as effective as traditional laboratories in promoting scientific problem solving and is a useful tool for improving students' scientific problem solving skills. Moreover, retention of problem solving skills is enhanced by utilizing more than one simulation.
NASA Astrophysics Data System (ADS)
Korolev, A. M.; Shulga, V. M.; Turutanov, O. G.; Shnyrkov, V. I.
2016-07-01
A technically simple and physically clear method is suggested for directly measuring the brightness temperature of the two-dimensional electron gas (2DEG) in the channel of a high electron mobility transistor (HEMT). The use of the method is demonstrated with a pseudomorphic HEMT as the specimen. The optimal HEMT dc regime, from the point of view of the "back action" problem, was found to lie in the unsaturated region of the static characteristics, possibly corresponding to the ballistic electron transport mode. The proposed method is believed to be a convenient tool for exploring ballistic transport, electron diffusion, 2DEG properties, and other electrophysical processes in heterostructures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K.; Petersson, N. A.; Rodgers, A.
Acoustic waveform modeling is a computationally intensive task, and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with sixth-order accurate finite difference stencils away from the boundary and a third-order summation-by-parts (SBP) closure near the boundary. A non-planar topographic boundary is resolved by formulating the governing equations in curvilinear coordinates following the interface. We verify the implementation of the algorithm with numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.
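For reference, the standard sixth-order central first-derivative stencil used at interior points can be written as below; the SBP boundary closure is more involved and is omitted here.

    import numpy as np

    def ddx6(f, h):
        """Sixth-order central first derivative at interior points i = 3..n-4."""
        d = np.zeros_like(f)
        d[3:-3] = (-f[:-6] + 9 * f[1:-5] - 45 * f[2:-4]
                   + 45 * f[4:-2] - 9 * f[5:-1] + f[6:]) / (60.0 * h)
        return d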
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high-order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity, and we constructed such filters for practical calculations.
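Filters of this general family are often realized as an exponential cutoff on the spectral coefficients; the symmetric version is sketched below, while a one-sided filter of the kind constructed in [2] would localize the recovery to one side of the discontinuity.

    import numpy as np

    def exp_filter(a_hat, alpha=36.0, p=8):
        """Exponential spectral filter sigma(eta) = exp(-alpha * eta^(2p)),
        applied to coefficients a_hat ordered by increasing mode number."""
        eta = np.arange(len(a_hat)) / (len(a_hat) - 1)
        return a_hat * np.exp(-alpha * eta ** (2 * p))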
Homogenization of Winkler-Steklov spectral conditions in three-dimensional linear elasticity
NASA Astrophysics Data System (ADS)
Gómez, D.; Nazarov, S. A.; Pérez, M. E.
2018-04-01
We consider a homogenization Winkler-Steklov spectral problem that consists of the elasticity equations for a three-dimensional homogeneous anisotropic elastic body which has a plane part of the surface subject to alternating boundary conditions on small regions periodically placed along the plane. These conditions are of the Dirichlet type and of the Winkler-Steklov type, the latter containing the spectral parameter. The rest of the boundary of the body is fixed, and the period and size of the regions, where the spectral parameter arises, are of order $\varepsilon$. For fixed $\varepsilon$, the problem has a discrete spectrum, and we address the asymptotic behavior of the eigenvalues $\{\beta_k^\varepsilon\}_{k=1}^{\infty}$ as $\varepsilon \to 0$. We show that $\beta_k^\varepsilon = O(\varepsilon^{-1})$ for each fixed $k$, and we observe a common limit point for all the rescaled eigenvalues $\varepsilon\beta_k^\varepsilon$, while we make it evident that, although the periodicity of the structure only affects the boundary conditions, a band-gap structure of the spectrum is inherited asymptotically. Also, we provide the asymptotic behavior for certain "groups" of eigenmodes.
Teng, Dongdong; Xiong, Yi; Liu, Lilin; Wang, Biao
2015-03-09
Existing multiview three-dimensional (3D) display technologies encounter a discontinuous motion parallax problem, due to the limited number of stereo-images presented to the corresponding sub-viewing zones (SVZs). This paper proposes a novel multiview 3D display system that obtains continuous motion parallax by using a group of planar-aligned OLED microdisplays. By blocking some light rays with baffles inserted between adjacent OLED microdisplays, a transitional stereo-image, assembled from two spatially complementary segments of adjacent stereo-images, is presented to a complementary fusing zone (CFZ) located between two adjacent SVZs. For a moving observation point, the spatial ratio of the two complementary segments evolves gradually, resulting in continuously changing transitional stereo-images and thus overcoming the problem of discontinuous motion parallax. The proposed display system employs a projection-type architecture, retaining full display resolution while keeping a thin optical structure, and thus offers great potential for portable or mobile 3D display applications. Experimentally, a prototype display system with 9 OLED microdisplays is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
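The polytropic EOS in question is

    p = (\gamma - 1)\,\rho e, \qquad c^{2} = \frac{\gamma p}{\rho};

the abstract does not state the selected specific heat ratio, but the classic choice with the stated property is \gamma = 3, for which each one-dimensional characteristic dx/dt = u \pm c carries the constant value u \pm c and is therefore a straight line.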
NASA Astrophysics Data System (ADS)
Matsevityi, Yu. M.; Alekhina, S. V.; Borukhov, V. T.; Zayats, G. M.; Kostikov, A. O.
2017-11-01
The problem of identifying the time-dependent thermal conductivity coefficient in the initial-boundary-value problem for the quasi-stationary two-dimensional heat conduction equation in a bounded cylinder is considered. It is assumed that the temperature field in the cylinder is independent of the angular coordinate. To solve the given problem, which belongs to the class of inverse problems, a mathematical approach based on the method of conjugate gradients in functional form is developed.
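The conjugate gradient recurrence at the core of such an approach is the standard one for a symmetric positive definite system; in the functional (inverse-problem) setting, the matrix-vector products are replaced by forward and adjoint PDE solves, which the sketch below does not include.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x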
High-Order Methods for Computational Physics
1999-03-01
When the computation runs in parallel, we use the concept of a voxel database (VDB) of geometric positions in the mesh. Connectivity and communications are established by building a VDB of positions; a VDB maps each position to a processor. Studies such as the highly accurate stability computations considered here help expand the database for this benchmark problem. The two-dimensional linear ...
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool for analysing physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of plain Monte Carlo. Importance sampling has drawbacks when the problem dimensionality is high or when the optimal importance sampling density is difficult to obtain. In this…
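A minimal importance-sampling example for a rare tail probability, the setting whose weaknesses motivate importance splitting: to estimate P(X > t) for standard normal X, sample from a proposal shifted to the threshold and reweight by the likelihood ratio. All names are illustrative.

    import numpy as np
    from scipy.stats import norm

    def tail_prob_is(t=5.0, n=100_000, rng=None):
        """Estimate P(X > t), X ~ N(0,1), by sampling from N(t,1)."""
        rng = rng or np.random.default_rng()
        y = rng.normal(t, 1.0, size=n)           # proposal centered at the threshold
        w = norm.pdf(y) / norm.pdf(y, loc=t)     # likelihood ratio p(y)/q(y)
        return np.mean((y > t) * w)

    # For t = 5 the true value is about 2.9e-7; plain Monte Carlo with 1e5
    # samples would almost surely return zero.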