Sample records for finite set statistics

  1. Skewness and kurtosis analysis for non-Gaussian distributions

    NASA Astrophysics Data System (ADS)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2018-06-01

    In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis both in its standard definition and in the form that arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however, it fails for sufficiently large data sets if the fourth moment of the distribution is finite. For distributions with infinite fourth moment, kurtosis is not defined in the limit as the size of the data set tends to infinity. For distributions with finite fourth moment, the size N of the data set at which the standard kurtosis saturates to a fixed value depends on how far the original distribution deviates from the Gaussian. Consequently, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for distributions with finite fourth moment. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets unless the appropriate q value, the one that truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine this q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
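    The saturation effect described in this abstract can be sketched numerically. The estimator below is the standard moment-based excess kurtosis; the Gaussian samples and the sample sizes are illustrative choices, not taken from the paper:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis m4 / m2^2 - 3 (zero for a Gaussian)."""
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / m2 ** 2 - 3.0

random.seed(1)
for n in (100, 100_000):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    # Small n: noticeable scatter around 0; large n: close to 0.
    print(n, round(excess_kurtosis(sample), 3))
```

    For a heavy-tailed distribution with finite fourth moment, the same estimator needs a much larger N before it settles, which is the saturation behavior the abstract refers to.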

  2. Specialized Finite Set Statistics (FISST)-Based Estimation Methods to Enhance Space Situational Awareness in Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO)

    DTIC Science & Technology

    2016-08-17

    Air Force Research Laboratory, Space Vehicles Directorate (AFRL/RVSV), 3550 Aberdeen Ave SE, Kirtland AFB, NM 87117-5776. Report number AFRL-RV-PS-TR-2016-0114.

  3. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    NASA Astrophysics Data System (ADS)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse-field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid, and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double-glazing unit. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and its computational efficiency relative to the conventional hybrid finite element-statistical energy analysis method.

  4. New Developments in the Embedded Statistical Coupling Method: Atomistic/Continuum Crack Propagation

    NASA Technical Reports Server (NTRS)

    Saether, E.; Yamakov, V.; Glaessgen, E.

    2008-01-01

    A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain has been enhanced. The concurrent MD-FEM coupling methodology uses statistical averaging of the deformation of the atomistic MD domain to provide interface displacement boundary conditions to the surrounding continuum FEM region, which, in turn, generates interface reaction forces that are applied as piecewise-constant traction boundary conditions to the MD domain. The enhancement is based on the addition of molecular dynamics-based cohesive zone model (CZM) elements near the MD-FEM interface. The CZM elements are a continuum interpretation of the traction-displacement relationships taken from MD simulations using Cohesive Zone Volume Elements (CZVE). The addition of CZM elements to the concurrent MD-FEM analysis provides a consistent set of atomistically-based cohesive properties within the finite element region near the growing crack. Another set of CZVEs is then used to extract revised CZM relationships from the enhanced embedded statistical coupling method (ESCM) simulation of an edge crack under uniaxial loading.

  5. On the inequivalence of the CH and CHSH inequalities due to finite statistics

    NASA Astrophysics Data System (ADS)

    Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.

    2017-06-01

    Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated from a finite number of samples, so the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain this phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabelings of parties, measurement settings, and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling, or having uniform marginals.

  6. A comparison of error bounds for a nonlinear tracking system with detection probability Pd < 1.

    PubMed

    Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    2012-12-14

    Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is deduced within the framework of finite set statistics. The others, the information reduction factor (IRF) posterior Cramer-Rao lower bound (PCRLB) and the enumeration method (ENUM) PCRLB, are introduced within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, and tighter than the IRF PCRLB, when the target exists from the beginning to the end. When the disappearance of existing targets and the appearance of new targets are considered, the RFS bound becomes tighter than both the IRF PCRLB and the ENUM PCRLB over time, because it incorporates the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds.
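    The ordering of the ENUM and IRF bounds can be illustrated with a scalar toy model: averaging CRLBs over detection sequences (ENUM-style) versus inverting the detection-probability-scaled information (IRF-style). All numbers below are invented; by Jensen's inequality the sequence-averaged bound is never smaller:

```python
from itertools import product

# Hypothetical scalar setup: prior information J0, per-detection
# measurement information Jz, detection probability pd, two scans.
J0, Jz, pd = 1.0, 4.0, 0.5

# ENUM-style bound: average the CRLB over every detection sequence.
enum_bound = sum(
    (pd if d1 else 1 - pd) * (pd if d2 else 1 - pd) / (J0 + (d1 + d2) * Jz)
    for d1, d2 in product((0, 1), repeat=2)
)

# IRF-style bound: scale the information by pd first, then invert.
irf_bound = 1.0 / (J0 + 2 * pd * Jz)

# Jensen's inequality: E[1/J] >= 1/E[J], so the ENUM bound is tighter.
print(enum_bound, irf_bound, enum_bound >= irf_bound)
```

    This is only the Jensen-inequality intuition behind the comparison, not the paper's full nonlinear recursive derivation.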

  7. A Comparison of Error Bounds for a Nonlinear Tracking System with Detection Probability Pd < 1

    PubMed Central

    Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    2012-01-01

    Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is deduced within the framework of finite set statistics. The others, the information reduction factor (IRF) posterior Cramer-Rao lower bound (PCRLB) and the enumeration method (ENUM) PCRLB, are introduced within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, and tighter than the IRF PCRLB, when the target exists from the beginning to the end. When the disappearance of existing targets and the appearance of new targets are considered, the RFS bound becomes tighter than both the IRF PCRLB and the ENUM PCRLB over time, because it incorporates the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds. PMID:23242274

  8. Development of a statistical model for cervical cancer cell death with irreversible electroporation in vitro.

    PubMed

    Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing

    2018-01-01

    The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature, using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and the pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used to build the statistical relationship from the cell viability data obtained in the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric field threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in describing the ablation zone and its applicability across different pulse-setting parameters. Statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built, and the values of the curve-fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in ablation zone between the two models was also illustrated. This study concluded that: (1) the proposed statistical model accurately describes the ablation zone of IRE with cervical cancer cells and is more accurate than the electric field threshold model; (2) it can estimate the value of the electric field threshold for computer simulation of IRE in the treatment of cervical cancer; and (3) it can express the change in ablation zone with changes in the pulse-setting parameters.
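    A minimal sketch of the Peleg-Fermi survival curve commonly used for IRE cell-death modeling, with the critical field and kinetic constant decaying exponentially in the number of pulses. All parameter values here are invented for illustration, not the fitted values from this study:

```python
import math

def peleg_fermi_survival(E, n, Ec0=1000.0, k1=0.03, A0=200.0, k2=0.02):
    """Peleg-Fermi cell survival vs. field strength E (V/cm) and pulse
    count n. The critical field Ec and kinetic constant A decay with the
    number of pulses; all parameter values are illustrative, not fitted."""
    Ec = Ec0 * math.exp(-k1 * n)
    A = A0 * math.exp(-k2 * n)
    return 1.0 / (1.0 + math.exp((E - Ec) / A))

# Survival drops both with field strength and with pulse count.
print(peleg_fermi_survival(500, 10), peleg_fermi_survival(1500, 90))
```

    Fitting k1, k2, Ec0, and A0 to viability data is what turns this functional form into a statistical model of the kind the abstract describes.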

  9. Development and Validation of a Statistical Shape Modeling-Based Finite Element Model of the Cervical Spine Under Low-Level Multiple Direction Loading Conditions

    PubMed Central

    Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.

    2014-01-01

    Cervical spine injuries are a significant concern in trauma. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data under low-level loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry; soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading in multiple directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, together with the ability to statistically validate overall model performance, provide an important step toward describing differences in vertebral morphology, spinal curvature, and material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will help in understanding and mitigating the risk of injury in the spine and other musculoskeletal structures. PMID:25506051

  10. A probabilistic analysis of electrical equipment vulnerability to carbon fibers

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1980-01-01

    The statistical problems of airborne carbon fibers falling onto electrical circuits were idealized and analyzed. The probability of contact between randomly oriented finite-length fibers and sets of parallel conductors with various spacings and lengths was developed theoretically. The probability of multiple fibers joining to bridge a single gap between conductors, or of forming continuous networks, is included. From these theoretical considerations, practical statistical analyses to assess the likelihood of electrical malfunctions were produced. The statistics obtained were confirmed by comparison with the results of controlled experiments.
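    The single-fiber bridging probability can be estimated with a small Monte Carlo sketch of an idealized 2-D geometry: a fiber of given length with a uniformly random center height and orientation between two parallel conductors. The geometry and numbers are illustrative assumptions, not the report's exact model:

```python
import math
import random

def bridge_probability(fiber_len, gap, trials=100_000, rng=random.Random(7)):
    """Monte Carlo estimate of the chance that a fiber with a uniformly
    random centre height and orientation spans the gap between two
    parallel conductors (idealised 2-D geometry)."""
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0.0, gap)          # centre height above lower conductor
        theta = rng.uniform(0.0, math.pi)  # fiber orientation
        half = 0.5 * fiber_len * abs(math.sin(theta))
        if y - half <= 0.0 and y + half >= gap:
            hits += 1
    return hits / trials

# Longer fibers bridge a fixed gap more often.
p_short = bridge_probability(1.5, 1.0)
p_long = bridge_probability(3.0, 1.0)
print(p_short, p_long)
```

    For this geometry the bridging probability can also be written in closed form by integrating over the orientation angle, which is the kind of theoretical development the abstract describes.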

  11. Space Object Detection and Tracking Within a Finite Set Statistics Framework

    DTIC Science & Technology

    2017-04-13

    Excerpted references: Software for source extraction, Astronomy and Astrophysics Supplement Series, 117(2):393–404, 1996; William M. Bolstad, Introduction to Bayesian...; Urban, T. Corbin, G. Wycoff, Ulrich Bastian, Peter Schwekendiek, and A. Wicenec, The Tycho-2 catalogue of the 2.5 million brightest stars, Astronomy and...

  12. On the Efficient Allocation of Resources for Hypothesis Evaluation in Machine Learning: A Statistical Approach

    NASA Technical Reports Server (NTRS)

    Chien, S.; Gratch, J.; Burl, M.

    1994-01-01

    In this report we consider a decision-making problem: selecting a strategy from a set of alternatives on the basis of incomplete information (e.g., a finite number of observations); the system can, however, gather additional information at some cost.
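    One standard way to trade observation cost against decision confidence is a Hoeffding-race style elimination scheme. The sketch below is a generic illustration of that idea, not the report's specific algorithm; the candidate strategies and their utility distributions are invented:

```python
import math
import random

def race(candidates, delta=0.05, max_obs=5000, rng=random.Random(3)):
    """Hoeffding-race style selection: sample all surviving strategies,
    dropping any whose upper confidence bound falls below the best lower
    bound. `candidates` maps a name to a sampler returning utilities in
    [0, 1]. A toy stand-in for statistical resource allocation."""
    stats = {name: [0.0, 0] for name in candidates}   # [sum, count]
    alive = set(candidates)
    spent = 0
    while len(alive) > 1 and spent < max_obs:
        for name in list(alive):
            s, n = stats[name]
            stats[name] = [s + candidates[name](rng), n + 1]
            spent += 1                                # one observation paid for
        bounds = {}
        for name in alive:
            s, n = stats[name]
            eps = math.sqrt(math.log(2 / delta) / (2 * n))
            bounds[name] = (s / n - eps, s / n + eps)
        best_lower = max(lo for lo, _ in bounds.values())
        alive = {name for name in alive if bounds[name][1] >= best_lower}
    return max(alive, key=lambda name: stats[name][0] / stats[name][1]), spent

winner, spent = race({
    "a": lambda rng: rng.random() * 0.5,        # mean utility 0.25
    "b": lambda rng: 0.5 + rng.random() * 0.5,  # mean utility 0.75
})
print(winner, spent)
```

    The stopping rule spends observations only until the confidence intervals separate, which is the cost-versus-information trade-off the abstract raises.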

  13. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
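    The equal-weights special case of such a marginalization can be sketched directly: integrating the unknown Poisson mean over a flat-prior Gamma posterior built from the Monte Carlo count gives a negative-binomial-type predictive distribution that is wider than the plug-in Poisson (the general weighted case is where the Lauricella-function machinery of the abstract comes in). The MC count and weight below are illustrative:

```python
import math

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

def mc_marginal_pmf(k, k_mc, w):
    """Predictive count pmf after marginalising the unknown Poisson mean
    over a flat-prior Gamma posterior from k_mc equal-weight MC events of
    weight w. This is the negative-binomial equal-weights special case."""
    return (math.comb(k + k_mc, k)
            * (w / (1 + w)) ** k
            * (1 / (1 + w)) ** (k_mc + 1))

k_mc, w = 10, 0.5                      # illustrative MC sample and weight
lam_hat = k_mc * w                     # naive plug-in Poisson mean estimate
ks = range(60)
var_pois = sum(poisson_pmf(k, lam_hat) * (k - lam_hat) ** 2 for k in ks)
mean_mc = sum(mc_marginal_pmf(k, k_mc, w) * k for k in ks)
var_mc = sum(mc_marginal_pmf(k, k_mc, w) * (k - mean_mc) ** 2 for k in ks)
print(var_pois, var_mc)  # the marginalised pmf is wider than the Poisson
```

    The extra width is exactly the finite-MC-statistics uncertainty that the plug-in Poisson likelihood ignores.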

  14. Application of closed-form solutions to a mesh point field in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1985-01-01

    A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared to other techniques. The new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and closed-form solutions are obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, together with the method by which Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities are treated. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and these results show that bandgap narrowing is more pronounced, and temperature-dependent, in contrast to the results based on Boltzmann statistics.

  15. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    PubMed Central

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals' method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful for distinguishing and characterising biomechanical differences related to diet/ecomorphology. Methods The intervals' method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as the percentage of the area of the mandible occupied by those stress values. Afterwards, these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals' method is a powerful tool to characterize biomechanical performance and how it relates to different diets, allowing us to discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology that is not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
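    The core bookkeeping of the intervals' method (turning per-element stress values into percentage-of-area variables) can be sketched in a few lines; the toy mesh and break points below are invented:

```python
def interval_variables(stress, area, breaks):
    """Convert per-element stress values and element areas into the
    'intervals' feature vector: percentage of total area whose stress
    falls in each interval defined by consecutive break points."""
    total = sum(area)
    pct = [0.0] * (len(breaks) - 1)
    for s, a in zip(stress, area):
        for i in range(len(breaks) - 1):
            if breaks[i] <= s < breaks[i + 1]:
                pct[i] += 100.0 * a / total
                break
    return pct

# Toy mesh: four elements with equal areas, stresses in arbitrary units.
features = interval_variables([0.5, 1.2, 1.4, 3.0], [1, 1, 1, 1],
                              breaks=[0, 1, 2, 4])
print(features)  # [25.0, 50.0, 25.0]
```

    One such feature vector per specimen can then be fed to an ordinary multivariate analysis (e.g., PCA or discriminant analysis), which is the comparative framework the abstract describes.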

  16. A simple branching model that reproduces language family and language population distributions

    NASA Astrophysics Data System (ADS)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data on real languages. The model is based on the recent finding that language changes occur independently of population size. We find agreement with the data by additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, in the spirit of the linguistics tradition since Swadesh.

  17. Features of statistical dynamics in a finite system

    NASA Astrophysics Data System (ADS)

    Yan, Shiwei; Sakata, Fumihiko; Zhuo, Yizhong

    2002-03-01

    We study features of statistical dynamics in a finite Hamiltonian system composed of a relevant one-degree-of-freedom system coupled to an irrelevant multidegree-of-freedom system through a weak interaction. Special attention is paid to how the statistical dynamics changes depending on the number of degrees of freedom in the irrelevant system. It is found that the macrolevel statistical aspects are strongly related to the appearance of microlevel chaotic motion, and that dissipation of the relevant motion is realized by passing through three distinct stages: dephasing, statistical relaxation, and equilibrium regimes. It is clarified that the dynamical description and the conventional transport approach provide almost the same macrolevel and microlevel mechanisms only for a system with a very large number of irrelevant degrees of freedom. It is also shown that statistical relaxation in the finite system is an anomalous diffusion and that the fluctuation effects have a finite correlation time.

  18. Features of statistical dynamics in a finite system.

    PubMed

    Yan, Shiwei; Sakata, Fumihiko; Zhuo, Yizhong

    2002-03-01

    We study features of statistical dynamics in a finite Hamiltonian system composed of a relevant one-degree-of-freedom system coupled to an irrelevant multidegree-of-freedom system through a weak interaction. Special attention is paid to how the statistical dynamics changes depending on the number of degrees of freedom in the irrelevant system. It is found that the macrolevel statistical aspects are strongly related to the appearance of microlevel chaotic motion, and that dissipation of the relevant motion is realized by passing through three distinct stages: dephasing, statistical relaxation, and equilibrium regimes. It is clarified that the dynamical description and the conventional transport approach provide almost the same macrolevel and microlevel mechanisms only for a system with a very large number of irrelevant degrees of freedom. It is also shown that statistical relaxation in the finite system is an anomalous diffusion and that the fluctuation effects have a finite correlation time.

  19. Computer Administering of the Psychological Investigations: Set-Relational Representation

    NASA Astrophysics Data System (ADS)

    Yordzhev, Krasimir

    Computer administering of a psychological investigation is the computer representation of the entire procedure of psychological assessment: test construction, test implementation, results evaluation, storage and maintenance of the developed database, and its statistical processing, analysis, and interpretation. A mathematical description of psychological assessment with the aid of personality tests is discussed in this article, using set theory and relational algebra. A relational model of the data needed to design a computer system for automating certain psychological assessments is given, and the finite sets and relations on them that are necessary for creating a personality psychological test are described. The described model could be used to develop software for computer administering of any psychological test, with full automation of the whole process: test construction, test implementation, result evaluation, storage of the developed database, statistical processing, analysis, and interpretation. A software project for computer administering of personality psychological tests is suggested.
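    A minimal sketch of the set-relational idea: finite sets of questions and answers, with a subject's responses and the scoring rule both expressed as relations on their Cartesian product. All names and the scoring rule are invented for illustration:

```python
# Finite sets: questions Q and admissible answers A.
Q = {"q1", "q2", "q3"}
A = {"yes", "no"}

# Scoring relation: a map from Q x A to points (invented values).
score = {("q1", "yes"): 1, ("q1", "no"): 0,
         ("q2", "yes"): 0, ("q2", "no"): 1,
         ("q3", "yes"): 1, ("q3", "no"): 0}

# A subject's responses: a relation, i.e. a subset of Q x A.
responses = {("q1", "yes"), ("q2", "no"), ("q3", "no")}

# Relational-algebra style checks and evaluation.
assert all(q in Q and a in A for q, a in responses)
total = sum(score[pair] for pair in responses)
print(total)  # 2
```

    Real tests would add further relations (scales, norms, subject records), but the pattern of "finite sets plus relations on them" stays the same.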

  20. An Integrated approach to the Space Situational Awareness Problem

    DTIC Science & Technology

    2016-12-15

    ...data coming from the sensors. We developed particle-based Gaussian Mixture Filters that are immune to the “curse of dimensionality”/“particle depletion” problem inherent in particle filtering. This method maps the data assimilation/filtering problem into an unsupervised learning problem. Subject terms: Gaussian Mixture Filters; particle depletion; Finite Set Statistics.

  21. A Mechanical Power Flow Capability for the Finite Element Code NASTRAN

    DTIC Science & Technology

    1989-07-01

    Excerpts: experimental methods, statistical energy analysis, the finite element method, and a finite element analogy using heat conduction equations. The weights and inertias of the transducers attached to an experimental structure may produce accuracy problems. Statistical energy analysis (SEA) is a... Excerpted references: Lyon, R.L., Statistical Energy Analysis of Dynamical Systems, The M.I.T. Press (1975); Mickol, J.D., and R.J. Bernhard, "An...

  22. Higher-order nonclassicalities of finite dimensional coherent states: A comparative study

    NASA Astrophysics Data System (ADS)

    Alam, Nasir; Verma, Amit; Pathak, Anirban

    2018-07-01

    Conventional coherent states (CSs) are defined in various ways: for example, as an infinite Poissonian expansion in Fock states, as the displaced vacuum state, or as an eigenket of the annihilation operator. In an infinite dimensional Hilbert space, these definitions are equivalent, but for finite dimensional systems they are not. In this work, we present a comparative description of the lower- and higher-order nonclassical properties of finite dimensional CSs, also referred to as qudit CSs (QCSs). For the comparison, the nonclassical properties of two types of QCSs are used: (i) the nonlinear QCS produced by applying a truncated displacement operator to the vacuum, and (ii) the linear QCS produced by truncating the Poissonian Fock-state expansion of the CS at the (d - 1)-photon Fock state. The comparison is performed using a set of nonclassicality witnesses (e.g., higher-order antibunching, higher-order sub-Poissonian statistics, higher-order squeezing, the Agarwal-Tara parameter, Klyshko's criterion) and a set of quantitative measures of nonclassicality (e.g., negativity potential, concurrence potential, and anticlassicality). The higher-order nonclassicality witnesses are found to reveal the existence of higher-order nonclassical properties of QCSs for the first time.
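    For the linear QCS, the photon-number distribution is just a renormalized truncated Poissonian, so a lower-order witness such as the Mandel Q parameter is easy to sketch (Mandel Q is a stand-in here; the paper's higher-order witnesses follow the same moment bookkeeping). The α and d values are illustrative:

```python
import math

def linear_qcs_probs(alpha, d):
    """Photon-number distribution of the 'linear' qudit coherent state:
    the Poissonian expansion truncated at the (d-1)-photon Fock state
    and renormalised."""
    w = [abs(alpha) ** (2 * n) / math.factorial(n) for n in range(d)]
    norm = sum(w)
    return [x / norm for x in w]

def mandel_q(p):
    """Mandel Q parameter; Q < 0 signals sub-Poissonian statistics."""
    mean = sum(n * pn for n, pn in enumerate(p))
    var = sum((n - mean) ** 2 * pn for n, pn in enumerate(p))
    return (var - mean) / mean

p = linear_qcs_probs(alpha=1.0, d=2)   # qubit coherent state
print(mandel_q(p))  # -0.5: truncation makes the state sub-Poissonian
```

    As d grows with α fixed, the truncation becomes irrelevant and Q returns to the Poissonian value 0, recovering the conventional CS limit.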

  23. A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; MacMillan, R.

    2014-09-01

    Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification, and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks, such as searching for new targets and classifying and tracking known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples; however, implementation in a large-scale system such as the Space Situational Awareness problem requires substantial computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections are modeled; target birth and decay are not modeled in the present paper. We demonstrate the success of the method for as many targets as possible in a desktop parallel environment. Performance measures include the number of targets in the simulation, the certainty of detected target tracks, and computational time as a function of clutter returns and number of targets, among other factors.
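    A flavor of the FISST machinery in its simplest special case: the Bernoulli (single-target) existence-probability update after a missed detection, assuming no clutter. This is a textbook special case for illustration, not the parallel simulator described in the record:

```python
def bernoulli_missed_detection_update(r, p_d):
    """FISST single-target (Bernoulli) existence update for the special
    case of an empty measurement set with no clutter: a miss lowers, but
    does not zero, the probability that the target exists."""
    return r * (1 - p_d) / (1 - r * p_d)

r = 0.9
for scan in range(3):          # three consecutive missed detections
    r = bernoulli_missed_detection_update(r, p_d=0.8)
    print(scan, round(r, 3))
```

    Three misses with a detection probability of 0.8 drive the existence probability from 0.9 down to about 0.07, showing how FISST folds misdetections into the target-existence estimate rather than treating them as hard evidence of absence.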

  24. Complexity transitions in global algorithms for sparse linear systems over finite fields

    NASA Astrophysics Data System (ADS)

    Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.

    2002-09-01

    We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations in a finite Galois field modulo q. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to the changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in memory and CPU requirements necessary for the algorithms. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
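    Gaussian elimination over GF(q), the global algorithm whose complexity transitions are studied here, can be sketched compactly; the small system below is an illustrative example, not one of the paper's random instances:

```python
def solve_mod_q(A, b, q):
    """Gaussian elimination over the finite field GF(q), q prime.
    Returns one solution x of A x = b (mod q), or None if inconsistent."""
    n, m = len(A), len(A[0])
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    pivots, row = [], 0
    for col in range(m):
        piv = next((r for r in range(row, n) if M[r][col] % q), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], -1, q)                # field inverse
        M[row] = [v * inv % q for v in M[row]]
        for r in range(n):
            if r != row and M[r][col] % q:
                f = M[r][col]
                M[r] = [(v - f * w) % q for v, w in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    if any(all(v == 0 for v in M[r][:m]) and M[r][m] for r in range(row, n)):
        return None                                  # inconsistent system
    x = [0] * m                                      # free variables set to 0
    for r, col in enumerate(pivots):
        x[col] = M[r][m]
    return x

# 2x + y = 1 and x + 4y = 3 over GF(5)
x = solve_mod_q([[2, 1], [1, 4]], [1, 3], q=5)
print(x)  # [3, 0]
```

    The dense fill-in this elimination produces on sparse random systems is exactly the memory/CPU blow-up the abstract connects to the phase transitions.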

  25. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  26. Rank-based permutation approaches for non-parametric factorial designs.

    PubMed

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

    Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.
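    The permutation principle for a rank statistic can be sketched directly: compute a Kruskal-Wallis-type between-group rank statistic and reference it to its permutation distribution rather than an asymptotic approximation. The two-sample data below are simulated for illustration (continuous data assumed, so ties are ignored):

```python
import random

def rank_stat(groups):
    """Kruskal-Wallis-type statistic: size-weighted between-group spread
    of mean ranks (continuous data assumed, so no ties)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    grand = (len(pooled) + 1) / 2
    return sum(len(g) * (sum(rank[v] for v in g) / len(g) - grand) ** 2
               for g in groups)

def permutation_pvalue(groups, n_perm=2000, rng=random.Random(11)):
    """Reference the observed statistic to its permutation distribution
    instead of the asymptotic chi-square approximation."""
    observed = rank_stat(groups)
    pooled = [v for g in groups for v in g]
    sizes = [len(g) for g in groups]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        it = iter(pooled)
        perm = [[next(it) for _ in range(s)] for s in sizes]
        hits += rank_stat(perm) >= observed
    return (hits + 1) / (n_perm + 1)

rng0 = random.Random(5)
g1 = [rng0.gauss(0.0, 1.0) for _ in range(12)]
g2 = [rng0.gauss(2.0, 1.0) for _ in range(12)]
pval = permutation_pvalue([g1, g2])
print(pval)  # small p-value: the groups clearly differ
```

    Because permutations preserve exchangeability under the null, this p-value is finitely exact for exchangeable data, which is the property the abstract's proposed generalization retains.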

  7. MULTISCALE ADAPTIVE SMOOTHING MODELS FOR THE HEMODYNAMIC RESPONSE FUNCTION IN FMRI*

    PubMed Central

    Wang, Jiaping; Zhu, Hongtu; Fan, Jianqing; Giovanello, Kelly; Lin, Weili

    2012-01-01

    In event-related functional magnetic resonance imaging (fMRI) data analysis, there is extensive interest in accurately and robustly estimating the hemodynamic response function (HRF) and its associated statistics (e.g., the magnitude and duration of the activation). Most methods to date have been developed in the time domain and utilize almost exclusively the temporal information of fMRI data, without accounting for the spatial information. The aim of this paper is to develop a multiscale adaptive smoothing model (MASM) in the frequency domain that integrates spatial and temporal information to adaptively and accurately estimate the HRFs pertaining to each stimulus sequence across all voxels in a three-dimensional (3D) volume. We use two sets of simulation studies and a real data set to examine the finite-sample performance of MASM in estimating HRFs. Our real and simulated data analyses confirm that MASM outperforms several other state-of-the-art methods, such as the smooth finite impulse response (sFIR) model. PMID:24533041

  8. A Random Finite Set Approach to Space Junk Tracking and Identification

    DTIC Science & Technology

    2014-09-03

    Final report, covering 31 Jan 2013 to 29 Apr 2014. Title: A Random Finite Set Approach to Space Junk Tracking and Identification. Contract number FA2386-13-… Authors: Ba-Ngu Vo and Ba-Tuong Vo (Department of …)

  9. Statistical Energy Analysis (SEA) and Energy Finite Element Analysis (EFEA) Predictions for a Floor-Equipped Composite Cylinder

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.

    2011-01-01

    Comet Enflow is a commercially available, high-frequency vibroacoustic analysis software package based on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). EFEA was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) predictions and with experimental results. The SEA predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.

  10. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
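
The SPM idea of testing a whole 1D field while controlling family-wise error can be sketched with a pointwise t statistic and a permutation distribution of the field maximum. This is a generic nonparametric illustration, not SPM1D's random-field-theory API.

```python
import numpy as np

def pointwise_t(a, b):
    """Two-sample t statistic at every node of registered 1-D curves.
    a, b: arrays of shape (n_subjects, n_nodes)."""
    na, nb = len(a), len(b)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    sp = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (a.mean(axis=0) - b.mean(axis=0)) / (sp * np.sqrt(1.0 / na + 1.0 / nb))

def max_t_threshold(a, b, alpha=0.05, n_perm=200, seed=0):
    """Family-wise error threshold from the permutation distribution
    of the maximum |t| over the whole field."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([a, b])
    maxima = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        t = pointwise_t(pooled[idx[:len(a)]], pooled[idx[len(a):]])
        maxima.append(np.abs(t).max())
    return np.quantile(maxima, 1.0 - alpha)

# Synthetic registered curves: group difference localized to nodes 40-59
rng = np.random.default_rng(1)
a = rng.standard_normal((12, 101))
b = rng.standard_normal((12, 101))
b[:, 40:60] += 4.0
t_field = pointwise_t(a, b)
threshold = max_t_threshold(a, b)
```

Nodes where |t_field| exceeds the threshold are reported as significant with family-wise error control over the whole curve.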

  11. A comment on measuring the Hurst exponent of financial time series

    NASA Astrophysics Data System (ADS)

    Couillard, Michel; Davison, Matt

    2005-03-01

    A fundamental hypothesis of quantitative finance is that stock price variations are independent and can be modeled using Brownian motion. In recent years, it was proposed to use rescaled range analysis and its characteristic value, the Hurst exponent, to test for independence in financial time series. Theoretically, independent time series should be characterized by a Hurst exponent of 1/2. However, finite Brownian motion data sets will always give a value of the Hurst exponent larger than 1/2 and without an appropriate statistical test such a value can mistakenly be interpreted as evidence of long term memory. We obtain a more precise statistical significance test for the Hurst exponent and apply it to real financial data sets. Our empirical analysis shows no long-term memory in some financial returns, suggesting that Brownian motion cannot be rejected as a model for price dynamics.
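
A rescaled-range estimate of the Hurst exponent can be sketched as follows; as the abstract notes, finite samples of independent Gaussian increments typically yield estimates somewhat above the theoretical value of 1/2. This is a minimal R/S implementation, not the authors' corrected significance test.

```python
import numpy as np

def rescaled_range(window):
    """R/S of one window: range of the cumulative mean-adjusted sum,
    divided by the window's standard deviation."""
    z = window - window.mean()
    cum = np.cumsum(z)
    return (cum.max() - cum.min()) / window.std(ddof=0)

def hurst_rs(series, window_sizes):
    """Hurst estimate: slope of log(mean R/S) versus log(window size)."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        k = len(series) // n
        rs = [rescaled_range(series[i * n:(i + 1) * n]) for i in range(k)]
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)
    return slope

rng = np.random.default_rng(1)
increments = rng.standard_normal(4096)  # independent: theoretical H = 1/2
h = hurst_rs(increments, [16, 32, 64, 128, 256])
```

A formal test must therefore compare the estimate against the finite-sample null distribution rather than against 1/2 itself.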

  12. The Generalized Higher Criticism for Testing SNP-Set Effects in Genetic Association Studies

    PubMed Central

    Barnett, Ian; Mukherjee, Rajarshi; Lin, Xihong

    2017-01-01

    It is of substantial interest to study the effects of genes, genetic pathways, and networks on the risk of complex diseases. These genetic constructs each contain multiple SNPs, which are often correlated and function jointly, and might be large in number. However, only a sparse subset of SNPs in a genetic construct is generally associated with the disease of interest. In this article, we propose the generalized higher criticism (GHC) to test for the association between an SNP set and a disease outcome. The higher criticism is a test traditionally used in high-dimensional signal detection settings when marginal test statistics are independent and the number of parameters is very large. However, these assumptions do not always hold in genetic association studies, due to linkage disequilibrium among SNPs and the finite number of SNPs in each genetic construct. The proposed GHC overcomes the limitations of the higher criticism by allowing for arbitrary correlation structures among the SNPs in an SNP set, while performing accurate analytic p-value calculations for any finite number of SNPs in the set. We obtain the detection boundary of the GHC test. Using simulations, we empirically compared the power of the GHC method with that of existing SNP-set tests over a range of genetic regions with varied correlation structures and signal sparsity. We apply the proposed methods to analyze the CGEM breast cancer genome-wide association study. Supplementary materials for this article are available online. PMID:28736464
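
The classical independence-case higher criticism that GHC generalizes can be sketched directly from sorted p-values. This is the textbook HC statistic, not the GHC's correlation-adjusted p-value machinery.

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """Classical HC: maximum standardized gap between the empirical CDF of
    the p-values and the uniform CDF, over the smallest alpha0 fraction."""
    p = np.sort(np.asarray(pvalues, float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    keep = (p > 1.0 / n) & (i <= alpha0 * n)  # common truncation of extremes
    return hc[keep].max()

rng = np.random.default_rng(0)
null_p = rng.uniform(size=2000)      # global null: uniform p-values
signal_p = null_p.copy()
signal_p[:40] = 1e-3                 # sparse set of weak signals
hc_null = higher_criticism(null_p)
hc_signal = higher_criticism(signal_p)
```

Under the null, HC stays small (it grows only like the square root of log log n), while even a sparse set of moderately small p-values makes it large.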

  13. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    USGS Publications Warehouse

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
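
The two-component split at the heart of this procedure can be sketched with a simple one-dimensional EM fit. This is a stand-in illustration only: the paper estimates its Bayesian finite mixture with Hamiltonian Monte Carlo, and the geochemical data are multivariate.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns weights, means, std devs, and hard cluster labels."""
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75]).astype(float)   # spread-out initial means
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma, resp.argmax(axis=1)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 300)])
w, mu, sigma, labels = em_two_gaussians(x)
```

Each resulting cluster can then be split again with the same routine, producing the hierarchy of clusters described above.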

  14. TOMS and SBUV Data: Comparison to 3D Chemical-Transport Model Results

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Douglass, Anne R.; Steenrod, Steve; Frith, Stacey

    2003-01-01

    We have updated our merged ozone data (MOD) set using the TOMS data from the new version 8 algorithm. We then analyzed these data for contributions from solar cycle, volcanoes, QBO, and halogens using a standard statistical time series model. We have recently completed a hindcast run of our 3D chemical-transport model for the same years. This model uses off-line winds from the finite-volume GCM, a full stratospheric photochemistry package, and time-varying forcing due to halogens, solar uv, and volcanic aerosols. We will report on a parallel analysis of these model results using the same statistical time series technique as used for the MOD data.

  15. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    PubMed

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Although earlier work on modelling transcript abundance, from vertebrates to lower eukaryotes, has specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while the power laws of critical phenomena are derived asymptotically under the assumption of infinite observations, real-world observations are finite; finite-size effects force a power-law distribution into an exponential decay, which manifests as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf-like can be conceptualized as the imperfect rendition of the Pareto power-law distribution under real-world finite-size effects; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern NGS (next-generation sequencing) datasets: an in-house dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. First, the finite-size effects cause sequencing count data to deviate from Zipf's law and create reproducibility issues in sequencing experiments. Second, they manifest as heteroskedasticity among experimental replicates, bringing statistical woes.
Surprisingly, a straightforward power-law correction that restores the distorted distribution to a single exponent value can dramatically reduce data heteroskedasticity, yielding an immediate increase in signal-to-noise ratio of 50% and in statistical/detection sensitivity of as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. At a higher sequencing depth (a four-fold difference), the improvement in concordance is asymmetrical (32% for the higher-depth instance versus 13% for the lower), demonstrating that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically sharpens the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line relative to AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues, but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of its scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
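
As a minimal illustration of estimating a power-law exponent from finite samples, the standard continuous maximum-likelihood estimator can be sketched as follows. This is not the paper's correction procedure, only the textbook estimator it builds on conceptually.

```python
import numpy as np

def powerlaw_mle_alpha(x, xmin):
    """Continuous (Hill-type) MLE for the exponent alpha of
    p(x) ~ x^(-alpha), valid for x >= xmin."""
    x = np.asarray(x, float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

# Exact power-law samples via inverse-transform sampling:
# survival S(x) = x^-(alpha-1)  =>  x = u^(-1/(alpha-1)), u ~ Uniform(0, 1)
rng = np.random.default_rng(0)
alpha_true = 2.0
u = rng.uniform(size=50_000)
x = u ** (-1.0 / (alpha_true - 1.0))
alpha_big = powerlaw_mle_alpha(x, xmin=1.0)        # large sample: accurate
alpha_small = powerlaw_mle_alpha(x[:50], xmin=1.0)  # small sample: noisy
```

With 50,000 samples the estimate is tight (the standard error scales as (alpha - 1)/sqrt(n)); with only 50 it is much noisier, a simple reminder that finite sampling distorts apparent power-law behavior.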

  16. Adaptive disturbance compensation finite control set optimal control for PMSM systems based on sliding mode extended state observer

    NASA Astrophysics Data System (ADS)

    Wu, Yun-jie; Li, Guo-fei

    2018-01-01

    Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance compensation finite control set optimal control (FCS-OC) strategy is proposed for a permanent magnet synchronous motor (PMSM) system driven by a voltage source inverter (VSI). To improve the robustness of the finite control set optimal control strategy, an SMESO is proposed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis shows that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance compensation FCS-OC possesses better dynamic response behavior in the presence of disturbance.

  17. Identifiability of PBPK Models with Applications to ...

    EPA Pesticide Factsheets

    Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss different types of identifiability that occur in PBPK models and give reasons why they occur. We particularly focus on how the mathematical structure of a PBPK model and lack of appropriate data can lead to statistical models in which it is impossible to estimate at least some parameters precisely. Methods are reviewed which can determine whether a purely linear PBPK model is globally identifiable. We propose a theorem which determines when identifiability at a set of finite and specific values of the mathematical PBPK model (global discrete identifiability) implies identifiability of the statistical model. However, we are unable to establish conditions that imply global discrete identifiability, and conclude that the only safe approach to analysis of PBPK models involves Bayesian analysis with truncated priors. Finally, computational issues regarding posterior simulations of PBPK models are discussed. The methodology is very general and can be applied to numerous PBPK models which can be expressed as linear time-invariant systems. A real data set of a PBPK model for exposure to dimethyl arsinic acid (DMA(V)) is presented to illustrate the proposed methodology.

  18. Compressing random microstructures via stochastic Wang tilings.

    PubMed

    Novák, Jan; Kučerová, Anna; Zeman, Jan

    2012-10-01

    This Rapid Communication presents a stochastic Wang tiling-based technique to compress or reconstruct disordered microstructures on the basis of given spatial statistics. Unlike the existing approaches based on a single unit cell, it utilizes a finite set of tiles assembled by a stochastic tiling algorithm, thereby allowing to accurately reproduce long-range orientation orders in a computationally efficient manner. Although the basic features of the method are demonstrated for a two-dimensional particulate suspension, the present framework is fully extensible to generic multidimensional media.

  19. Average dynamics of a finite set of coupled phase oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dima, Germán C., E-mail: gdima@df.uba.ar; Mindlin, Gabriel B.

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compared their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of its infinitely large surrogate.

  20. Average dynamics of a finite set of coupled phase oscillators

    PubMed Central

    Dima, Germán C.; Mindlin, Gabriel B.

    2014-01-01

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compared their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of its infinitely large surrogate. PMID:24985426

  1. Average dynamics of a finite set of coupled phase oscillators.

    PubMed

    Dima, Germán C; Mindlin, Gabriel B

    2014-06-01

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compared their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of its infinitely large surrogate.

  2. Finite-data-size study on practical universal blind quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Li, Qiong

    2018-07-01

    The universal blind quantum computation with weak coherent pulses protocol is a practical scheme that allows a client to delegate a computation to a remote server while keeping the computation hidden. However, in the practical protocol, a finite data size will influence the preparation efficiency in the remote blind qubit state preparation (RBSP). In this paper, a modified RBSP protocol with two decoy states is studied at finite data size, and the issue of statistical fluctuations is analyzed thoroughly. The theoretical analysis and simulation results show that the two-decoy-state case with statistical fluctuation is closer to the asymptotic case than the one-decoy-state case. In particular, the two-decoy-state protocol can achieve a longer communication distance than the one-decoy-state case in this statistical-fluctuation situation.

  3. Bell Correlations in a Many-Body System with Finite Statistics

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Schmied, Roman; Fadel, Matteo; Treutlein, Philipp; Sangouard, Nicolas; Bancal, Jean-Daniel

    2017-10-01

    A recent experiment reported the first violation of a Bell correlation witness in a many-body system [Science 352, 441 (2016)]. Following discussions in this Letter, we address here the question of the statistics required to witness Bell correlated states, i.e., states violating a Bell inequality, in such experiments. We start by deriving multipartite Bell inequalities involving an arbitrary number of measurement settings, two outcomes per party and one- and two-body correlators only. Based on these inequalities, we then build up improved witnesses able to detect Bell correlated states in many-body systems using two collective measurements only. These witnesses can potentially detect Bell correlations in states with an arbitrarily low amount of spin squeezing. We then establish an upper bound on the statistics needed to convincingly conclude that a measured state is Bell correlated.

  4. Bell Correlations in a Many-Body System with Finite Statistics.

    PubMed

    Wagner, Sebastian; Schmied, Roman; Fadel, Matteo; Treutlein, Philipp; Sangouard, Nicolas; Bancal, Jean-Daniel

    2017-10-27

    A recent experiment reported the first violation of a Bell correlation witness in a many-body system [Science 352, 441 (2016)]. Following discussions in this Letter, we address here the question of the statistics required to witness Bell correlated states, i.e., states violating a Bell inequality, in such experiments. We start by deriving multipartite Bell inequalities involving an arbitrary number of measurement settings, two outcomes per party and one- and two-body correlators only. Based on these inequalities, we then build up improved witnesses able to detect Bell correlated states in many-body systems using two collective measurements only. These witnesses can potentially detect Bell correlations in states with an arbitrarily low amount of spin squeezing. We then establish an upper bound on the statistics needed to convincingly conclude that a measured state is Bell correlated.

  5. Biomechanical influence of crown-to-implant ratio on stress distribution over internal hexagon short implant: 3-D finite element analysis with statistical test.

    PubMed

    Ramos Verri, Fellippo; Santiago Junior, Joel Ferreira; de Faria Almeida, Daniel Augusto; de Oliveira, Guilherme Bérgamo Brandão; de Souza Batista, Victor Eduardo; Marques Honório, Heitor; Noritomi, Pedro Yoshito; Pellizzer, Eduardo Piza

    2015-01-02

    The study of short implants is relevant to the biomechanics of dental implants, and research on crown height increase has implications for daily clinical practice. The aim of this study was to analyze the biomechanical interactions of a single implant-supported prosthesis with different crown heights under vertical and oblique forces, using the 3-D finite element method. Six 3-D models were designed with Invesalius 3.0, Rhinoceros 3D 4.0, and Solidworks 2010 software. Each model was constructed with a mandibular segment of bone block including an implant supporting a screwed metal-ceramic crown. The crown height was set at 10, 12.5, and 15 mm. The applied force was 200 N (axial) and 100 N (oblique). ANOVA and Tukey tests were performed; p<0.05 was considered statistically significant. The increase in crown height did not influence the stress distribution on the prosthetic screw (p>0.05) under axial load. However, crown heights of 12.5 and 15 mm significantly worsened the stress distribution in the screws and the cortical bone (p<0.001) under oblique load. A high crown-to-implant (C/I) ratio adversely affected the microstrain distribution in bone tissue under axial and oblique loads (p<0.001). Increased crown height was a potentially deleterious factor for the screws and for the different regions of bone tissue. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.

  7. 75 FR 48815 - Medicaid Program and Children's Health Insurance Program (CHIP); Revisions to the Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...
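
The finite population correction mentioned in the notice is commonly implemented as n = n0 / (1 + (n0 - 1)/N), where n0 is the sample size computed for an effectively infinite population and N is the actual population size. This is a generic sketch of the standard formula; the notice's exact formula is not reproduced in the excerpt.

```python
def fpc_sample_size(n0, population):
    """Reduce an infinite-population sample size n0 for a finite population N:
    n = n0 / (1 + (n0 - 1) / N)."""
    return n0 / (1.0 + (n0 - 1.0) / population)

# e.g. n0 = 385 (95% confidence, +/-5% margin) for a population of 2,000
adjusted = fpc_sample_size(385, 2000)
```

The adjustment matters most for small populations; as N grows, the adjusted sample size approaches n0.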

  8. Development of Composite Materials with High Passive Damping Properties

    DTIC Science & Technology

    2006-05-15

    Sound transmission through sandwich panels was studied using frequency response function analysis and statistical energy analysis (SEA). Excerpted section headings include 2.2.3 Finite element models, 2.2.4 Statistical energy analysis method, and Chapter 3, Analysis of Damping in Sandwich Materials. From Section 2.2.4: finite element models are generally only efficient for problems at low and middle frequencies.

  9. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
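
The core of a landmark-based statistical shape model is a principal component analysis of aligned landmark coordinates; a minimal numpy sketch follows. The study's automated landmark identification and finite element meshing are not reproduced here.

```python
import numpy as np

def build_shape_model(landmarks):
    """PCA shape model. landmarks: (n_subjects, n_coords) of aligned
    landmark coordinates. Returns mean shape, mode vectors, mode std devs."""
    mean = landmarks.mean(axis=0)
    u, s, modes = np.linalg.svd(landmarks - mean, full_matrices=False)
    stddev = s / np.sqrt(len(landmarks) - 1)  # per-mode standard deviation
    return mean, modes, stddev

def synthesize(mean, modes, stddev, weights):
    """Virtual subject: mean + sum_k weights[k] * stddev[k] * modes[k]."""
    k = len(weights)
    return mean + (np.asarray(weights) * stddev[:k]) @ modes[:k]

# Toy population: 10 coordinates varying along a single known direction
rng = np.random.default_rng(0)
direction = np.zeros(10)
direction[0] = 1.0
data = 5.0 + rng.normal(size=(50, 1)) * direction
mean, modes, stddev = build_shape_model(data)
virtual = synthesize(mean, modes, stddev, [3.0])  # a "+3 SD" shape instance
```

Sampling the mode weights within plus or minus three standard deviations, as in the paper, generates a virtual population of shapes.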

  10. Continuous family of finite-dimensional representations of a solvable Lie algebra arising from singularities

    PubMed Central

    Yau, Stephen S.-T.

    1983-01-01

    A natural mapping from the set of complex analytic isolated hypersurface singularities to the set of finite-dimensional Lie algebras is first defined. It is proven that the image under this natural mapping is contained in the set of solvable Lie algebras. This approach gives rise to a continuous family of inequivalent finite-dimensional representations of a solvable Lie algebra. PMID:16593401

  11. Realistic finite temperature simulations of magnetic systems using quantum statistics

    NASA Astrophysics Data System (ADS)

    Bergqvist, Lars; Bergman, Anders

    2018-01-01

    We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared with the classical (Boltzmann) statistics normally used in these kinds of simulations, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding the chemical and magnetic order of the considered materials. This is demonstrated by applying it to elemental ferromagnetic systems, including Fe and Ni, as well as to Fe-Co random alloys and the ferrimagnetic system GdFe3.
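
The key statistical ingredient, replacing classical occupation of magnon modes by Bose-Einstein occupation, can be illustrated numerically. This is a schematic comparison with energies in kelvin-equivalent units, not the authors' full rescaling scheme.

```python
import numpy as np

def bose_einstein(energy, temperature):
    """Bose-Einstein occupation 1/(exp(E/kT) - 1), with E in units of k_B."""
    return 1.0 / np.expm1(energy / temperature)

def classical_occupation(energy, temperature):
    """Classical (Rayleigh-Jeans) limit kT/E, recovered at high temperature."""
    return temperature / energy

E = 10.0  # magnon energy, kelvin-equivalent (illustrative value)
n_low_q, n_low_c = bose_einstein(E, 2.0), classical_occupation(E, 2.0)
n_high_q, n_high_c = bose_einstein(E, 500.0), classical_occupation(E, 500.0)
```

At low temperature the quantum occupation is exponentially suppressed relative to the classical value, which is exactly the regime where the classical description fails; at high temperature the two agree, recovering classical statistics.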

  12. A Wave Chaotic Study of Quantum Graphs with Microwave Networks

    NASA Astrophysics Data System (ADS)

    Fu, Ziyuan

    Quantum graphs provide a setting in which to test the hypothesis that all ray-chaotic systems show universal wave-chaotic properties. I study quantum graphs using a wave-chaotic approach, with an experimental setup consisting of a microwave coaxial cable network used to simulate quantum graphs. Basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed, in that the statistics of diagonal and off-diagonal impedance elements differ. Waves trapped by multiple reflections on bonds between nodes of the graph most likely cause the deviations from universal behavior in this finite-size realization of a quantum graph. In addition, I present investigations of the random coupling model that are useful for further research.

  13. Avalanches, loading and finite size effects in 2D amorphous plasticity: results from a finite element model

    NASA Astrophysics Data System (ADS)

    Sandfeld, Stefan; Budrikis, Zoe; Zapperi, Stefano; Fernandez Castellanos, David

    2015-02-01

    Crystalline plasticity is strongly interlinked with dislocation mechanics and nowadays is relatively well understood. Concepts and physical models of plastic deformation in amorphous materials on the other hand—where the concept of linear lattice defects is not applicable—still are lagging behind. We introduce an eigenstrain-based finite element lattice model for simulations of shear band formation and strain avalanches. Our model allows us to study the influence of surfaces and finite size effects on the statistics of avalanches. We find that even with relatively complex loading conditions and open boundary conditions, critical exponents describing avalanche statistics are unchanged, which validates the use of simpler scalar lattice-based models to study these phenomena.

  14. A Finite-Volume "Shaving" Method for Interfacing NASA/DAO's Physical Space Statistical Analysis System to the Finite-Volume GCM with a Lagrangian Control-Volume Vertical Coordinate

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; DaSilva, Arlindo; Atlas, Robert (Technical Monitor)

    2001-01-01

    Toward the development of a finite-volume Data Assimilation System (fvDAS), a consistent finite-volume methodology is developed for interfacing the NASA/DAO's Physical Space Statistical Analysis System (PSAS) to the joint NASA/NCAR finite volume CCM3 (fvCCM3). To take advantage of the Lagrangian control-volume vertical coordinate of the fvCCM3, a novel "shaving" method is applied to the lowest few model layers to reflect the surface pressure changes as implied by the final analysis. Analysis increments (from PSAS) to the upper air variables are then consistently put onto the Lagrangian layers as adjustments to the volume-mean quantities during the analysis cycle. This approach is demonstrated to be superior to the conventional method of using independently computed "tendency terms" for surface pressure and upper air prognostic variables.

  15. Quantum Adiabatic Optimization and Combinatorial Landscapes

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Knysh, S.; Morris, R. D.

    2003-01-01

    In this paper we analyze the performance of the Quantum Adiabatic Evolution (QAE) algorithm on a variant of the Satisfiability problem for an ensemble of random graphs parametrized by the ratio of clauses to variables, gamma = M / N. We introduce a set of macroscopic parameters (landscapes) and put forward an ansatz of universality for random bit flips. We then formulate the problem of finding the smallest eigenvalue and the excitation gap as a statistical mechanics problem. We use the so-called annealing approximation, with the refinement that a finite set of macroscopic variables (versus only the energy) is used, and are able to show the existence of a dynamic threshold gamma = gamma_d, beyond which QAE should take an exponentially long time to find a solution. We compare the results for extended and simplified sets of landscapes and provide numerical evidence in support of our universality ansatz.

  16. Determination of apparent coupling factors for adhesive bonded acrylic plates using SEAL approach

    NASA Astrophysics Data System (ADS)

    Pankaj, Achuthan. C.; Shivaprasad, M. V.; Murigendrappa, S. M.

    2018-04-01

    Apparent coupling loss factors (CLF) and velocity responses have been computed for two lap-joined adhesive bonded plates using a finite element and experimental statistical energy analysis like (SEAL) approach. A finite element model of the plates has been created using ANSYS software. The statistical energy parameters have been computed using the velocity responses obtained from a harmonic forced-excitation analysis. Experiments have been carried out for two different cases of adhesive bonded joints, and the results have been compared with the apparent coupling factors and velocity responses obtained from finite element analysis. The results signify the importance of modeling adhesive bonded joints in the computation of apparent coupling factors, and of their further use in computing energies and velocity responses with a statistical energy analysis like approach.

  17. Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain

    NASA Astrophysics Data System (ADS)

    Žnidarič, Marko

    2014-01-01

    We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of the free energy to a nonequilibrium setting, we obtain the complete distribution of the current, including closed expressions for the lower-order cumulants. We also identify two phase-transition-like behaviors: in the thermodynamic limit, at which the current probability distribution becomes discontinuous, and at maximal driving, when the range of possible current values changes discontinuously. In the thermodynamic limit the current has finite upper and lower bounds. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is invariant under the mapping of the coupling strength Γ→1/Γ.

  18. Effects of a finite outer scale on the measurement of atmospheric-turbulence statistics with a Hartmann wave-front sensor.

    PubMed

    Feng, Shen; Wenhan, Jiang

    2002-06-10

    Phase-structure functions and aperture-averaged slope correlation functions with a finite outer scale are derived based on the Taylor hypothesis and a generalized spectrum, such as the von Kármán model. The effects of the finite outer scale on measuring and characterizing atmospheric-turbulence statistics are shown, especially for an approximately 4-m class telescope and subaperture. The phase structure function and atmospheric coherence length based on the Kolmogorov model are approximations of the formalism we have derived. The analysis shows that it cannot be determined whether a deviation from the power-law exponent of Kolmogorov turbulence is caused by real variations of the spectrum or by the effect of the finite outer scale.

  19. Finite Element Analysis of Reverberation Chambers

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.; Nguyen, Duc T.

    2000-01-01

    The primary motivating factor behind the initiation of this work was to provide a deterministic means of establishing the validity of the statistical methods that are recommended for the determination of fields that interact in an avionics system. The application of finite element analysis to reverberation chambers is the initial step required to establish a reasonable course of inquiry in this particularly data-intensive study. The use of computational electromagnetics provides a high degree of control of the "experimental" parameters that can be utilized in a simulation of reverberating structures. As the work evolved, there were four primary focus areas: 1. The eigenvalue problem for the source-free case. 2. The development of an efficient complex eigensolver. 3. The application of a source for the TE and TM fields for statistical characterization. 4. The examination of shielding effectiveness in a reverberating environment. One early purpose of this work was to establish the utility of finite element techniques in the development of an extended low-frequency statistical model for reverberation phenomena. By employing finite element techniques, structures of arbitrary complexity can be analyzed due to the use of triangular shape functions in the spatial discretization. The effects of both frequency stirring and mechanical stirring are presented. It is suggested that for low-frequency operation the typical tuner size is inadequate to provide a sufficiently random field and that frequency stirring should be used. The results of the finite element analysis of the reverberation chamber illustrate the potential utility of a 2D representation for enhancing the basic statistical characteristics of the chamber when operating in a low-frequency regime. The basic field statistics are verified for frequency stirring over a wide range of frequencies. Mechanical stirring is shown to provide an effective frequency deviation.

  20. The Transition from Comparison of Finite to the Comparison of Infinite Sets: Teaching Prospective Teachers.

    ERIC Educational Resources Information Center

    Tsamir, Pessia

    1999-01-01

    Describes a course in Cantorian Set Theory relating to prospective secondary mathematics teachers' tendencies to overgeneralize from finite to infinite sets. Indicates that when comparing the number of elements in infinite sets, teachers who took the course were more successful and more consistent in their use of single method than those who…

  1. Hybrid phase transition into an absorbing state: Percolation and avalanches

    NASA Astrophysics Data System (ADS)

    Lee, Deokjae; Choi, S.; Stippinger, M.; Kertész, J.; Kahng, B.

    2016-04-01

    Interdependent networks are more fragile under random attacks than simplex networks, because interlayer dependencies lead to cascading failures and finally to a sudden collapse. This is a hybrid phase transition (HPT), meaning that at the transition point the order parameter has a jump but there are also critical phenomena related to it. Here we study these phenomena on the Erdős-Rényi and the two-dimensional interdependent networks and show that the hybrid percolation transition exhibits two kinds of critical behaviors: divergence of the fluctuations of the order parameter and a power-law size distribution of finite avalanches at the transition point. At the transition point global or "infinite" avalanches occur, while the finite ones have a power-law size distribution; thus the avalanche statistics also have the nature of an HPT. The exponent β_m of the order parameter is 1/2 under general conditions, while the value of the exponent γ_m characterizing the fluctuations of the order parameter depends on the system. The critical behavior of the finite avalanches can be described by another set of exponents, β_a and γ_a. These two critical behaviors are coupled by the scaling law 1 - β_m = γ_a.
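    Avalanche exponents such as these are typically estimated from sampled avalanche sizes. A minimal sketch using the standard continuous maximum-likelihood (Hill-type) estimator; the synthetic data below merely stands in for measured avalanche sizes and is not from the paper:

```python
import math
import random

def sample_powerlaw(alpha, xmin, n, rng):
    # Inverse-CDF sampling from the tail P(X > x) = (x / xmin)^(1 - alpha)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, xmin):
    # Continuous maximum-likelihood (Hill-type) estimator of the power-law exponent
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

rng = random.Random(0)
sizes = sample_powerlaw(2.5, 1.0, 20000, rng)   # stand-in for avalanche sizes
alpha_hat = mle_exponent(sizes, 1.0)
print(round(alpha_hat, 2))   # should land close to the true exponent 2.5
```

For genuinely discrete avalanche sizes, or data with a finite-size cutoff, a discrete MLE and cutoff handling would be needed; this is the simplest continuous version.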

  2. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  3. Renyi Entropy of the Ideal Gas in Finite Momentum Intervals

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.

    2003-06-01

    Coincidence probabilities of multiparticle events, as measured in finite momentum intervals for Bose and Fermi ideal gases, are calculated and compared with the exact expressions given in statistical physics.
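    The coincidence probability behind such measurements has a simple discrete form: for outcome probabilities p_i it is the sum of p_i squared, and the second-order Rényi entropy is its negative logarithm. A small illustration with invented momentum-cell occupations (not data from the paper):

```python
import math

def coincidence_probability(p):
    # Probability that two independent draws from p give the same outcome
    return sum(pi * pi for pi in p)

def renyi2_entropy(p):
    # Second-order Renyi entropy: R2 = -ln(sum_i p_i^2)
    return -math.log(coincidence_probability(p))

uniform = [1.0 / 8] * 8        # flat occupation of 8 momentum cells
peaked = [0.65] + [0.05] * 7   # bunched occupation (probabilities sum to 1)
print(renyi2_entropy(uniform))  # equals ln 8, the maximum for 8 cells
print(renyi2_entropy(peaked))   # strictly smaller: less randomness
```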

  4. Investigation of the Statistics of Pure Tone Sound Power Injection from Low Frequency, Finite Sized Sources in a Reverberant Room

    NASA Technical Reports Server (NTRS)

    Smith, Wayne Farrior

    1973-01-01

    The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.

  5. First passage times in homogeneous nucleation: Dependence on the total number of particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yvinec, Romain; Bernard, Samuel; Pujo-Menjouet, Laurent

    2016-01-21

    Motivated by nucleation and molecular aggregation in physical, chemical, and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of times required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the first assembly time behavior as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may have a surprisingly weak dependence on the total number of particles. We highlight how higher statistics (variance, distribution) of the first passage time may nevertheless help to infer key parameters, such as the size of the maximum cluster. Finally, we present a framework to quantify formation of macroscopic sized clusters, which are (asymptotically) very unlikely and occur as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherent infrequent stochastic processes, in contrast to classical nucleation theory.

  6. First passage times in homogeneous nucleation: Dependence on the total number of particles

    NASA Astrophysics Data System (ADS)

    Yvinec, Romain; Bernard, Samuel; Hingant, Erwan; Pujo-Menjouet, Laurent

    2016-01-01

    Motivated by nucleation and molecular aggregation in physical, chemical, and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of times required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the first assembly time behavior as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may have a surprisingly weak dependence on the total number of particles. We highlight how higher statistics (variance, distribution) of the first passage time may nevertheless help to infer key parameters, such as the size of the maximum cluster. Finally, we present a framework to quantify formation of macroscopic sized clusters, which are (asymptotically) very unlikely and occur as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherent infrequent stochastic processes, in contrast to classical nucleation theory.

  7. Finite Optimal Stopping Problems: The Seller's Perspective

    ERIC Educational Resources Information Center

    Hemmati, Mehdi; Smith, J. Cole

    2011-01-01

    We consider a version of an optimal stopping problem, in which a customer is presented with a finite set of items, one by one. The customer is aware of the number of items in the finite set and the minimum and maximum possible value of each item, and must purchase exactly one item. When an item is presented to the customer, she or he observes its…
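    Finite-horizon stopping rules of this kind are computed by backward induction. A toy sketch assuming item values drawn i.i.d. from Uniform(0, 1) and a customer who minimizes the expected price paid; the paper's exact setup (known minimum and maximum values per item) may differ:

```python
def continuation_values(n):
    # v_k = expected price paid with k+1 items still to come, values i.i.d.
    # Uniform(0, 1); optimal rule: buy iff the current value beats continuing.
    v = 0.5                      # forced to take the last item: E[X] = 1/2
    values = [v]
    for _ in range(n - 1):
        # E[min(X, v)] for X ~ Uniform(0, 1) equals v - v^2 / 2
        v = v - v * v / 2.0
        values.append(v)
    return values                # values[k] = expected cost with k+1 items left

vals = continuation_values(5)
print([round(v, 4) for v in vals])  # decreasing: more items left, lower expected price
```

The recursion is the standard dynamic-programming step: accept the current item exactly when its value is below the expected cost of continuing.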

  8. Condensation and critical exponents of an ideal non-Abelian gas

    NASA Astrophysics Data System (ADS)

    Talaei, Zahra; Mirza, Behrouz; Mohammadzadeh, Hosein

    2017-11-01

    We investigate an ideal gas obeying non-Abelian statistics and derive expressions for some of its thermodynamic quantities. It is found that the thermodynamic quantities are finite at the condensation point, where their derivatives diverge, and, near this point, they behave as |T - T_c|^{-ρ}, in which T_c denotes the condensation temperature and ρ is a critical exponent. The critical exponents related to the heat capacity and compressibility are obtained by fitting numerical results, and the others are obtained using the scaling-law hypothesis for a three-dimensional non-Abelian ideal gas. This set of critical exponents introduces a new universality class.
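    Critical exponents of the form |T - T_c|^{-ρ} are commonly extracted by a least-squares fit of log y against log |T - T_c|. A minimal sketch on synthetic data; the amplitude, T_c, and ρ below are invented for the example:

```python
import math

def fit_exponent(ts, ys, tc):
    # Least-squares slope of log y versus log |T - Tc|;
    # for y ~ A |T - Tc|^(-rho) the slope is -rho.
    xs = [math.log(abs(t - tc)) for t in ts]
    ls = [math.log(y) for y in ys]
    n = len(xs)
    mx, ml = sum(xs) / n, sum(ls) / n
    num = sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

tc, rho, amp = 2.0, 0.5, 1.3
ts = [tc + 0.01 * k for k in range(1, 20)]
ys = [amp * abs(t - tc) ** (-rho) for t in ts]
print(round(-fit_exponent(ts, ys, tc), 6))  # recovers rho = 0.5
```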

  9. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of the closed loop behavior: one is eventually periodic, the other is chaotic. The first example exhibits chaotic behavior that could be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete time systems where the controller can choose from a set of transition maps to implement.

  10. Experimental Determination of Dynamical Lee-Yang Zeros

    NASA Astrophysics Data System (ADS)

    Brandner, Kay; Maisi, Ville F.; Pekola, Jukka P.; Garrahan, Juan P.; Flindt, Christian

    2017-05-01

    Statistical physics provides the concepts and methods to explain the phase behavior of interacting many-body systems. Investigations of Lee-Yang zeros—complex singularities of the free energy in systems of finite size—have led to a unified understanding of equilibrium phase transitions. The ideas of Lee and Yang, however, are not restricted to equilibrium phenomena. Recently, Lee-Yang zeros have been used to characterize nonequilibrium processes such as dynamical phase transitions in quantum systems after a quench or dynamic order-disorder transitions in glasses. Here, we experimentally realize a scheme for determining Lee-Yang zeros in such nonequilibrium settings. We extract the dynamical Lee-Yang zeros of a stochastic process involving Andreev tunneling between a normal-state island and two superconducting leads from measurements of the dynamical activity along a trajectory. From the short-time behavior of the Lee-Yang zeros, we predict the large-deviation statistics of the activity which is typically difficult to measure. Our method paves the way for further experiments on the statistical mechanics of many-body systems out of equilibrium.

  11. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  12. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample conformed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty on this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. 
We apply our robust methodology to estimate a model representing the main law of each language. Our findings agree with the linguistic conjecture concerning the rhythm of the languages included in our dataset.
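    The sample-selection step, keeping the majority of samples that are mutually close in relative entropy, can be sketched with i.i.d. symbol distributions standing in for variable length Markov chains; the alphabet, laws, and sample sizes below are invented for illustration:

```python
import math
import random
from collections import Counter

def empirical_dist(sample, alphabet):
    # Empirical symbol distribution with Laplace smoothing (keeps KL finite)
    c = Counter(sample)
    n = len(sample)
    return {a: (c[a] + 1) / (n + len(alphabet)) for a in alphabet}

def kl(p, q):
    # Kullback-Leibler divergence between distributions on the same alphabet
    return sum(p[a] * math.log(p[a] / q[a]) for a in p)

def majority_subset(samples, alphabet):
    # Keep the samples with the smallest summed symmetrized divergence to all
    # others: these are taken as realizations of the majority law Q.
    dists = [empirical_dist(s, alphabet) for s in samples]
    scores = [sum(kl(p, q) + kl(q, p) for q in dists) for p in dists]
    order = sorted(range(len(samples)), key=lambda i: scores[i])
    return set(order[: len(samples) // 2 + 1])

rng = random.Random(1)
alphabet = "ab"
majority = [[rng.choice("aab") for _ in range(500)] for _ in range(4)]  # law Q: P(a)=2/3
contam = [[rng.choice("abb") for _ in range(500)] for _ in range(2)]    # contaminated
chosen = majority_subset(majority + contam, alphabet)
print(sorted(chosen))   # the four majority samples should be the ones selected
```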

  13. Bell-Kochen-Specker theorem for any finite dimension?

    NASA Astrophysics Data System (ADS)

    Cabello, Adán; García-Alcaine, Guillermo

    1996-03-01

    The Bell-Kochen-Specker theorem against non-contextual hidden variables can be proved by constructing a finite set of `totally non-colourable' directions, as Kochen and Specker did in a Hilbert space of dimension n = 3. We generalize Kochen and Specker's set to Hilbert spaces of any finite dimension n ≥ 3, in a three-step process that shows the relationship between different kinds of proofs (`continuum', `probabilistic', `state-specific' and `state-independent') of the Bell-Kochen-Specker theorem. At the same time, this construction of a totally non-colourable set of directions in any dimension explicitly solves the question raised by Zimba and Penrose about the existence of such a set for n = 5.

  14. Principles of Considering the Effect of the Limited Volume of a System on Its Thermodynamic State

    NASA Astrophysics Data System (ADS)

    Tovbin, Yu. K.

    2018-01-01

    The features of a system with a finite volume that affect its thermodynamic state are considered in comparison to describing small bodies in macroscopic phases. Equations for unary and pair distribution functions are obtained using difference derivatives of a discrete statistical sum. The structure of the equation for the free energy of a system consisting of an ensemble of volume-limited regions with different sizes and a full set of equations describing a macroscopic polydisperse system are discussed. It is found that the equations can be applied to molecular adsorption on small faces of microcrystals, to bound and isolated pores of a polydisperse material, and to the spinodal decomposition of a fluid at short times and high supersaturations of the bulk phase, when each local region behaves the same on average. It is shown that as the size of a system diminishes, corrections must be introduced for the finiteness of the system volume and for fluctuations of the unary and pair distribution functions.

  15. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  16. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
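    The core ESCM coupling step, FE boundary conditions computed as statistical averages of atomistic displacements over local nodal volumes, can be sketched in one dimension; the geometry, displacement field, noise level, and averaging radius below are all hypothetical:

```python
import random

def nodal_average_displacements(atom_pos, atom_disp, node_centers, radius):
    # ESCM-style coupling sketch: each FE interface node receives the mean
    # displacement of the atoms inside its local averaging volume
    # (here a 1-D interval of half-width `radius`; hypothetical geometry).
    bc = []
    for xc in node_centers:
        members = [d for x, d in zip(atom_pos, atom_disp) if abs(x - xc) <= radius]
        bc.append(sum(members) / len(members) if members else 0.0)
    return bc

rng = random.Random(0)
atoms = [i * 0.1 for i in range(100)]                   # atom x-coordinates
disp = [0.01 * x + rng.gauss(0, 0.002) for x in atoms]  # smooth field + thermal noise
nodes = [1.0, 3.0, 5.0, 7.0, 9.0]                       # FE interface node positions
bc = nodal_average_displacements(atoms, disp, nodes, radius=0.5)
print([round(v, 3) for v in bc])   # averaging suppresses the atomistic noise
```

In the actual method these averages update the FE boundary conditions iteratively; the sketch shows only the averaging that replaces one-to-one atom/node linkage.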

  17. On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs

    NASA Technical Reports Server (NTRS)

    Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
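    For scalar subsystem dynamics with a two-state Markov input, the conventional Markov jump-linear test reduces to checking the spectral radius of a small matrix built from the transition probabilities and squared mode gains; a sketch under that simplification (the paper's finite-state-machine setting is more general):

```python
import math

def ms_stable_scalar(a, P):
    # Mean-square stability test for x_{k+1} = a[theta_k] * x_k with a
    # two-state Markov chain (P[i][j] = transition probability i -> j):
    # stable iff the spectral radius of L, L[j][i] = P[i][j] * a[i]^2, is < 1,
    # since q_j(k+1) = sum_i P[i][j] a[i]^2 q_i(k) for q_i = E[x^2, theta=i].
    L = [[P[0][0] * a[0] ** 2, P[1][0] * a[1] ** 2],
         [P[0][1] * a[0] ** 2, P[1][1] * a[1] ** 2]]
    tr = L[0][0] + L[1][1]
    det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        r = max(abs((tr + math.sqrt(disc)) / 2), abs((tr - math.sqrt(disc)) / 2))
    else:
        r = math.sqrt(det)   # complex pair: |lambda| = sqrt(det)
    return r < 1.0

P = [[0.9, 0.1], [0.3, 0.7]]
print(ms_stable_scalar([0.5, 1.1], P))  # unstable mode visited rarely: stable
print(ms_stable_scalar([0.5, 3.0], P))  # strongly unstable mode: unstable
```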

  18. Statistical dynamo theory: Mode excitation.

    PubMed

    Hoyng, P

    2009-04-01

    We compute statistical properties of the lowest-order multipole coefficients of the magnetic field generated by a dynamo of arbitrary shape. To this end we expand the field in a complete biorthogonal set of base functions, viz. B = Σ_k a_k(t) b_k(r). The properties of these biorthogonal function sets are treated in detail. We consider a linear problem, and the statistical properties of the fluid flow are supposed to be given. The turbulent convection may have an arbitrary distribution of spatial scales. The time evolution of the expansion coefficients a_k is governed by a stochastic differential equation from which we infer their averages ⟨a_k⟩, autocorrelation functions ⟨a_k(t) a_k*(t+τ)⟩, and an equation for the cross correlations ⟨a_k a_l*⟩. The eigenfunctions of the dynamo equation (with eigenvalues λ_k) turn out to be a preferred set in terms of which our results assume their simplest form. The magnetic field of the dynamo is shown to consist of transiently excited eigenmodes whose frequency and coherence time are given by Im λ_k and -1/Re λ_k, respectively. The relative rms excitation level of the eigenmodes, and hence the distribution of magnetic energy over spatial scales, is determined by linear theory. An expression is derived for ⟨|a_k|²⟩/⟨|a_0|²⟩ in case the fundamental mode b_0 has a dominant amplitude, and we outline how this expression may be evaluated. It is estimated that ⟨|a_k|²⟩/⟨|a_0|²⟩ ≈ 1/N, where N is the number of convective cells in the dynamo. We show that the old problem of a short correlation time (or first-order smoothing approximation) has been partially eliminated. Finally we prove that for a simple statistically steady dynamo with finite resistivity all eigenvalues obey Re λ_k < 0.

  19. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of "training" settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max, and to a lesser extent the maximum tensile strength σ_n^max, govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  20. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given "training" set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of "training" settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max, and to a lesser extent the maximum tensile strength σ_n^max, govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
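    The emulator-based calibration loop can be caricatured in a few lines: fit a cheap surrogate to a handful of "training" simulator runs, then form a grid posterior for the input parameter given one observation. The toy simulator, noise level, and grid below are invented and vastly simpler than the FDEM/SHPB setting:

```python
import math

def simulator(theta):
    # Stand-in for an expensive code run (hypothetical toy physics)
    return 2.0 * theta + 1.0 + 0.1 * theta ** 2

def train_linear_emulator(thetas):
    # Least-squares linear surrogate fitted to the "training" runs
    ys = [simulator(t) for t in thetas]
    n = len(thetas)
    mt, my = sum(thetas) / n, sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(thetas, ys)) / \
        sum((t - mt) ** 2 for t in thetas)
    a = my - b * mt
    return lambda t: a + b * t

def calibrate(emulator, y_obs, sigma, grid):
    # Grid posterior over theta: flat prior, Gaussian measurement error
    w = [math.exp(-(y_obs - emulator(t)) ** 2 / (2 * sigma ** 2)) for t in grid]
    return sum(t * wi for t, wi in zip(grid, w)) / sum(w)   # posterior mean

emu = train_linear_emulator([0.0, 0.5, 1.0, 1.5, 2.0])
grid = [k * 0.01 for k in range(201)]
theta_hat = calibrate(emu, y_obs=simulator(1.2), sigma=0.05, grid=grid)
print(round(theta_hat, 2))  # near the "true" setting 1.2
```

A real application would use a Gaussian-process emulator and sample a joint posterior over several parameters; the structure of the loop is the same.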

  1. Universality in chaos: Lyapunov spectrum and random matrix theory.

    PubMed

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.

  2. Universality in chaos: Lyapunov spectrum and random matrix theory

    NASA Astrophysics Data System (ADS)

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
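
    The finite-time Lyapunov spectrum of a product of random matrices (the last setting mentioned above) is straightforward to estimate numerically. The following is a minimal illustrative sketch, not the authors' code; the matrix size, step count, and normalization are arbitrary choices:

```python
import numpy as np

def lyapunov_spectrum_random_product(n=8, steps=400, seed=0):
    """Estimate the finite-time Lyapunov spectrum of a product of
    i.i.d. Gaussian random matrices via repeated QR re-orthonormalization."""
    rng = np.random.default_rng(seed)
    Q = np.eye(n)
    sums = np.zeros(n)
    for _ in range(steps):
        M = rng.normal(size=(n, n)) / np.sqrt(n)  # one random factor
        Q, R = np.linalg.qr(M @ Q)                # re-orthonormalize the frame
        sums += np.log(np.abs(np.diag(R)))        # accumulate growth rates
    return np.sort(sums / steps)[::-1]            # exponents, largest first

spectrum = lyapunov_spectrum_random_product()
```

    Spacing statistics of neighboring exponents computed this way can then be compared against random-matrix predictions.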

  3. Local Structure Theory for Cellular Automata.

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard Andrew

    The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measures to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations.
Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
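
    The order-1 case of the LST is exactly the mean field theory, and it is compact enough to state in code. Below is a hedged sketch (an illustration, not Gutowitz's implementation) of the mean-field density map for an elementary (radius-1, binary) CA, iterated here for rule 22:

```python
def mean_field_map(rule, p):
    """One mean-field (order-1 local structure) iteration for an
    elementary CA: assuming no correlations between cells, a 3-cell
    neighborhood containing k ones occurs with probability
    p**k * (1 - p)**(3 - k)."""
    p_next = 0.0
    for nbhd in range(8):              # neighborhoods 000 .. 111
        if (rule >> nbhd) & 1:         # this neighborhood maps to state 1
            k = bin(nbhd).count("1")
            p_next += p ** k * (1 - p) ** (3 - k)
    return p_next

# Iterate to the mean-field fixed-point density for rule 22.
p = 0.3
for _ in range(200):
    p = mean_field_map(22, p)
```

    For rule 22 the map reduces to p → 3p(1 − p)², whose stable fixed point is 1 − 1/√3 ≈ 0.423; higher-order LST operators refine such estimates by tracking block probabilities rather than a single density.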

  4. A Statistics-Based Cracking Criterion of Resin-Bonded Silica Sand for Casting Process Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Huimin; Lu, Yan; Ripplinger, Keith; Detwiler, Duane; Luo, Alan A.

    2017-02-01

    Cracking of sand molds/cores can result in many casting defects such as veining. A robust cracking criterion is needed in casting process simulation for predicting/controlling such defects. A cracking probability map, relating to fracture stress and effective volume, was proposed for resin-bonded silica sand based on Weibull statistics. Three-point bending test results of sand samples were used to generate the cracking map and set up a safety line for cracking criterion. Tensile test results confirmed the accuracy of the safety line for cracking prediction. A laboratory casting experiment was designed and carried out to predict cracking of a cup mold during aluminum casting. The stress-strain behavior and the effective volume of the cup molds were calculated using a finite element analysis code ProCAST®. Furthermore, an energy dispersive spectroscopy fractographic examination of the sand samples confirmed the binder cracking in resin-bonded silica sand.
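
    The link between fracture stress and effective volume described above is the standard weakest-link (Weibull) scaling. A minimal sketch with illustrative parameters (σ₀, V₀, and the modulus m below are placeholders, not values fitted in the paper):

```python
import math

def weibull_failure_prob(sigma, volume, sigma0=5.0, v0=1.0, m=8.0):
    """Weakest-link cracking probability
    P_f = 1 - exp(-(V / V0) * (sigma / sigma0)**m):
    at a fixed stress, a larger effective volume is more likely to crack."""
    return 1.0 - math.exp(-(volume / v0) * (sigma / sigma0) ** m)

p_small = weibull_failure_prob(4.0, volume=1.0)   # small effective volume
p_large = weibull_failure_prob(4.0, volume=10.0)  # large effective volume
```

    A "safety line" like the one in the paper corresponds to a fixed P_f contour in the (effective volume, fracture stress) plane.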

  5. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342

  6. Decomposition of fuzzy soft sets with finite value spaces.

    PubMed

    Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.

  7. Chaotic and regular instantons in helical shell models of turbulence

    NASA Astrophysics Data System (ADS)

    De Pietro, Massimo; Mailybaev, Alexei A.; Biferale, Luca

    2017-03-01

    Shell models of turbulence have a finite-time blowup in the inviscid limit, i.e., the enstrophy diverges while the single-shell velocities stay finite. The signature of this blowup is represented by self-similar instantonic structures traveling coherently through the inertial range. These solutions might influence the energy transfer and the anomalous scaling properties empirically observed for the forced and viscous models. In this paper we present a study of the instantonic solutions for a set of four shell models of turbulence based on the exact decomposition of the Navier-Stokes equations in helical eigenstates. We find that depending on the helical structure of each model, instantons are chaotic or regular. Some instantonic solutions tend to recover mirror symmetry for scales small enough. Models that have anomalous scaling develop regular nonchaotic instantons. Conversely, models that have nonanomalous scaling in the stationary regime are those that have chaotic instantons. The direction of the energy carried by each single instanton tends to coincide with the direction of the energy cascade in the stationary regime. Finally, we find that whenever the small-scale stationary statistics is intermittent, the instanton is less steep than the dimensional Kolmogorov scaling, independently of whether or not it is chaotic. Our findings further support the idea that instantons might be crucial to describe some aspects of the multiscale anomalous statistics of shell models.

  8. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.

  9. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs. PMID:29200608
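
    The IID building block that the proposed method extends is Rubin's Bayesian bootstrap, in which posterior draws reweight the observed sample with flat Dirichlet weights. A minimal sketch of that building block only (the paper's contribution, undoing complex design features on top of it, is not shown here):

```python
import numpy as np

def bayesian_bootstrap_means(y, draws=2000, seed=0):
    """Posterior draws of the population mean under the (IID) Bayesian
    bootstrap: each draw is a Dirichlet(1,...,1)-weighted sample mean."""
    y = np.asarray(y, dtype=float)
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(len(y)), size=draws)  # one weight vector per draw
    return w @ y

posterior_means = bayesian_bootstrap_means([2.0, 3.5, 1.0, 4.2, 2.8])
```

    Each draw can equally well be used to generate a synthetic population by resampling with the drawn weights, which is the step the finite population Bayesian bootstrap generalizes.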

  10. Solution of a tridiagonal system of equations on the finite element machine

    NASA Technical Reports Server (NTRS)

    Bostic, S. W.

    1984-01-01

    Two parallel algorithms for the solution of tridiagonal systems of equations were implemented on the Finite Element Machine. The Accelerated Parallel Gauss method, an iterative method, and the Buneman algorithm, a direct method, are discussed and execution statistics are presented.
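
    Neither parallel method is reproduced here, but the sequential baseline against which such algorithms are usually measured is the classic Thomas algorithm (forward elimination followed by back substitution); a minimal sketch:

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system sequentially by the Thomas algorithm.
    sub: subdiagonal (n-1), diag: diagonal (n), sup: superdiagonal (n-1)."""
    n = len(diag)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):                        # forward elimination
        denom = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / denom
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson-like system with the familiar (-1, 2, -1) stencil.
n = 6
x = thomas(-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1), np.ones(n))
```

    The data dependence in both loops is what the parallel Gauss and Buneman (cyclic reduction) approaches restructure to exploit many processors.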

  11. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  12. Bagging Voronoi classifiers for clustering spatial functional data

    NASA Astrophysics Data System (ADS)

    Secchi, Piercesare; Vantini, Simone; Vitelli, Valeria

    2013-06-01

    We propose a bagging strategy based on random Voronoi tessellations for the exploration of geo-referenced functional data, suitable for different purposes (e.g., classification, regression, dimensional reduction, …). Urged by an application to environmental data contained in the Surface Solar Energy database, we focus in particular on the problem of clustering functional data indexed by the sites of a spatial finite lattice. We thus illustrate our strategy by implementing a specific algorithm whose rationale is to (i) replace the original data set with a reduced one, composed by local representatives of neighborhoods covering the entire investigated area; (ii) analyze the local representatives; (iii) repeat the previous analysis many times for different reduced data sets associated to randomly generated different sets of neighborhoods, thus obtaining many different weak formulations of the analysis; (iv) finally, bag together the weak analyses to obtain a conclusive strong analysis. Through an extensive simulation study, we show that this new procedure - which does not require an explicit model for spatial dependence - is statistically and computationally efficient.

  13. A Riemann-Hilbert formulation for the finite temperature Hubbard model

    NASA Astrophysics Data System (ADS)

    Cavaglià, Andrea; Cornagliotto, Martina; Mattelliano, Massimo; Tateo, Roberto

    2015-06-01

    Inspired by recent results in the context of AdS/CFT integrability, we reconsider the Thermodynamic Bethe Ansatz equations describing the 1D fermionic Hubbard model at finite temperature. We prove that the infinite set of TBA equations are equivalent to a simple nonlinear Riemann-Hilbert problem for a finite number of unknown functions. The latter can be transformed into a set of three coupled nonlinear integral equations defined over a finite support, which can be easily solved numerically. We discuss the emergence of an exact Bethe Ansatz and the link between the TBA approach and the results by Jüttner, Klümper and Suzuki based on the Quantum Transfer Matrix method. We also comment on the analytic continuation mechanism leading to excited states and on the mirror equations describing the finite-size Hubbard model with twisted boundary conditions.

  14. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires a very long running time and large computer memory in its implementation. With the objective of reducing memory and implementation time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.

  15. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jian-Zhou Zhu and Gregory W. Hammett

    2011-01-10

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  16. A mapping from the unitary to doubly stochastic matrices and symbols on a finite set

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2008-11-01

    We prove that the mapping from the unitary to doubly stochastic matrices that maps a unitary matrix (ukl) to the doubly stochastic matrix (|ukl|2) is a submersion at a generic unitary matrix. The proof uses the framework of operator symbols on a finite set.
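
    The mapping studied above is easy to exhibit numerically: take any unitary matrix (here, the Q factor of a random complex matrix, an illustrative choice) and square the moduli of its entries; the result is doubly stochastic:

```python
import numpy as np

# Build a unitary from the QR decomposition of a random complex matrix,
# then apply the map (u_kl) -> (|u_kl|^2) discussed in the abstract.
rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(Z)      # Q factor of a complex matrix is unitary
B = np.abs(U) ** 2          # entrywise squared moduli
```

    Unitarity of U forces every row and column of B to sum to 1, which is exactly the doubly stochastic property.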

  17. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we alert that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
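
    A quick numerical illustration in the spirit of the abstract (a toy example, not one of the paper's systems): recurrence times of the fully chaotic logistic map to a small interval, whose mean should approach 1/μ(I) by Kac's lemma:

```python
import numpy as np

def recurrence_times(lo=0.20, hi=0.21, n_iter=200_000, x0=0.1234):
    """Times between successive visits of the logistic map
    x -> 4x(1-x) to the interval (lo, hi)."""
    x, last, times = x0, None, []
    for t in range(n_iter):
        x = 4.0 * x * (1.0 - x)
        if lo < x < hi:
            if last is not None:
                times.append(t - last)
            last = t
    return np.array(times)

times = recurrence_times()
```

    For this interval the invariant density 1/(π√(x(1−x))) gives a mean return time of roughly 127 iterations; the histogram of `times` can then be compared against an exponential with that mean, short times excepted.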

  18. Statistics of dislocation pinning at localized obstacles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dutta, A.; Bhattacharya, M.; Barat, P.

    2014-10-14

    Pinning of dislocations at nanosized obstacles like precipitates, voids, and bubbles is a crucial mechanism in the context of phenomena like hardening and creep. The interaction between such an obstacle and a dislocation is often studied at a fundamental level by means of analytical tools, atomistic simulations, and finite element methods. Nevertheless, the information extracted from such studies cannot be utilized to its maximum extent on account of insufficient information about the underlying statistics of this process comprising a large number of dislocations and obstacles in a system. Here, we propose a new statistical approach, where the statistics of pinning of dislocations by idealized spherical obstacles is explored by taking into account the generalized size-distribution of the obstacles along with the dislocation density within a three-dimensional framework. Starting with a minimal set of material parameters, the framework employs the method of geometrical statistics with a few simple assumptions compatible with the real physical scenario. The application of this approach, in combination with the knowledge of fundamental dislocation-obstacle interactions, has successfully been demonstrated for dislocation pinning at nanovoids in neutron irradiated type 316-stainless steel in regard to the non-conservative motion of dislocations. An interesting phenomenon of transition from rare pinning to multiple pinning regimes with increasing irradiation temperature is revealed.

  19. Gyrokinetic statistical absolute equilibrium and turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu Jianzhou; Hammett, Gregory W.

    2010-12-15

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: a finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N+1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  20. Extreme value laws for fractal intensity functions in dynamical systems: Minkowski analysis

    NASA Astrophysics Data System (ADS)

    Mantica, Giorgio; Perotti, Luca

    2016-09-01

    Typically, in the dynamical theory of extremal events, the function that gauges the intensity of a phenomenon is assumed to be convex and maximal, or singular, at a single, or at most a finite collection of points in phase-space. In this paper we generalize this situation to fractal landscapes, i.e. intensity functions characterized by an uncountable set of singularities, located on a Cantor set. This reveals the dynamical rôle of classical quantities like the Minkowski dimension and content, whose definition we extend to account for singular continuous invariant measures. We also introduce the concept of extremely rare event, quantified by non-standard Minkowski constants and we study its consequences to extreme value statistics. Limit laws are derived from formal calculations and are verified by numerical experiments. Dedicated to the memory of Joseph Ford, on the twentieth anniversary of his departure.

  1. A histogram-free multicanonical Monte Carlo algorithm for the construction of analytical density of states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenbach, Markus; Li, Ying Wai

    We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.

  2. SPAR data set contents. [finite element structural analysis system

    NASA Technical Reports Server (NTRS)

    Cunningham, S. W.

    1981-01-01

    The contents of the stored data sets of the SPAR (space processing applications rocket) finite element structural analysis system are documented. The data generated by each of the system's processors are stored in a data file organized as a library. Each data set, containing a two-dimensional table or matrix, is identified by a four-word name listed in a table of contents. The creating SPAR processor, number of rows and columns, and definitions of each of the data items are listed for each data set. An example SPAR problem using these data sets is also presented.

  3. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of an anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of the proposed control scheme.

  4. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    NASA Astrophysics Data System (ADS)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.

  5. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems.

    PubMed

    Junge, Oliver; Kevrekidis, Ioannis G

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.

  6. An analytic treatment of gravitational microlensing for sources of finite size at large optical depths

    NASA Technical Reports Server (NTRS)

    Deguchi, Shuji; Watson, William D.

    1988-01-01

    Statistical methods are developed for gravitational lensing in order to obtain analytic expressions for the average surface brightness that include the effects of microlensing by stellar (or other compact) masses within the lensing galaxy. The primary advance here is in utilizing a Markoff technique to obtain expressions that are valid for sources of finite size when the surface density of mass in the lensing galaxy is large. The finite size of the source is probably the key consideration for the occurrence of microlensing by individual stars. For the intensity from a particular location, the parameter which governs the importance of microlensing is determined. Statistical methods are also formulated to assess the time variation of the surface brightness due to the random motion of the masses that cause the microlensing.

  7. Divergence of activity expansions: Is it actually a problem?

    NASA Astrophysics Data System (ADS)

    Ushcats, M. V.; Bulavin, L. A.; Sysoev, V. M.; Ushcats, S. Yu.

    2017-12-01

    For realistic interaction models, which include both molecular attraction and repulsion (e.g., Lennard-Jones, modified Lennard-Jones, Morse, and square-well potentials), the asymptotic behavior of the virial expansions for pressure and density in powers of activity has been studied taking power terms of high orders into account on the basis of the known finite-order irreducible integrals as well as the recent approximations of infinite irreducible series. Even in the divergence region (at subcritical temperatures), this behavior stays thermodynamically adequate (in contrast to the behavior of the virial equation of state with the same set of irreducible integrals) and corresponds to the beginning of the first-order phase transition: the divergence yields the jump (discontinuity) in density at constant pressure and chemical potential. In general, it provides a statistical explanation of the condensation phenomenon, but for liquid or solid states, the physically proper description (which can turn the infinite discontinuity into a finite jump of density) still needs further study of high-order cluster integrals and, especially, their real dependence on the system volume (density).

  8. Orthogonality preserving infinite dimensional quadratic stochastic operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akın, Hasan; Mukhamedov, Farrukh

In the present paper, we consider a notion of orthogonality preserving nonlinear operators. We introduce π-Volterra quadratic operators in finite and infinite dimensional settings. It is proved that any orthogonality preserving quadratic operator on a finite dimensional simplex is a π-Volterra quadratic operator. In the infinite dimensional setting, we describe all π-Volterra operators in terms of orthogonality preserving operators.

  9. Finite Set Control Transcription for Optimal Control Applications

    DTIC Science & Technology

    2009-05-01

Excerpts: list-of-figures entries (The Parameters of x; Categories of Optimization Algorithms) ... a Nonlinear Programming (NLP) algorithm, such as SNOPT (hereafter called the optimizer). The Finite Set Control Transcription (FSCT) method is essentially a ... artificial neural networks, genetic algorithms, or combinations thereof for analysis. Indeed, an actual biological neural network is an example of ...

  10. On the Use of a Mixed Gaussian/Finite-Element Basis Set for the Calculation of Rydberg States

    NASA Technical Reports Server (NTRS)

    Thuemmel, Helmar T.; Langhoff, Stephen (Technical Monitor)

    1996-01-01

Configuration-interaction studies are reported for the Rydberg states of the helium atom using mixed Gaussian/finite-element (GTO/FE) one-particle basis sets. Standard Gaussian valence basis sets are employed, like those used extensively in quantum chemistry calculations. It is shown that the term values for high-lying Rydberg states of the helium atom can be obtained accurately (within 1 cm^-1), even for a small GTO set, by augmenting the n-particle space with configurations in which orthonormalized interpolation polynomials are singly occupied.

  11. Finite Topological Spaces as a Pedagogical Tool

    ERIC Educational Resources Information Center

    Helmstutler, Randall D.; Higginbottom, Ryan S.

    2012-01-01

    We propose the use of finite topological spaces as examples in a point-set topology class especially suited to help students transition into abstract mathematics. We describe how carefully chosen examples involving finite spaces may be used to reinforce concepts, highlight pathologies, and develop students' non-Euclidean intuition. We end with a…
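One appeal of finite spaces is that the topology axioms become directly checkable by brute force. The sketch below (an illustrative aid, not from the article) verifies the open-set axioms for a family of subsets of a finite set, using the Sierpiński space as a standard small example.

```python
# For a finite set X, a family T of subsets is a topology iff it contains the
# empty set and X and is closed under pairwise unions and intersections.
def is_topology(X, T):
    T = set(map(frozenset, T))
    if frozenset() not in T or frozenset(X) not in T:
        return False
    for U in T:
        for V in T:
            if U | V not in T or U & V not in T:
                return False
    return True

X = {0, 1}
sierpinski = [set(), {0}, {0, 1}]   # {1} is NOT open: a non-Hausdorff example
```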

  12. Computer-Oriented Calculus Courses Using Finite Differences.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    The so-called discrete approach in calculus instruction involves introducing topics from the calculus of finite differences and finite sums, both for motivation and as useful tools for applications of the calculus. In particular, it provides an ideal setting in which to incorporate computers into calculus courses. This approach has been…
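The discrete approach lends itself to short computer exercises. As a hedged illustration (my own example, not from the document): for f(x) = x² on an evenly spaced grid, the first differences grow linearly and the second differences are constant, mirroring f'(x) = 2x and f''(x) = 2.

```python
# Finite difference tables for f(x) = x^2 on a unit-step grid.
def differences(values):
    return [b - a for a, b in zip(values, values[1:])]

xs = list(range(6))          # grid with step h = 1
ys = [x * x for x in xs]     # 0, 1, 4, 9, 16, 25
d1 = differences(ys)         # first differences: 1, 3, 5, 7, 9
d2 = differences(d1)         # second differences: constant 2
```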

  13. Second-order asymptotics for quantum hypothesis testing in settings beyond i.i.d. - quantum lattice systems and more

    NASA Astrophysics Data System (ADS)

    Datta, Nilanjana; Pautrat, Yan; Rouzé, Cambyse

    2016-06-01

    Quantum Stein's lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ⊗n or σ⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability αn of erroneously inferring the state to be σ, the probability βn of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.

  14. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    NASA Astrophysics Data System (ADS)

    Karamehmedović, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-06-01

    We consider the multi-frequency inverse source problem for the scalar Helmholtz equation in the plane. The goal is to reconstruct the source term in the equation from measurements of the solution on a surface outside the support of the source. We study the problem in a certain finite dimensional setting: from measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier–Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method is implemented numerically and our theoretical findings are supported by numerical experiments.

  15. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
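The ergodicity breaking described above is easy to reproduce numerically. The following rough sketch (an assumed setup, not the paper's code) simulates a two-site CTRW with Pareto-tailed waiting times, whose mean diverges for alpha < 1; across an ensemble of runs, the occupation fraction of one site scatters toward the extremes instead of concentrating at the ensemble mean.

```python
# Two-site CTRW with heavy-tailed (Pareto, x_min = 1) waiting times.
import random

def occupation_fraction(t_max=1e3, alpha=0.5, seed=None):
    rng = random.Random(seed)
    t, site, time_on_0 = 0.0, 0, 0.0
    while t < t_max:
        wait = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto waiting time
        wait = min(wait, t_max - t)                     # truncate at the horizon
        if site == 0:
            time_on_0 += wait
        t += wait
        site = 1 - site                                 # unbiased hop
    return time_on_0 / t_max

fractions = [occupation_fraction(seed=s) for s in range(200)]
```

A histogram of `fractions` exhibits the U shape related to the arcsine law discussed in the abstract.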

  16. Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane

    NASA Astrophysics Data System (ADS)

    He, W.; Song, H.; Su, Y.; Geng, L.; Ackerson, B. J.; Peng, H. B.; Tong, P.

    2016-05-01

    The Brownian motion of molecules at thermal equilibrium usually has a finite correlation time and will eventually be randomized after a long delay time, so that their displacement follows the Gaussian statistics. This is true even when the molecules have experienced a complex environment with a finite correlation time. Here, we report that the lateral motion of the acetylcholine receptors on live muscle cell membranes does not follow the Gaussian statistics for normal Brownian diffusion. From a careful analysis of a large volume of the protein trajectories obtained over a wide range of sampling rates and long durations, we find that the normalized histogram of the protein displacements shows an exponential tail, which is robust and universal for cells under different conditions. The experiment indicates that the observed non-Gaussian statistics and dynamic heterogeneity are inherently linked to the slow-active remodelling of the underlying cortical actin network.

  17. The Effects of Finite Sampling on State Assessment Sample Requirements. NAEP Validity Studies. Working Paper Series.

    ERIC Educational Resources Information Center

    Chromy, James R.

    This study addressed statistical techniques that might ameliorate some of the sampling problems currently facing states with small populations participating in State National Assessment of Educational Progress (NAEP) assessments. The study explored how the application of finite population correction factors to the between-school component of…

  18. Measures with locally finite support and spectrum.

    PubMed

    Meyer, Yves F

    2016-03-22

The goal of this paper is the construction of measures μ on R^n enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ^ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order.

  19. Measures with locally finite support and spectrum

    PubMed Central

    Meyer, Yves F.

    2016-01-01

    The goal of this paper is the construction of measures μ on Rn enjoying three conflicting but fortunately compatible properties: (i) μ is a sum of weighted Dirac masses on a locally finite set, (ii) the Fourier transform μ^ of μ is also a sum of weighted Dirac masses on a locally finite set, and (iii) μ is not a generalized Dirac comb. We give surprisingly simple examples of such measures. These unexpected patterns strongly differ from quasicrystals, they provide us with unusual Poisson's formulas, and they might give us an unconventional insight into aperiodic order. PMID:26929358

  20. Solidification of a binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    The time dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time dependent Stefan problem is solved numerically using an implicit finite differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite difference operator resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration for each time. Since the resulting differential equations set form a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.
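The method-of-lines step can be illustrated on a simpler fixed-domain problem (a hedged sketch with assumed parameters, omitting the moving Stefan boundary): the heat equation u_t = u_xx is discretized in space by central differences, leaving a coupled set of ODEs in time, exactly the structure the paper's solidification solver exploits.

```python
# Method of lines for u_t = u_xx on [0, 1] with u = 0 at both boundaries.
import numpy as np
from scipy.integrate import solve_ivp

nx = 41
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)                 # initial temperature profile

def rhs(t, u):
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # interior nodes only
    return du                           # boundary values stay at zero

sol = solve_ivp(rhs, (0.0, 0.2), u0, method="BDF")      # implicit, as in the paper
u_final = sol.y[:, -1]
```

The exact solution decays like exp(-pi^2 t), so the profile at t = 0.2 should have amplitude near 0.14.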

  1. SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS

    NASA Technical Reports Server (NTRS)

    Brownlow, J. D.

    1994-01-01

The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components.
Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60 bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
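The two analyses SPA performs can be sketched in a few lines of modern code (an illustrative reimplementation, not the original FORTRAN): time-domain moments of a sampled signal, then a periodogram to locate its dominant frequency.

```python
# Time-domain statistics and a one-sided power spectrum of a sampled signal.
import numpy as np

fs, n = 100.0, 1000                         # 100 Hz sampling, 10 s of data
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5.0 * t)             # 5 Hz sinusoid as the "signal"

mean = x.mean()                             # time-domain estimates
var = x.var()
rms = np.sqrt(np.mean(x ** 2))

power = np.abs(np.fft.rfft(x)) ** 2         # frequency-domain estimate
freqs = np.fft.rfftfreq(n, d=1 / fs)
dominant = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
```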

  2. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    ERIC Educational Resources Information Center

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  3. Shock and Vibration Symposium (59th) Held in Albuquerque, New Mexico on 18-20 October 1988. Volume 1

    DTIC Science & Technology

    1988-10-01

Partial contents: The Quest for Omega = sqrt(K/M) -- Notes on the development of vibration analysis; An overview of Statistical Energy Analysis; Its ... and in-plane vibration transmission in statistical energy analysis; Vibroacoustic response using the finite element method and statistical energy analysis; Helium ...

  4. Quantum mechanics over sets

    NASA Astrophysics Data System (ADS)

    Ellerman, David

    2014-03-01

In models of QM over finite fields (e.g., Schumacher's "modal quantum theory" MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets, resulting in quantum mechanics over sets, or QM/sets. This gives a full probability calculus (unlike MQT, which has only zero-one modalities) that leads to a fulsome theory of QM/sets, including "logical" models of the double-slit experiment, Bell's theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices, as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations, in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup, but with all the calculations over Z2, just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.
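The classical side of that comparison is easy to make concrete. A minimal sketch (with an assumed bit-tuple encoding, not the paper's formalism): computing the parity of the sum of all values of an n-ary Boolean function classically costs one evaluation per input, i.e., 2^n evaluations, where the QM/sets algorithm needs one.

```python
# Classical Parity SAT: parity of the sum of f over all 2^n Boolean inputs.
from itertools import product

def parity_sat(f, n):
    """Requires 2^n evaluations of f."""
    return sum(f(bits) for bits in product((0, 1), repeat=n)) % 2

def xor_all(bits):           # an example n-ary Boolean function
    acc = 0
    for b in bits:
        acc ^= b
    return acc

n = 3
evaluations = 2 ** n         # 8 classical evaluations for n = 3
```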

  5. The Effect of Scale Dependent Discretization on the Progressive Failure of Composite Materials Using Multiscale Analyses

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization both at the micro- and macroscales on the predicted ultimate tensile strength (UTS) and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed, each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of both the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and the computational efficiency of the simulations.

  6. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    PubMed

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate pulse peak sampling, which in turn produces errors in parameter estimation. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts: a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
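The calibration loop can be sketched with a toy signal model (all names and numbers below are assumptions for illustration, not the authors' hardware procedure): candidate sampling delays are scanned and the one maximizing the statistical power of the sampled points is kept, which aligns sampling with the pulse peak.

```python
# Dynamic delay calibration by statistical power feedback (toy model).
import numpy as np

def pulse_train(delay, n_pulses=50, width=5.0):
    """Gaussian pulses sampled at a fixed offset `delay` within each period."""
    peak_pos = 40                              # true peak offset (samples)
    amp = np.exp(-0.5 * ((delay - peak_pos) / width) ** 2)
    return amp * np.ones(n_pulses)             # one sample per pulse

def calibrate_delay(candidates):
    powers = [np.mean(pulse_train(d) ** 2) for d in candidates]
    return candidates[int(np.argmax(powers))]  # delay with max statistical power

best = calibrate_delay(list(range(100)))       # scan one pulse period
```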

  7. Maximum one-shot dissipated work from Rényi divergences

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  8. Maximum one-shot dissipated work from Rényi divergences.

    PubMed

    Yunger Halpern, Nicole; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  9. Multi-Agent Inference in Social Networks: A Finite Population Learning Approach.

    PubMed

    Fan, Jianqing; Tong, Xin; Zeng, Yao

When people in a society want to make inference about some parameter, each person may want to use data collected by other people. Information (data) exchange in social networks is usually costly, so to make reliable statistical decisions, people need to trade off the benefits and costs of information acquisition. Conflicts of interests and coordination problems will arise in the process. Classical statistics does not consider people's incentives and interactions in the data collection process. To address this imperfection, this work explores multi-agent Bayesian inference problems with a game theoretic social network model. Motivated by our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether with high probability, a large fraction of people in a given finite population network can make "good" inference. Serving as a foundation, this concept enables us to study the long-run trend of aggregate inference quality as population grows.

  10. A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
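The statistical averaging step at the heart of ESCM can be sketched in one dimension (a conceptual illustration with assumed names and geometry, not the authors' code): atomistic displacements are averaged over the local atomic volume assigned to each FE interface node, and those averages, rather than individual atoms, feed the FE boundary condition update.

```python
# Average atomic displacements over each FE node's local atomic volume (1-D).
import numpy as np

def nodal_averages(atom_pos, atom_disp, node_pos, half_width):
    avgs = np.zeros(len(node_pos))
    for i, xn in enumerate(node_pos):
        mask = np.abs(atom_pos - xn) <= half_width   # atoms in this node's volume
        avgs[i] = atom_disp[mask].mean() if mask.any() else 0.0
    return avgs

atoms = np.linspace(0.0, 10.0, 101)       # atom positions along the interface
disp = 0.01 * atoms                       # a linear displacement field
nodes = np.array([2.0, 5.0, 8.0])         # FE interface nodes
u_nodes = nodal_averages(atoms, disp, nodes, half_width=1.05)
```

For a linear field the averages reproduce the field at the node locations, so the FE side sees a smooth boundary condition free of atomic-scale noise.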

  11. Wave chaos in a randomly inhomogeneous waveguide: spectral analysis of the finite-range evolution operator.

    PubMed

    Makarov, D V; Kon'kov, L E; Uleysky, M Yu; Petrov, P S

    2013-01-01

The problem of sound propagation in a randomly inhomogeneous oceanic waveguide is considered. An underwater sound channel in the Sea of Japan is taken as an example. Our attention is concentrated on the domains of finite-range ray stability in phase space and their influence on wave dynamics. These domains can be found by means of the one-step Poincaré map. To study manifestations of finite-range ray stability, we introduce the finite-range evolution operator (FREO) describing the transformation of a wave field in the course of propagation along a finite segment of a waveguide. Carrying out a statistical analysis of the FREO spectrum, we estimate the contribution of regular domains and explore their evanescence with increasing length of the segment. We utilize several methods of spectral analysis: analysis of eigenfunctions by expanding them over modes of the unperturbed waveguide, approximation of level-spacing statistics by means of the Berry-Robnik distribution, and the procedure used by A. Relano and coworkers [Relano et al., Phys. Rev. Lett. 89, 244102 (2002); Relano, Phys. Rev. Lett. 100, 224101 (2008)]. Comparing the results obtained with different methods, we find that the method based on the statistical analysis of FREO eigenfunctions is the most favorable for estimating the contribution of regular domains. It allows one to find directly the waveguide modes whose refraction is regular despite the random inhomogeneity. For example, it is found that near-axial sound propagation in the Sea of Japan preserves stability even over distances of hundreds of kilometers due to the presence of a shearless torus in the classical phase space. Increasing the acoustic wavelength degrades scattering, resulting in recovery of eigenfunction localization near periodic orbits of the one-step Poincaré map.

  12. Hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis method for mid-frequency analysis of built-up systems with epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan

    2017-09-01

Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by the focal element and basic probability assignment (BPA) and dealt with using evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency response of interest, such as the ensemble average of the energy response and the cross-spectrum response, is calculated analytically by using the conventional hybrid FE/SEA method. Inspired by probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. In order to alleviate the computational burden of the extreme value analysis, the sub-interval perturbation technique based on the first-order Taylor series expansion is used in the ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
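The sub-interval perturbation step admits a very small sketch (a toy with an assumed response function, not the paper's FE/SEA response): over a focal element [c - r, c + r], a first-order Taylor expansion f(x) ≈ f(c) + f'(c)(x - c) gives cheap lower and upper response bounds.

```python
# First-order Taylor bounds of a response over a focal element [c - r, c + r].
def taylor_bounds(f, df, center, radius):
    spread = abs(df(center)) * radius
    return f(center) - spread, f(center) + spread

f = lambda x: x ** 2          # stand-in for the mid-frequency response
df = lambda x: 2 * x
lo, hi = taylor_bounds(f, df, center=1.0, radius=0.1)   # focal element [0.9, 1.1]
```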

  13. Efficiency and formalism of quantum games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.F.; Johnson, Neil F.

    We show that quantum games are more efficient than classical games and provide a saturated upper bound for this efficiency. We also demonstrate that the set of finite classical games is a strict subset of the set of finite quantum games. Our analysis is based on a rigorous formulation of quantum games, from which quantum versions of the minimax theorem and the Nash equilibrium theorem can be deduced.

  14. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.
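The likelihood-ratio detector over many small areas can be sketched under an assumed Poisson shot-noise model (an illustration, not the thesis's exact derivation): each area k contributes its photon count n_k to a log-likelihood ratio comparing signal-plus-background against background alone.

```python
# Poisson log-likelihood ratio over small detector areas:
# H1: rate s_k + b per area, versus H0: rate b per area.
import math

def log_likelihood_ratio(counts, signal, background):
    llr = 0.0
    for n, s in zip(counts, signal):
        llr += n * math.log((s + background) / background) - s
    return llr

signal = [4.0, 2.0, 1.0]      # expected signal photons per area (assumed spot profile)
background = 1.0
bright = [6, 3, 2]            # counts resembling signal plus background
dark = [1, 1, 0]              # counts resembling background only
decide = lambda counts, thr=0.0: log_likelihood_ratio(counts, signal, background) > thr
```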

  15. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity over a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
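Maximum likelihood fitting of a two-component normal mixture is usually done with the EM algorithm. The following is a generic sketch (not the paper's code), with synthetic data standing in for the stock-market and rubber-price series:

```python
# EM for a two-component normal mixture: alternate responsibilities (E-step)
# and weighted moment updates (M-step) to climb the likelihood.
import numpy as np

def em_two_normals(x, iters=200):
    mu = np.array([x.min(), x.max()])          # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities (common Gaussian constant cancels)
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted means, standard deviations, and mixing weights
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
mu, sigma, pi = em_two_normals(x)
```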

  16. Vibration Transmission through Rolling Element Bearings in Geared Rotor Systems

    DTIC Science & Technology

    1990-11-01

    Excerpts from the report: "4.8 Concluding Remarks … V. Statistical Energy Analysis." … and dynamic finite element techniques are used to develop the discrete vibration models, while the statistical energy analysis method is used for the broad… The report covers bearing system studies, geared rotor system studies, and statistical energy analysis; each chapter is self-sufficient since it is written in a …

  17. Statistics based sampling for controller and estimator design

    NASA Astrophysics Data System (ADS)

    Tenne, Dirk

    The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controllers that incorporate knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant, consisting of a combination of the mean and variance, over the domain of uncertainty. The proposed technique is illustrated on three benchmark problems.
    The first relates to the design of prefilters for a linear and nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistical robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed low tension tape drive.
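
    The unscented transformation mentioned above can be sketched as follows: a symmetric set of 2n+1 sigma points matching the mean and covariance is propagated through the nonlinearity, and the transformed statistics are recovered as weighted sample moments. This is a minimal sketch of the standard (second-order) transform, not of the dissertation's higher-order extensions:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate a mean and covariance through a nonlinearity f via sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # matrix square root of (n+kappa)*cov
    # 2n+1 sigma points: the mean plus symmetric excursions along the columns of S.
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in pts])
    my = w @ y                                  # transformed mean
    Py = (w[:, None] * (y - my)).T @ (y - my)   # transformed covariance
    return my, Py

# Sanity check: for a linear map the transform is exact.
m = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 2.0], [0.0, 1.0]])
my, Py = unscented_transform(m, P, lambda p: A @ p)
```

    For linear maps the sigma-point moments reproduce `A @ m` and `A @ P @ A.T` exactly; the value of the transform lies in its second-order accuracy for nonlinear `f`.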

  18. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
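
    The two-parameter Weibull model with the principle of independent action (PIA), as used by SCARE, can be summarized in a few lines. This is a minimal sketch of the textbook formulas for a uniaxial stress and for independent action over the tensile principal stresses; it does not reproduce SCARE's surface/volume integration:

```python
import math

def weibull_failure_probability(sigma, sigma0, m):
    """Two-parameter Weibull cumulative failure probability for a uniaxial
    tensile stress sigma: P_f = 1 - exp(-(sigma/sigma0)^m), with Weibull
    modulus m and characteristic strength sigma0."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

def pia_failure_probability(principal_stresses, sigma0, m):
    """Principle of independent action: the survival probabilities under each
    tensile principal stress multiply; compressive stresses are ignored."""
    survival = math.prod(
        math.exp(-((s / sigma0) ** m)) for s in principal_stresses if s > 0
    )
    return 1.0 - survival
```

    At sigma = sigma0 the uniaxial failure probability is 1 - e^(-1) ≈ 0.632, the defining property of the characteristic strength.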

  19. Hierarchy of Certain Types of DNA Splicing Systems

    NASA Astrophysics Data System (ADS)

    Yusof, Yuhani; Sarmin, Nor Haniza; Goode, T. Elizabeth; Mahmud, Mazri; Heng, Fong Wan

    A Head splicing system (H-system) consists of a finite set of strings (words) written over a finite alphabet, along with a finite set of rules that act on the strings by iterated cutting and pasting to create a splicing language. Any interpretation that is aligned with Tom Head's original idea is one in which the strings represent double-stranded deoxyribonucleic acid (dsDNA) and the rules represent the cutting and pasting action of restriction enzymes and ligase, respectively. A new way of writing the rule sets is adopted so as to make the biological interpretation transparent. This approach is used in a formal language-theoretic analysis of the hierarchy of certain classes of splicing systems, namely simple, semi-simple and semi-null splicing systems. The relations between such systems and their associated languages are given as theorems, corollaries and counterexamples.

  20. All the noncontextuality inequalities for arbitrary prepare-and-measure experiments with respect to any fixed set of operational equivalences

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.; Wolfe, Elie

    2018-06-01

    Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.

  1. Effect of higher order nonlinearity, directionality and finite water depth on wave statistics: Comparison of field data and numerical simulations

    NASA Astrophysics Data System (ADS)

    Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro

    2014-05-01

    This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh = 1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh < 1.36. In this regard, the aim of this research is to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equations of motion with the Higher Order Spectral Method (HOSM) and compared with data from short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e. the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a comparison between the numerical developments and real observations in field conditions.
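
    The statistical moments compared in this study, skewness and (excess) kurtosis of the surface elevation, can be computed directly from their definitions; a minimal sketch:

```python
import numpy as np

def skewness(x):
    """Third standardized central moment; zero for any symmetric distribution."""
    x = np.asarray(x, dtype=float)
    m2 = ((x - x.mean()) ** 2).mean()
    m3 = ((x - x.mean()) ** 3).mean()
    return m3 / m2 ** 1.5

def excess_kurtosis(x):
    """Fourth standardized central moment minus 3; zero for a Gaussian,
    positive for heavy-tailed (extreme-prone) surface elevation records."""
    x = np.asarray(x, dtype=float)
    m2 = ((x - x.mean()) ** 2).mean()
    m4 = ((x - x.mean()) ** 4).mean()
    return m4 / m2 ** 2 - 3.0
```

    Departures of these two quantities from zero are exactly the deviations from Gaussian statistics that the comparison between HOSM simulations and the Lake George data targets.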

  2. Statistics of Gaussian packets on metric and decorated graphs.

    PubMed

    Chernyshev, V L; Shafarevich, A I

    2014-01-28

    We study a semiclassical asymptotics of the Cauchy problem for a time-dependent Schrödinger equation on metric and decorated graphs with a localized initial function. A decorated graph is a topological space obtained from a graph via replacing vertices with smooth Riemannian manifolds. The main term of an asymptotic solution at an arbitrary finite time is a sum of Gaussian packets and generalized Gaussian packets (localized near a certain set of codimension one). We study the number of packets as time tends to infinity. We prove that under certain assumptions this number grows in time as a polynomial and packets fill the graph uniformly. We discuss a simple example of the opposite situation: in this case, a numerical experiment shows a subexponential growth.

  3. Probabilistic objective functions for sensor management

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald P. S.; Zajic, Tim R.

    2004-08-01

    This paper continues the investigation of a foundational and yet potentially practical basis for control-theoretic sensor management, using a comprehensive, intuitive, system-level Bayesian paradigm based on finite-set statistics (FISST). In this paper we report our most recent progress, focusing on multistep look-ahead -- i.e., allocation of sensor resources throughout an entire future time-window. We determine future sensor states in the time-window using a "probabilistically natural" sensor management objective function, the posterior expected number of targets (PENT). This objective function is constructed using a new "maxi-PIMS" optimization strategy that hedges against unknowable future observation-collections. PENT is used in conjunction with approximate multitarget filters: the probability hypothesis density (PHD) filter or the multi-hypothesis correlator (MHC) filter.

  4. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  5. Numerical Analysis of Solids at Failure

    DTIC Science & Technology

    2011-08-20

    Excerpts from the report: failure analyses include the formulation of invariant finite elements for thin Kirchhoff rods and preliminary studies of growth in … analysis of the failure of other structural/mechanical systems, including the finite element modeling of thin Kirchhoff rods and the constitutive … algorithm based on the connectivity graph of the underlying finite element mesh. In this setting, the discontinuities are defined by fronts propagating …

  6. Multiscale Modeling of Intergranular Fracture in Aluminum: Constitutive Relation For Interface Debonding

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E. H.

    2008-01-01

    Intergranular fracture is a dominant mode of failure in ultrafine grained materials. In the present study, the atomistic mechanisms of grain-boundary debonding during intergranular fracture in aluminum are modeled using a coupled molecular dynamics finite element simulation. Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship, characterizing the load transfer across the plane of a growing edge crack, is extracted from atomistic simulations and then recast in a form suitable for inclusion within a continuum finite element model. The cohesive-zone law derived by the presented technique is free of finite size effects and is statistically representative for describing the interfacial debonding of a grain boundary (GB) interface examined at atomic length scales. By incorporating the cohesive-zone law in cohesive-zone finite elements, the debonding of a GB interface can be simulated in a coupled continuum-atomistic model, in which a crack starts in the continuum environment, smoothly penetrates the continuum-atomistic interface, and continues its propagation in the atomistic environment. This study is a step towards relating atomistically derived decohesion laws to macroscopic predictions of fracture and constructing multiscale models for nanocrystalline and ultrafine grained materials.

  7. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  8. FADTTS: functional analysis of diffusion tensor tract statistics.

    PubMed

    Zhu, Hongtu; Kong, Linglong; Li, Runze; Styner, Martin; Gerig, Guido; Lin, Weili; Gilmore, John H

    2011-06-01

    The aim of this paper is to present a functional analysis of a diffusion tensor tract statistics (FADTTS) pipeline for delineating the association between multiple diffusion properties along major white matter fiber bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these white matter tract properties in various diffusion tensor imaging studies. The FADTTS integrates five statistical tools: (i) a multivariate varying coefficient model for allowing the varying coefficient functions in terms of arc length to characterize the varying associations between fiber bundle diffusion properties and a set of covariates, (ii) a weighted least squares estimation of the varying coefficient functions, (iii) a functional principal component analysis to delineate the structure of the variability in fiber bundle diffusion properties, (iv) a global test statistic to test hypotheses of interest, and (v) a simultaneous confidence band to quantify the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FADTTS. We apply FADTTS to investigate the development of white matter diffusivities along the splenium of the corpus callosum tract and the right internal capsule tract in a clinical study of neurodevelopment. FADTTS can be used to facilitate the understanding of normal brain development, the neural bases of neuropsychiatric disorders, and the joint effects of environmental and genetic factors on white matter fiber bundles. The advantages of FADTTS compared with the other existing approaches are that it is capable of modeling the structured inter-subject variability, testing the joint effects, and constructing simultaneous confidence bands. However, FADTTS is not crucial for estimation and reduces to the functional analysis method for the single measure.

  9. Sanov and central limit theorems for output statistics of quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horssen, Merlijn van, E-mail: merlijn.vanhorssen@nottingham.ac.uk; Guţă, Mădălin, E-mail: madalin.guta@nottingham.ac.uk

    2015-02-15

    In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov’s theorem for the multi-site empirical measure associated to finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.
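
    A classical analogue of the spectral approach described above: for a finite ergodic (classical) Markov chain, the scaled cumulant generating function of the empirical mean of an observable f is the logarithm of the spectral radius of a tilted transition matrix, in direct parallel to the extended transition operator of the paper. A minimal sketch with an illustrative two-state chain:

```python
import numpy as np

def scgf(P, f, k):
    """Scaled cumulant generating function Lambda(k) of the empirical mean of f
    along an ergodic Markov chain with transition matrix P: the log of the
    spectral radius of the tilted matrix T_ij = P_ij * exp(k * f_j)."""
    T = P * np.exp(k * np.asarray(f))[None, :]
    return np.log(np.max(np.abs(np.linalg.eigvals(T))))

# Illustrative two-state chain; f is the indicator of state 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = [0.0, 1.0]
```

    Lambda(0) = 0 because a stochastic matrix has spectral radius 1, and Lambda'(0) recovers the stationary mean of f (here 1/3, since the stationary distribution of this chain is (2/3, 1/3)); the rate function is the Legendre transform of Lambda.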

  10. Statistical learning and the challenge of syntax: Beyond finite state automata

    NASA Astrophysics Data System (ADS)

    Elman, Jeff

    2003-10-01

    Over the past decade, it has been clear that even very young infants are sensitive to the statistical structure of language input presented to them, and use the distributional regularities to induce simple grammars. But can such statistically-driven learning also explain the acquisition of more complex grammar, particularly when the grammar includes recursion? Recent claims (e.g., Hauser, Chomsky, and Fitch, 2002) have suggested that the answer is no, and that at least recursion must be an innate capacity of the human language acquisition device. In this talk evidence will be presented that indicates that, in fact, statistically-driven learning (embodied in recurrent neural networks) can indeed enable the learning of complex grammatical patterns, including those that involve recursion. When the results are generalized to idealized machines, it is found that the networks are at least equivalent to Push Down Automata. Perhaps more interestingly, with limited and finite resources (such as are presumed to exist in the human brain) these systems demonstrate patterns of performance that resemble those in humans.

  11. Finite-size scaling of survival probability in branching processes

    NASA Astrophysics Data System (ADS)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y / (e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
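
    The finite-size behavior of the survival probability can be checked numerically by iterating the extinction-probability recursion q_{t+1} = f(q_t), where f is the offspring probability generating function. A minimal sketch for a critical Galton-Watson process with Poisson(1) offspring, for which Kolmogorov's classical asymptotic gives T * P_surv(T) -> 2/sigma^2 = 2:

```python
import math

def survival_probability(generations, offspring_mean=1.0):
    """Survival probability of a Galton-Watson process with Poisson offspring
    after a given number of generations, via the extinction recursion
    q_{t+1} = f(q_t) with pgf f(s) = exp(mu * (s - 1)), starting from q_0 = 0."""
    q = 0.0
    for _ in range(generations):
        q = math.exp(offspring_mean * (q - 1.0))
    return 1.0 - q
```

    At the critical point the survival probability decays like 2/T, the T-dependence that the paper's scaling function G describes away from criticality.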

  12. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources.

    PubMed

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-10-16

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without finite-key analyses are not suitable for practice. The finite-key securities of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), are analyzed in our paper. For each PDS QKD protocol, we establish an optimization program and obtain a lower bound on the finite-key rate. Under some reasonable values of the quantum setup parameters, the lower bounds of the finite-key rates are simulated. The simulation results show that the effects of the different fluctuations on the key rates differ with transmission distance. Moreover, the PDS QKD protocol with an unstable PDC source can tolerate larger intensity fluctuations and larger statistical fluctuations.

  13. The Shock and Vibration Digest. Volume 16, Number 1

    DTIC Science & Technology

    1984-01-01

    Excerpts from the digest: investigation of the measurement of frequency-band average loss factors of structural components for use in the statistical energy analysis method of … Key words: finite element technique, statistical energy analysis, experimental techniques, framed structures, computer programs. In order to further understand the practical application of statistical energy analysis, a two-section plate-like frame structure is …

  14. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation.
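
    The 'joint' strategies discussed above combine two tests either as "positive if either test is positive" (raising sensitivity) or "positive only if both are positive" (raising specificity). Assuming conditional independence of the tests given disease status, the combined accuracies follow from elementary probability; a minimal sketch (the numeric values in the test are illustrative):

```python
def joint_strategy(se1, sp1, se2, sp2):
    """Sensitivity/specificity of two joint interpretations of a pair of
    conditionally independent diagnostic tests.

    'either': positive if either test is positive -> higher sensitivity,
              lower specificity.
    'both':   positive only if both tests are positive -> higher specificity,
              lower sensitivity.
    Returns ((se_either, sp_either), (se_both, sp_both))."""
    se_either = 1 - (1 - se1) * (1 - se2)   # miss only if both tests miss
    sp_either = sp1 * sp2                   # correct negative needs both negative
    se_both = se1 * se2                     # detect only if both tests detect
    sp_both = 1 - (1 - sp1) * (1 - sp2)     # false positive needs both to err
    return (se_either, sp_either), (se_both, sp_both)
```

    Sequential versions give the same operating characteristics while running the second test only when it can change the decision, which is the cost motivation mentioned in the abstract.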

  15. Second-order asymptotics for quantum hypothesis testing in settings beyond i.i.d.—quantum lattice systems and more

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Datta, Nilanjana; Rouzé, Cambyse; Pautrat, Yan

    2016-06-15

    Quantum Stein’s lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ^⊗n or σ^⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability α_n of erroneously inferring the state to be σ, the probability β_n of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.

  16. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method are demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
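
    The eigenvalue orthogonalization used above to decorrelate the random variables is an eigendecomposition of their covariance matrix; a minimal sketch (the covariance values are illustrative). The reduced set of uncorrelated variables mentioned in the abstract corresponds to keeping only the eigenvectors with the largest eigenvalues:

```python
import numpy as np

def orthogonalize(cov):
    """Eigenvalue orthogonalization: return eigenvalues and an orthogonal
    matrix Phi such that the transformed variables y = Phi.T @ x are
    uncorrelated, with covariance diag(eigvals)."""
    eigvals, Phi = np.linalg.eigh(cov)  # symmetric eigendecomposition
    return eigvals, Phi

# Illustrative covariance of three correlated random variables.
cov = np.array([[4.0, 1.2, 0.5],
                [1.2, 3.0, 0.8],
                [0.5, 0.8, 2.0]])
eigvals, Phi = orthogonalize(cov)
# Covariance of y = Phi.T @ x is Phi.T @ cov @ Phi, which is diagonal.
D = Phi.T @ cov @ Phi
```

    Truncating to the dominant eigenvalues yields the reduced set of uncorrelated variables on which the second-moment analysis operates.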

  17. Two-Dimensional Model for Reactive-Sorption Columns of Cylindrical Geometry: Analytical Solutions and Moment Analysis.

    PubMed

    Khan, Farman U; Qamar, Shamsul

    2017-05-01

    A set of analytical solutions are presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. Linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. A good agreement was observed in the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment.

  18. Singularity computations. [finite element methods for elastoplastic flow

    NASA Technical Reports Server (NTRS)

    Swedlow, J. L.

    1978-01-01

    Direct descriptions of the structure of a singularity would describe the radial and angular distributions of the field quantities as explicitly as practicable along with some measure of the intensity of the singularity. This paper discusses such an approach based on recent development of numerical methods for elastoplastic flow. Attention is restricted to problems where one variable or set of variables is finite at the origin of the singularity but a second set is not.

  19. Rectifiability of Line Defects in Liquid Crystals with Variable Degree of Orientation

    NASA Astrophysics Data System (ADS)

    Alper, Onur

    2018-04-01

    In [2], Hardt, Lin and the author proved that the defect set of minimizers of the modified Ericksen energy for nematic liquid crystals consists locally of a finite union of isolated points and Hölder continuous curves with finitely many crossings. In this article, we show that each Hölder continuous curve in the defect set is of finite length. Hence, locally, the defect set is rectifiable. For the most part, the proof closely follows the work of De Lellis et al. (Rectifiability and upper Minkowski bounds for singularities of harmonic Q-valued maps, arXiv:1612.01813, 2016) on harmonic Q-valued maps. The blow-up analysis in Alper et al. (Calc Var Partial Differ Equ 56(5):128, 2017) allows us to simplify the covering arguments in [11] and locally estimate the length of line defects in a geometric fashion.

  20. Double asymptotics for the chi-square statistic.

    PubMed

    Rempała, Grzegorz A; Wesołowski, Jacek

    2016-12-01

    Consider the distributional limit of the Pearson chi-square statistic when the number of classes m_n increases with the sample size n and [Formula: see text]. Under mild moment conditions, the limit is Gaussian for λ = ∞, Poisson for finite λ > 0, and degenerate for λ = 0.
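
    For reference, the Pearson chi-square statistic whose double asymptotics are studied above is simply the sum over classes of (O_i - E_i)^2 / E_i; a minimal sketch:

```python
def pearson_chi_square(observed, expected):
    """Pearson's chi-square statistic: sum over classes of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

    The regime studied in the paper is the one where the number of classes (the length of these vectors) grows with the sample size, rather than staying fixed as in the classical chi-square limit.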

  1. Bengali-English Relevant Cross Lingual Information Access Using Finite Automata

    NASA Astrophysics Data System (ADS)

    Banerjee, Avishek; Bhattacharyya, Swapan; Hazra, Simanta; Mondal, Shatabdi

    2010-10-01

CLIR techniques search unrestricted texts, typically extracting terms and relationships from bilingual electronic dictionaries or bilingual text collections and using them to translate query and/or document representations into a compatible set of representations with a common feature set. In this paper, we focus on a dictionary-based approach that combines a bilingual data dictionary with statistics-based methods to avoid the problem of ambiguity; the development of the human-computer interface aspects of NLP (Natural Language Processing) is also an objective of this paper. Intelligent web search in a regional language such as Bengali depends on two major aspects: CLIA (Cross Language Information Access) and NLP. In our previous work with IIT Kharagpur we developed content-based CLIA, in which content-based searching is trained on a Bengali corpus with the help of a Bengali data dictionary. Here we introduce intelligent search, which recognizes the sense of a sentence and offers a more realistic approach to human-computer interaction.
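The dictionary-based translation step can be sketched as follows; the miniature transliterated Bengali-English dictionary below is invented purely for illustration and is not from the paper:

```python
# Hypothetical miniature Bengali->English dictionary (transliterated keys,
# entries invented for illustration only).
BN_EN = {
    "boi": ["book"],
    "bidyaloy": ["school"],
}

def translate_query(terms, dictionary):
    """Dictionary-based CLIR query translation: expand each source term to
    its dictionary entries; out-of-vocabulary terms pass through unchanged.
    A statistics-based disambiguation step would then rank the candidates."""
    translated = []
    for term in terms:
        translated.extend(dictionary.get(term, [term]))
    return translated
```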

  2. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

Recently, statisticians have emphasized fitting finite mixture models by the Bayesian method. A finite mixture model represents a statistical distribution as a mixture of component distributions, and the Bayesian method is a statistical method used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results; it also shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to an invalid result. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber prices and stock market prices for all selected countries.
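As a rough illustration of the component-selection step (a maximum-likelihood EM sketch, not the authors' Bayesian fitting procedure), the code below fits 1-D Gaussian mixtures and compares BIC values; initialization and iteration counts are arbitrary choices:

```python
import math

def em_gmm_1d(data, k, iters=100):
    """EM for a 1-D k-component Gaussian mixture; returns (loglik, means)."""
    n = len(data)
    srt = sorted(data)
    mus = [srt[int((j + 0.5) * n / k)] for j in range(k)]  # quantile init
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    loglik = 0.0
    for _ in range(iters):
        resp, loglik = [], 0.0
        for x in data:  # E-step: responsibilities
            dens = [w / (s * math.sqrt(2 * math.pi)) * math.exp(-(x - mu) ** 2 / (2 * s * s))
                    for w, mu, s in zip(weights, mus, sigmas)]
            tot = sum(dens)
            loglik += math.log(tot)
            resp.append([d / tot for d in dens])
        for j in range(k):  # M-step: reweight, recentre, rescale
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
    return loglik, mus

def bic(loglik, k, n):
    """BIC; a 1-D k-component mixture has 3k - 1 free parameters."""
    return (3 * k - 1) * math.log(n) - 2 * loglik
```

The k minimizing BIC is then used for the final fit.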

  3. Multi-Agent Inference in Social Networks: A Finite Population Learning Approach

    PubMed Central

    Tong, Xin; Zeng, Yao

    2016-01-01

    When people in a society want to make inference about some parameter, each person may want to use data collected by other people. Information (data) exchange in social networks is usually costly, so to make reliable statistical decisions, people need to trade off the benefits and costs of information acquisition. Conflicts of interests and coordination problems will arise in the process. Classical statistics does not consider people’s incentives and interactions in the data collection process. To address this imperfection, this work explores multi-agent Bayesian inference problems with a game theoretic social network model. Motivated by our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether with high probability, a large fraction of people in a given finite population network can make “good” inference. Serving as a foundation, this concept enables us to study the long run trend of aggregate inference quality as population grows. PMID:27076691

  4. Symmetry and Degeneracy in Quantum Mechanics. Self-Duality in Finite Spin Systems

    ERIC Educational Resources Information Center

    Osacar, C.; Pacheco, A. F.

    2009-01-01

    The symmetry of self-duality (Savit 1980 "Rev. Mod. Phys. 52" 453) of some models of statistical mechanics and quantum field theory is discussed for finite spin blocks of the Ising chain in a transverse magnetic field. The existence of this symmetry in a specific type of these blocks, and not in others, is manifest by the degeneracy of their…

  5. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

The Monte Carlo simulation method, in conjunction with the finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.
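Dirlik's empirical formula itself is lengthy, but the spectral moments that drive the method of moments of the PSD can be sketched directly (trapezoidal integration; the flat test spectrum used below is an assumption for illustration):

```python
def spectral_moment(freqs, psd, order):
    """i-th spectral moment m_i = integral of f**i * G(f) df (trapezoidal rule)."""
    m = 0.0
    for (f0, g0), (f1, g1) in zip(zip(freqs, psd), zip(freqs[1:], psd[1:])):
        m += 0.5 * (f0 ** order * g0 + f1 ** order * g1) * (f1 - f0)
    return m

def rate_statistics(freqs, psd):
    """Expected zero up-crossing rate, peak rate, and irregularity factor
    from the 0th, 2nd and 4th spectral moments of a stationary Gaussian response."""
    m0 = spectral_moment(freqs, psd, 0)
    m2 = spectral_moment(freqs, psd, 2)
    m4 = spectral_moment(freqs, psd, 4)
    nu0 = (m2 / m0) ** 0.5  # zero up-crossing rate
    nup = (m4 / m2) ** 0.5  # peak rate
    return nu0, nup, nu0 / nup
```

These moments are the inputs Dirlik's rainflow-range approximation consumes.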

  6. Probabilistic finite elements for fatigue and fracture analysis

    NASA Astrophysics Data System (ADS)

    Belytschko, Ted; Liu, Wing Kam

Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  7. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1992-01-01

Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and on its application to linear and nonlinear structural mechanics problems and fracture mechanics problems. A computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of curvilinear fatigue crack growth. The existing PFEMs have been applied to solve two types of problems: (1) determination of the response uncertainty in terms of means, variances and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.

  8. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.

  9. Multisource passive acoustic tracking: an application of random finite set data fusion

    NASA Astrophysics Data System (ADS)

    Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung

    2010-04-01

Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful track estimates. Random Finite Set theory provides the mathematical probabilistic model suitable for analysis and for optimal estimation algorithm synthesis. The proposed algorithm has been verified using a simulation and a controlled test experiment.
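The random-finite-set machinery is beyond a short snippet, but the basic step of combining DOAs from several nodes into a location estimate can be illustrated by least-squares intersection of bearing lines (node positions and angles below are made up, and this is not the paper's estimator):

```python
import math

def triangulate(nodes, bearings):
    """Least-squares intersection of bearing lines.
    nodes: [(x, y)] sensor positions; bearings: angles in radians from the
    +x axis. Returns the point minimizing the sum of squared perpendicular
    distances to all bearing lines (2x2 normal equations)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), th in zip(nodes, bearings):
        nx, ny = -math.sin(th), math.cos(th)  # unit normal to the bearing line
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        d = nx * px + ny * py
        b1 += nx * d
        b2 += ny * d
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With more than two nodes the same normal equations average out bearing noise.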

  10. Experimental Quiet Sprocket Design and Noise Reduction in Tracked Vehicles

    DTIC Science & Technology

    1981-04-01

Track and Suspension Noise Reduction; Statistical Energy Analysis; Mechanical Impedance Measurement; Finite Element Modal Analysis; Noise Sources ... shape and idler attachment are different. These differences were investigated using the concepts of statistical energy analysis for hull generated noise ... calculated from statistical energy analysis. Such an approach will be valid within reasonable limits for frequencies of about 200 Hz and

  11. Compendium of Methods for Applying Measured Data to Vibration and Acoustic Problems

    DTIC Science & Technology

    1985-10-01

statistical energy analysis, finite element models, transfer function ... Procedures for the Modal Analysis Method ... 8-22; 8.4 Summary of the Procedures for the Statistical Energy Analysis Method ... statistical energy analysis. It is helpful

  12. Effects of Verb Familiarity on Finiteness Marking in Children With Specific Language Impairment

    PubMed Central

    Rice, Mabel L.; Bontempo, Daniel E.

    2015-01-01

    Purpose Children with specific language impairment (SLI) have known deficits in the verb lexicon and finiteness marking. This study investigated a potential relationship between these 2 variables in children with SLI and 2 control groups considering predictions from 2 different theoretical perspectives, morphosyntactic versus morphophonological. Method Children with SLI, age-equivalent, and language-equivalent (LE) control children (n = 59) completed an experimental sentence imitation task that generated estimates of children's finiteness accuracy under 2 levels of verb familiarity—familiar real verbs versus unfamiliar real verbs—in clausal sites marked for finiteness. Imitations were coded and analyzed for overall accuracy as well as finiteness marking and verb root imitation accuracy. Results Statistical comparisons revealed that children with SLI did not differ from LE children and were less accurate than age-equivalent children on all dependent variables: overall imitation, finiteness marking imitation, and verb root imitation accuracy. A significant Group × Condition interaction for finiteness marking revealed lower levels of accuracy on unfamiliar verbs for the SLI and LE groups only. Conclusions Findings indicate a relationship between verb familiarity and finiteness marking in children with SLI and younger controls and help clarify the roles of morphosyntax, verb lexicon, and morphophonology. PMID:25611349

  13. Repeated Random Sampling in Year 5

    ERIC Educational Resources Information Center

    Watson, Jane M.; English, Lyn D.

    2016-01-01

    As an extension to an activity introducing Year 5 students to the practice of statistics, the software "TinkerPlots" made it possible to collect repeated random samples from a finite population to informally explore students' capacity to begin reasoning with a distribution of sample statistics. This article provides background for the…

  14. On the validation of seismic imaging methods: Finite frequency or ray theory?

    DOE PAGES

    Maceira, Monica; Larmat, Carene; Porritt, Robert W.; ...

    2015-01-23

We investigate the merits of the more recently developed finite-frequency approach to tomography against the more traditional and approximate ray-theoretical approach for state-of-the-art seismic models developed for western North America. To this end, we employ the spectral element method to assess the agreement between observations on real data and measurements made on synthetic seismograms predicted by the models under consideration. We check for phase delay agreement as well as waveform cross-correlation values. Based on statistical analyses of S-wave phase delay measurements, finite frequency shows an improvement over ray theory. Random sampling using cross-correlation values identifies regions where synthetic seismograms computed with ray theory and finite-frequency models differ the most. Our study suggests that finite-frequency approaches to seismic imaging exhibit measurable improvement for pronounced low-velocity anomalies such as mantle plumes.

  15. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources

    PubMed Central

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-01-01

In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without finite-key analyses are not suitable for practice. The finite-key security of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), is analyzed in our paper. For each PDS QKD protocol, we establish an optimization program and obtain the lower bound of the finite-key rate. Under reasonable values of the quantum setup parameters, the lower bounds of the finite-key rates are simulated. The simulation results show that the effects of different fluctuations on the key rates differ with transmission distance. Moreover, the PDS QKD protocol with an unstable PDC source can resist larger intensity fluctuations and larger statistical fluctuations. PMID:26471947

  16. From Large Deviations to Semidistances of Transport and Mixing: Coherence Analysis for Finite Lagrangian Data

    NASA Astrophysics Data System (ADS)

    Koltai, Péter; Renger, D. R. Michiel

    2018-06-01

    One way to analyze complicated non-autonomous flows is through trying to understand their transport behavior. In a quantitative, set-oriented approach to transport and mixing, finite time coherent sets play an important role. These are time-parametrized families of sets with unlikely transport to and from their surroundings under small or vanishing random perturbations of the dynamics. Here we propose, as a measure of transport and mixing for purely advective (i.e., deterministic) flows, (semi)distances that arise under vanishing perturbations in the sense of large deviations. Analogously, for given finite Lagrangian trajectory data we derive a discrete-time-and-space semidistance that comes from the "best" approximation of the randomly perturbed process conditioned on this limited information of the deterministic flow. It can be computed as shortest path in a graph with time-dependent weights. Furthermore, we argue that coherent sets are regions of maximal farness in terms of transport and mixing, and hence they occur as extremal regions on a spanning structure of the state space under this semidistance—in fact, under any distance measure arising from the physical notion of transport. Based on this notion, we develop a tool to analyze the state space (or the finite trajectory data at hand) and identify coherent regions. We validate our approach on idealized prototypical examples and well-studied standard cases.
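The shortest-path computation on a graph with time-dependent weights mentioned above can be sketched with Dijkstra's algorithm on a time-expanded graph; the discrete-time layering and the toy weight function below are assumptions, not the paper's construction:

```python
import heapq

def shortest_path_time_dependent(n_nodes, weight, src, dst, n_steps):
    """Dijkstra on the time-expanded graph: traversing edge (u, v) departing
    at step t costs weight(u, v, t) and arrives at step t + 1.
    `weight` should return float('inf') for absent edges."""
    INF = float("inf")
    dist = {(src, 0): 0.0}
    pq = [(0.0, src, 0)]
    best = INF
    while pq:
        d, u, t = heapq.heappop(pq)
        if d > dist.get((u, t), INF):
            continue  # stale heap entry
        if u == dst:
            best = min(best, d)
        if t == n_steps:
            continue  # time horizon reached
        for v in range(n_nodes):
            w = weight(u, v, t)
            if w == INF:
                continue
            nd = d + w
            if nd < dist.get((v, t + 1), INF):
                dist[(v, t + 1)] = nd
                heapq.heappush(pq, (nd, v, t + 1))
    return best
```

The semidistance of the paper would then come from such shortest paths between trajectory states.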

  17. Stabilised finite-element methods for solving the level set equation with mass conservation

    NASA Astrophysics Data System (ADS)

    Kabirou Touré, Mamadou; Fahsi, Adil; Soulaïmani, Azzeddine

    2016-01-01

    Finite-element methods are studied for solving moving interface flow problems using the level set approach and a stabilised variational formulation proposed in Touré and Soulaïmani (2012; Touré and Soulaïmani To appear in 2016), coupled with a level set correction method. The level set correction is intended to enhance the mass conservation satisfaction property. The stabilised variational formulation (Touré and Soulaïmani 2012; Touré and Soulaïmani, To appear in 2016) constrains the level set function to remain close to the signed distance function, while the mass conservation is a correction step which enforces the mass balance. The eXtended finite-element method (XFEM) is used to take into account the discontinuities of the properties within an element. XFEM is applied to solve the Navier-Stokes equations for two-phase flows. The numerical methods are numerically evaluated on several test cases such as time-reversed vortex flow, a rigid-body rotation of Zalesak's disc, sloshing flow in a tank, a dam-break over a bed, and a rising bubble subjected to buoyancy. The numerical results show the importance of satisfying global mass conservation to accurately capture the interface position.
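One common form of a global mass-balance correction is to shift the level-set field by a constant chosen so that the enclosed area matches its target; the bisection sketch below on a uniform grid is an assumption for illustration, and the paper's actual correction step may differ:

```python
import math

def area_below_zero(phi, h):
    """Approximate area of the region {phi < 0} on a uniform grid, cell size h."""
    return sum(h * h for row in phi for v in row if v < 0)

def mass_correct(phi, h, target_area, iters=60):
    """Shift phi by a constant c (found by bisection) so that the area of
    {phi + c < 0} matches target_area: a simple global mass correction."""
    lo, hi = -1.0, 1.0
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        a = area_below_zero([[v + c for v in row] for row in phi], h)
        if a > target_area:
            lo = c  # enclosed region too big: raise phi
        else:
            hi = c  # too small: lower phi
    c = 0.5 * (lo + hi)
    return [[v + c for v in row] for row in phi]
```

In practice such a correction is applied after each advection/reinitialization cycle.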

  18. Computer Generated Pictorial Stores Management Displays for Fighter Aircraft.

    DTIC Science & Technology

    1983-05-01

questionnaire rating-scale data. KRISHNAIAH FINITE INTERSECTION TESTS (FITs) - a set of tests conducted after significant MANOVA results are found to ... the Social Sciences (SPSS) (Reference 2). To further examine significant performance differences, the Krishnaiah Finite Intersection Test (FIT), a ... New York: McGraw-Hill Book Company, 1975. 3. C. M. Cox, P. R. Krishnaiah, J. C. Lee, J. M. Reising, and F. J. Schuurman, A study on Finite Intersection

  19. Synchronization of Finite State Shared Resources

    DTIC Science & Technology

    1976-03-01

AFOSR-TR- ... SYNCHRONIZATION OF FINITE STATE SHARED RESOURCES, Edward A. Schneider, Department of Computer ... SIGNIFICANT NUMBER OF PAGES WHICH DO NOT REPRODUCE LEGIBLY. ABSTRACT: The problem of synchronizing a set of operations defined on a shared resource

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vourdas, A.

The finite set of subsystems of a finite quantum system with variables in Z(n) is studied as a Heyting algebra. The physical meaning of the logical connectives is discussed. It is shown that disjunction of subsystems is a more general concept than superposition. Consequently, the quantum probabilities related to commuting projectors in the subsystems are incompatible with associativity of the join in the Heyting algebra, unless the variables belong to the same chain. This leads to contextuality, which in the present formalism has as contexts the chains in the Heyting algebra. Logical Bell inequalities, which contain "Heyting factors," are discussed. The formalism is also applied to the infinite set of all finite quantum systems, which is appropriately enlarged in order to become a complete Heyting algebra.
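As a much-simplified illustration (not the paper's construction): the divisor lattice of n, with meet = gcd and join = lcm, is a finite distributive lattice and hence a Heyting algebra, and its relative pseudo-complement can be computed by brute force:

```python
from math import gcd

def divisors(n):
    """All divisors of n, the elements of the divisor lattice."""
    return [d for d in range(1, n + 1) if n % d == 0]

def heyting_implies(a, b, n):
    """Relative pseudo-complement a -> b in the divisor lattice of n
    (meet = gcd, join = lcm): the largest divisor d with gcd(a, d) | b."""
    return max(d for d in divisors(n) if b % gcd(a, d) == 0)
```

For example, a -> a always returns the top element n, as it must in any Heyting algebra.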

  1. ON THE DYNAMICAL DERIVATION OF EQUILIBRIUM STATISTICAL MECHANICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prigogine, I.; Balescu, R.; Henin, F.

    1960-12-01

    Work on nonequilibrium statistical mechanics, which allows an extension of the kinetic proof to all results of equilibrium statistical mechanics involving a finite number of degrees of freedom, is summarized. As an introduction to the general N-body problem, the scattering theory in classical mechanics is considered. The general N-body problem is considered for the case of classical mechanics, quantum mechanics with Boltzmann statistics, and quantum mechanics including quantum statistics. Six basic diagrams, which describe the elementary processes of the dynamics of correlations, were obtained. (M.C.G.)

  2. Nonlinear damping model for flexible structures. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Zang, Weijian

    1990-01-01

The study of the nonlinear damping problem of flexible structures is addressed. Both passive and active damping, and both finite dimensional and infinite dimensional models, are studied. In the first part, the spectral density and the correlation function of a single-DOF nonlinear damping model are investigated. A formula for the spectral density is established with O(Γ₂) accuracy, based upon the Fokker-Planck technique and perturbation. The spectral density depends upon certain first order statistics which could be obtained if the stationary density is known. A method is proposed to find the approximate stationary density explicitly. In the second part, the spectral density of a multi-DOF nonlinear damping model is investigated. In the third part, an energy-type nonlinear damping model in an infinite dimensional setting is studied.

  3. Conservative properties of finite difference schemes for incompressible flow

    NASA Technical Reports Server (NTRS)

    Morinishi, Youhei

    1995-01-01

    The purpose of this research is to construct accurate finite difference schemes for incompressible unsteady flow simulations such as LES (large-eddy simulation) or DNS (direct numerical simulation). In this report, conservation properties of the continuity, momentum, and kinetic energy equations for incompressible flow are specified as analytical requirements for a proper set of discretized equations. Existing finite difference schemes in staggered grid systems are checked for satisfaction of the requirements. Proper higher order accurate finite difference schemes in a staggered grid system are then proposed. Plane channel flow is simulated using the proposed fourth order accurate finite difference scheme and the results compared with those of the second order accurate Harlow and Welch algorithm.
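The kinetic-energy requirement can be checked algebraically for the second-order central difference on a periodic grid: the operator is antisymmetric, so the discrete convective term produces no net energy. A small sketch of that check:

```python
def central_diff_periodic(u, h):
    """Second-order central difference du/dx on a periodic uniform grid."""
    n = len(u)
    return [(u[(i + 1) % n] - u[(i - 1) % n]) / (2 * h) for i in range(n)]

def energy_production(u, h):
    """Discrete analogue of the integral of u * du/dx; it telescopes to zero
    on a periodic grid because the difference operator is antisymmetric."""
    du = central_diff_periodic(u, h)
    return sum(ui * di for ui, di in zip(u, du))
```

The same test applied to a one-sided difference would give a nonzero value, i.e. spurious numerical energy production or dissipation.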

  4. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    Mapping the solution domain of n-finite elements into N-subdomains that may be processed in parallel by N-processors is an optimal one if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
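The annealing analogy can be sketched for a toy version of the mapping problem: assign n elements to two processors so as to minimize cut edges plus an imbalance penalty. The cooling schedule and penalty weight below are arbitrary choices, not the paper's:

```python
import math
import random

def mapping_cost(part, edges):
    """Cut edges between subdomains plus a penalty on workload imbalance."""
    cut = sum(1 for a, b in edges if part[a] != part[b])
    n1 = sum(part)
    return cut + 0.5 * abs(len(part) - 2 * n1)

def anneal_partition(n, edges, iters=20000, t0=2.0, seed=3):
    """Simulated-annealing mapping of n elements onto 2 processors."""
    rng = random.Random(seed)
    part = [rng.randrange(2) for _ in range(n)]
    cost = mapping_cost(part, edges)
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9  # linear cooling
        i = rng.randrange(n)
        part[i] ^= 1  # propose moving element i to the other processor
        new_cost = mapping_cost(part, edges)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost  # accept (always downhill, sometimes uphill)
        else:
            part[i] ^= 1  # reject and undo
    return part, cost
```

Uphill moves accepted at high temperature are what let the method escape poor local minima of the mapping cost.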

  5. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    NASA Astrophysics Data System (ADS)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. 
The analyst only influences some control parameters of the algorithm: the maximum number of ratios and the minimum relative group size of data points belonging to each ratio have to be defined. Computation of the models can be done with statistical software. In this study Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended for a more objective and straightforward calculation of isotope ratios in geochemistry than is current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3) [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007. (doi: 10.1016/j.csda.2006.08.014)
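The fitting loop described above can be sketched, under assumptions, as a hard-assignment EM for k lines through the origin, where each slope is an isotope-ratio estimate and under-populated groups are dropped (the full flexmix EM uses soft responsibilities; initialization and thresholds here are illustrative):

```python
def fit_ratio_mixture(points, k, min_size=3, iters=50):
    """Hard-assignment EM for k lines y = r * x through the origin:
    assign each point to the line with the smallest residual, refit each
    slope by least squares, and drop groups smaller than min_size."""
    ratios = sorted(y / x for x, y in points)
    slopes = [ratios[int((j + 0.5) * len(ratios) / k)] for j in range(k)]  # quantile init
    for _ in range(iters):
        groups = [[] for _ in slopes]
        for x, y in points:
            j = min(range(len(slopes)), key=lambda m: abs(y - slopes[m] * x))
            groups[j].append((x, y))
        # refit surviving groups; small groups are dropped, shrinking k
        slopes = [sum(x * y for x, y in g) / sum(x * x for x, _ in g)
                  for g in groups if len(g) >= min_size]
    return sorted(slopes)
```

The returned slopes play the role of the isotope-ratio estimates for the constituents of the mixed sample.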

  6. Renormalizable Electrodynamics of Scalar and Vector Mesons. Part II

    DOE R&D Accomplishments Database

    Salam, Abdus; Delbourgo, Robert

    1964-01-01

    The "gauge" technique" for solving theories introduced in an earlier paper is applied to scalar and vector electrodynamics. It is shown that for scalar electrodynamics, there is no {lambda}φ*2φ2 infinity in the theory, while with conventional subtractions vector electrodynamics is completely finite. The essential ideas of the gauge technique are explained in section 3, and a preliminary set of rules for finite computation in vector electrodynamics is set out in Eqs. (7.28) - (7.34).

  7. The Quantized Geometry of Visual Space: The Coherent Computation of Depth, Form, and Lightness. Revised Version.

    DTIC Science & Technology

    1982-08-01

of sensitivity with background luminance, and the finite capacity of visual short term memory are discussed in terms of a small set of ... binocular rivalry, reflectance rivalry, Fechner's paradox, decrease of threshold contrast with increased number of cycles in a grating pattern, hysteresis ... adaptation level tuning, Weber law modulation, shift of sensitivity with background luminance, and the finite capacity of visual

  8. Evaluation of Resuspension from Propeller Wash in DoD Harbors

    DTIC Science & Technology

    2016-09-01

Environmental Research and Development Center; FANS: Finite Analytic Navier-Stokes Solver; FOV: Field of View; ICP-MS: Inductively Coupled Plasma with ... Model (1984) and the Finite Analytic Navier-Stokes Solver (FANS) model (Chen et al., 2003) were set up to simulate and evaluate flow velocities and ... model for evaluating the resuspension potential of propeller wash by a tugboat and the FANS model for a DDG. The Finite-Analytic Navier-Stokes (FANS

  9. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors

    NASA Astrophysics Data System (ADS)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b²/N = α²/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.

  10. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    PubMed

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.

  11. Quantum spectral curve for arbitrary state/operator in AdS5/CFT4

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay; Kazakov, Vladimir; Leurent, Sébastien; Volin, Dmytro

    2015-09-01

    We give a derivation of quantum spectral curve (QSC) — a finite set of Riemann-Hilbert equations for exact spectrum of planar N=4 SYM theory proposed in our recent paper Phys. Rev. Lett. 112 (2014). We also generalize this construction to all local single trace operators of the theory, in contrast to the TBA-like approaches worked out only for a limited class of states. We reveal a rich algebraic and analytic structure of the QSC in terms of a so called Q-system — a finite set of Baxter-like Q-functions. This new point of view on the finite size spectral problem is shown to be completely compatible, though in a far from trivial way, with already known exact equations (analytic Y-system/TBA, or FiNLIE). We use the knowledge of this underlying Q-system to demonstrate how the classical finite gap solutions and the asymptotic Bethe ansatz emerge from our formalism in appropriate limits.

  12. Minimal measures for Euler-Lagrange flows on finite covering spaces

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Xia, Zhihong

    2016-12-01

    In this paper we study the minimal measures for positive definite Lagrangian systems on compact manifolds. We are particularly interested in manifolds with more complicated fundamental groups. Mather’s theory classifies the minimal or action-minimizing measures according to the first (co-)homology group of a given manifold. We extend Mather’s notion of minimal measures to a larger class for compact manifolds with non-commutative fundamental groups, and use finite coverings to study the structure of these extended minimal measures. We also define action-minimizers and minimal measures in the homotopical sense. Our program is to study the structure of homotopical minimal measures by considering Mather’s minimal measures on finite covering spaces. Our goal is to show that, in general, manifolds with a non-commutative fundamental group have a richer set of minimal measures, hence a richer dynamical structure. As an example, we study the geodesic flow on surfaces of higher genus. Indeed, by going to the finite covering spaces, the set of minimal measures is much larger and more interesting.

  13. Definition of NASTRAN sets by use of parametric geometry

    NASA Technical Reports Server (NTRS)

    Baughn, Terry V.; Tiv, Mehran

    1989-01-01

    Many finite element preprocessors describe finite element model geometry with points, lines, surfaces and volumes. One method for describing these basic geometric entities is by use of parametric cubics, which are useful for representing complex shapes. The lines, surfaces and volumes may be discretized for follow-on finite element analysis. The ability to limit or selectively recover results from the finite element model is extremely important to the analyst. Equally important is the ability to easily apply boundary conditions. Although graphical preprocessors have made these tasks easier, model complexity may not lend itself to easy identification of the group of grid points desired for data recovery or application of constraints. A methodology is presented which makes use of the assignment of grid point locations in parametric coordinates. The parametric coordinates provide a convenient ordering of the grid point locations and a method for retrieving the grid point IDs from the parent geometry. The selected grid points may then be used for the generation of the appropriate set and constraint cards.

  14. Optimizing Nanoscale Quantitative Optical Imaging of Subfield Scattering Targets

    PubMed Central

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui; Sohn, Martin; Silver, Richard M.

    2016-01-01

    The full 3-D scattered field above finite sets of features has been shown to contain a continuum of spatial frequency information, and with novel optical microscopy techniques and electromagnetic modeling, deep-subwavelength geometrical parameters can be determined. Similarly, by using simulations, scattering geometries and experimental conditions can be established to tailor scattered fields that yield lower parametric uncertainties while decreasing the number of measurements and the area of such finite sets of features. Such optimized conditions are reported through quantitative optical imaging in 193 nm scatterfield microscopy using feature sets up to four times smaller in area than state-of-the-art critical dimension targets. PMID:27805660

  15. The Chiral Separation Effect in quenched finite-density QCD

    NASA Astrophysics Data System (ADS)

    Puhr, Matthias; Buividovich, Pavel

    2018-03-01

    We present results of a study of the Chiral Separation Effect (CSE) in quenched finite-density QCD. Using a recently developed numerical method we calculate the conserved axial current for exactly chiral overlap fermions at finite density for the first time. We compute the anomalous transport coefficient for the CSE in the confining and deconfining phases and investigate possible deviations from the universal value. In both phases we find that non-perturbative corrections to the CSE are absent and we reproduce the universal value for the transport coefficient within small statistical errors. Our results suggest that the CSE can be used to determine the renormalisation factor of the axial current.

  16. Application of the Extreme Value Distribution to Estimate the Uncertainty of Peak Sound Pressure Levels at the Workplace.

    PubMed

    Lenzuni, Paolo

    2015-07-01

    The purpose of this article is to develop a method for the statistical inference of the maximum peak sound pressure level and of the associated uncertainty. Both quantities are requested by the EU directive 2003/10/EC for a complete and solid assessment of the noise exposure at the workplace. Based on the characteristics of the sound pressure waveform, it is hypothesized that the distribution of the measured peak sound pressure levels follows the extreme value distribution. The maximum peak level is estimated as the largest member of a finite population following this probability distribution. The associated uncertainty is also discussed, taking into account not only the contribution due to the incomplete sampling but also the contribution due to the finite precision of the instrumentation. The largest of the set of measured peak levels underestimates the maximum peak sound pressure level. The underestimate can be as large as 4 dB if the number of measurements is limited to 3-4, which is common practice in occupational noise assessment. The expanded uncertainty is also quite large (~2.5 dB), with a weak dependence on the sampling details. Following the procedure outlined in this article, a reliable comparison between the peak sound pressure levels measured in a workplace and the EU directive action limits is possible. Non-compliance can occur even when the largest of the set of measured peak levels is several dB below such limits. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
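The underestimation effect described above is easy to reproduce numerically. The following sketch assumes an illustrative Gumbel (extreme value type I) model with hypothetical parameters (mu, beta, population size, number of measurements); none of these numbers come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gumbel parameters for workplace peak levels (dB); illustrative only.
mu, beta = 135.0, 1.0
pop_size = 250            # assumed finite population of peak events
trials = 5000

# Maximum peak level: largest member of the finite population.
pop = rng.gumbel(mu, beta, size=(trials, pop_size))
mean_pop_max = pop.max(axis=1).mean()

# Largest of only n = 3 measured peaks, as in common practice.
n_meas = 3
samples = rng.gumbel(mu, beta, size=(trials, n_meas))
mean_largest = samples.max(axis=1).mean()

underestimate = mean_pop_max - mean_largest
print(f"mean population maximum: {mean_pop_max:.1f} dB")
print(f"mean largest of {n_meas} measurements: {mean_largest:.1f} dB")
print(f"typical underestimate: {underestimate:.1f} dB")
```

With these assumed parameters the largest of three measurements sits several dB below the population maximum, in line with the magnitude reported in the abstract.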

  17. A Matter of Classes: Stratifying Health Care Populations to Produce Better Estimates of Inpatient Costs

    PubMed Central

    Rein, David B

    2005-01-01

    Objective: To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources: Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design: A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings: The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions: Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
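The core idea, assigning patients to latent severity classes with a finite mixture model and then predicting within class, can be sketched with a two-component Gaussian mixture fitted by EM. The data, class count, and parameters below are synthetic stand-ins, not the Georgia Medicaid data or the paper's four-class model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual log-cost data: a majority low-cost class and a smaller
# high-severity class (illustrative numbers only).
n = 2000
healthy = rng.random(n) < 0.8
cost = np.where(healthy, rng.normal(2.0, 0.5, n), rng.normal(5.0, 0.8, n))

# EM for a two-component 1D Gaussian finite mixture.
w = np.array([0.5, 0.5])
mu = np.array([1.0, 6.0])
sd = np.array([1.0, 1.0])
for _ in range(200):
    dens = np.exp(-0.5 * ((cost[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = w * dens                              # E-step: class responsibilities
    r /= r.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0)                        # M-step: update parameters
    w = nk / n
    mu = (r * cost[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (cost[:, None] - mu) ** 2).sum(axis=0) / nk)

# Stratified prediction (class means) vs. pooled prediction (grand mean).
cls = r.argmax(axis=1)
pooled_sse = ((cost - cost.mean()) ** 2).sum()
strat_sse = sum(((cost[cls == k] - cost[cls == k].mean()) ** 2).sum() for k in (0, 1))
print(f"variance explained by class stratification: {1 - strat_sse / pooled_sse:.2f}")
```

The within-class sum of squared errors is far below the pooled one, which is the mechanism behind the R2 gain reported in the abstract.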

  18. Ancestral Relationships Using Metafounders: Finite Ancestral Populations and Across Population Relationships

    PubMed Central

    Legarra, Andres; Christensen, Ole F.; Vitezica, Zulma G.; Aguilar, Ignacio; Misztal, Ignacy

    2015-01-01

    Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the reconciliation of both approaches and, in particular, combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view, in which each population is considered as an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a "metafounder," a pseudo-individual included as founder of the pedigree and similar to an "unknown parent group." Metafounders have self- and across relationships according to a set of parameters, which measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance on crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. Use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. PMID:25873631

  19. Emergent kink statistics at finite temperature

    DOE PAGES

    Lopez-Ruiz, Miguel Angel; Yepez-Martinez, Tochtli; Szczepaniak, Adam; ...

    2017-07-25

    In this paper we use 1D quantum mechanical systems with a Higgs-like interaction potential to study the emergence of topological objects at finite temperature. Two different model systems are studied, the standard double-well potential model and a newly introduced discrete kink model. Using Monte Carlo simulations as well as analytic methods, we demonstrate how kinks become abundant at low temperatures. These results may offer useful insights into how topological phenomena may occur in QCD.
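A minimal version of the Monte Carlo step can be sketched as a Metropolis simulation of a 1D lattice field in a double-well potential, counting kinks as sign changes between neighbouring sites. The lattice size, spacing, coupling, and update parameters below are hypothetical choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Metropolis sweep for a 1D lattice field with double-well
# potential V(phi) = lam * (phi^2 - 1)^2 (all parameters are assumptions).
N, a, lam = 200, 0.5, 1.0
phi = rng.choice([-1.0, 1.0], size=N)        # start in the two degenerate vacua

for sweep in range(500):
    for i in range(N):
        old = phi[i]
        new = old + rng.normal(0.0, 0.5)
        left, right = phi[(i - 1) % N], phi[(i + 1) % N]
        # Local change in the lattice action (kinetic + potential terms).
        dS = ((right - new) ** 2 + (new - left) ** 2
              - (right - old) ** 2 - (old - left) ** 2) / (2 * a) \
            + a * lam * ((new ** 2 - 1) ** 2 - (old ** 2 - 1) ** 2)
        if dS < 0 or rng.random() < np.exp(-dS):
            phi[i] = new

# A "kink" is a sign change between neighbouring sites on the periodic lattice.
kinks = int((np.sign(phi) != np.sign(np.roll(phi, -1))).sum())
print(f"kink count on a {N}-site lattice: {kinks}")
```

Repeating the run while scaling the lattice action with temperature would show how the kink abundance changes, which is the quantity the paper tracks.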

  20. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: McDonnell-Douglas Helicopter Company achievements

    NASA Technical Reports Server (NTRS)

    Toossi, Mostafa; Weisenburger, Richard; Hashemi-Kia, Mostafa

    1993-01-01

    This paper presents a summary of some of the work performed by McDonnell Douglas Helicopter Company under the NASA Langley-sponsored rotorcraft structural dynamics program known as DAMVIBS (Design Analysis Methods for VIBrationS). A set of guidelines which is applicable to dynamic modeling, analysis, testing, and correlation of both helicopter airframes and a large variety of structural finite element models is presented. Utilization of these guidelines and the key features of their applications to vibration modeling of helicopter airframes are discussed. Correlation studies with the test data, together with the development and applications of a set of efficient finite element model checkout procedures, are demonstrated on a large helicopter airframe finite element model. Finally, the lessons learned and the benefits resulting from this program are summarized.

  1. An Automated Method for Landmark Identification and Finite-Element Modeling of the Lumbar Spine.

    PubMed

    Campbell, Julius Quinn; Petrella, Anthony J

    2015-11-01

    The purpose of this study was to develop a method for the automated creation of finite-element models of the lumbar spine. Custom scripts were written to extract bone landmarks of lumbar vertebrae and assemble L1-L5 finite-element models. End-plate borders, ligament attachment points, and facet surfaces were identified. Landmarks were identified to maintain mesh correspondence between meshes for later use in statistical shape modeling. 90 lumbar vertebrae were processed, creating 18 subject-specific finite-element models. Finite-element model surfaces and ligament attachment points were reproduced within 1e-5 mm of the bone surface, including the critical contact surfaces of the facets. Element quality exceeded specifications in 97% of elements for the 18 models created. The current method is capable of producing subject-specific finite-element models of the lumbar spine with good accuracy, quality, and robustness. The automated methods developed represent an advance in the state of the art of subject-specific lumbar spine modeling, at a scale not possible with prior manual and semiautomated methods.

  2. Min and max are the only continuous &-, V-operations for finite logics

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1992-01-01

    Experts usually express their degrees of belief in their statements by the words of a natural language (like 'maybe', 'perhaps', etc.). If an expert system contains the degrees of belief t(A) and t(B) that correspond to the statements A and B, and a user asks this expert system whether 'A&B' is true, then it is necessary to come up with a reasonable estimate for the degree of belief of A&B. The operation that processes t(A) and t(B) into such an estimate t(A&B) is called an &-operation. Many different &-operations have been proposed. Which of them to choose? This can be (in principle) done by interviewing experts and eliciting an &-operation from them, but such a process is very time-consuming and therefore not always possible. So, usually, to choose an &-operation, the finite set of actually possible degrees of belief is extended to an infinite set (e.g., to the interval (0,1)), an operation is defined there, and this operation is then restricted to the finite set. Here, only the original finite set is considered. It is shown that the reasonable assumption that an &-operation is continuous (i.e., that gradual change in t(A) and t(B) must lead to a gradual change in t(A&B)) uniquely determines min as the &-operation. Likewise, max is the only continuous V-operation. These results are in good accordance with the experimental analysis of 'and' and 'or' in human beliefs.

  3. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.

  4. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    DTIC Science & Technology

    2016-05-12

    Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes are studied. Keywords: mathematical statistics; time series; Markov chains; random processes.

  5. Statistical properties and condensate fluctuation of attractive Bose gas with finite number of particles

    NASA Astrophysics Data System (ADS)

    Bera, Sangita; Lekala, Mantile Leslie; Chakrabarti, Barnali; Bhattacharyya, Satadal; Rampho, Gaotsiwe Joel

    2017-09-01

    We study the condensate fluctuation and several statistics of a weakly interacting attractive Bose gas of 7Li atoms in a harmonic trap. Using an exact recursion relation we calculate the canonical ensemble partition function and study the thermal evolution of the condensate. As the 7Li condensate is associated with collapse, the number of condensate atoms is truly finite, which facilitates study of the condensate in the mesoscopic region. Since the system is highly correlated, we utilize the two-body correlated basis function to get the many-body effective potential, which is further used to calculate the energy levels. Taking the van der Waals interaction as the interatomic interaction, we calculate several quantities such as the condensate fraction, the root-mean-square fluctuation δn0, and central moments of different orders. We observe the effect of finite size on the calculation of condensate fluctuations and the effect of attractive interaction over the noninteracting limit. We observe the depletion of the condensate with increase in temperature. The calculated moments nicely exhibit the mesoscopic effect. The sharp fall in the root-mean-square fluctuation near the critical point signifies the possibility of a phase transition.
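The exact canonical recursion step is well known for the noninteracting limit, and a minimal sketch for an ideal Bose gas in a 3D isotropic harmonic trap is given below. The paper's system additionally includes the attractive effective interaction, which this sketch omits; units are hbar*omega = k_B = 1 with the ground-state energy shifted to 0.

```python
import numpy as np

def z1(beta):
    """Single-particle partition function of a 3D isotropic harmonic trap."""
    return (1.0 / (1.0 - np.exp(-beta))) ** 3

def condensate_fraction(N, beta):
    # Canonical recursion: Z_n = (1/n) * sum_k z1(k*beta) * Z_{n-k}
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        k = np.arange(1, n + 1)
        Z[n] = (z1(k * beta) * Z[n - k]).sum() / n
    # Ground-state occupation: <n_0> = sum_k e^{-k*beta*e0} Z_{N-k} / Z_N, e0 = 0.
    k = np.arange(1, N + 1)
    n0 = (Z[N - k] / Z[N]).sum()
    return n0 / N

f_cold = condensate_fraction(100, 1.0 / 0.5)   # T = 0.5, well below T_c ~ 4.4
f_hot = condensate_fraction(100, 1.0 / 5.0)    # T = 5.0, above T_c
print(f"condensate fraction: {f_cold:.3f} (cold), {f_hot:.3f} (hot)")
```

The recursion makes the thermal depletion of the condensate visible for a truly finite particle number, the same mesoscopic regime the abstract discusses.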

  6. A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2018-05-01

    Using continuum-extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through ḡ²(L_0) = 2.012, we quote L_0 Λ^{N_f=3}_{MS-bar} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to the infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, …, 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al., Phys. Rev. Lett. 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^{N_f=3}_{MS-bar}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.

  7. Power flow as a complement to statistical energy analysis and finite element analysis

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1987-01-01

    Present methods of analysis of the structural response and the structure-borne transmission of vibrational energy use either finite element (FE) techniques or statistical energy analysis (SEA) methods. The FE methods are a very useful tool at low frequencies, where the number of resonances involved in the analysis is rather small. On the other hand, SEA methods can predict with acceptable accuracy the response and energy transmission between coupled structures at relatively high frequencies, where the structural modal density is high and a statistical approach is the appropriate solution. In the mid-frequency range, a relatively large number of resonances exist, which makes the finite element method too costly, while SEA methods can only predict an average response level. In this mid-frequency range a possible alternative is to use power flow techniques, where the input and flow of vibrational energy to excited and coupled structural components can be expressed in terms of input and transfer mobilities. This power flow technique can be extended from low to high frequencies, and it can be integrated with established FE models at low frequencies and SEA models at high frequencies to provide a verification of the method. This method of structural analysis using power flow and mobility methods, and its integration with SEA and FE analysis, is applied to the case of two thin beams joined together at right angles.

  8. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems; thus an uncertain ensemble combining non-parametric with mixed fuzzy and interval parametric uncertainties comes into being. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by the first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining the second-order terms of the Taylor series, neglecting only the mixed second-order terms. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
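The first-order interval perturbation idea can be shown in miniature: bound a response over an interval parameter using a first-order Taylor expansion about the interval midpoint, and compare against brute-force sampling. The response function and interval below are hypothetical, chosen only to illustrate the approximation.

```python
import numpy as np

def f(k):
    # Illustrative scalar response function (an assumption, not the paper's model).
    return 1.0 / (k - 3.0) ** 2 + 0.1 * k

k_mid, k_rad = 5.0, 0.2                     # interval parameter [4.8, 5.2]

# First-order interval bounds: f(k_mid) -/+ |f'(k_mid)| * radius.
h = 1e-6
df = (f(k_mid + h) - f(k_mid - h)) / (2 * h)
lo1 = f(k_mid) - abs(df) * k_rad
hi1 = f(k_mid) + abs(df) * k_rad

# Brute-force reference bounds by dense sampling of the interval.
ks = np.linspace(k_mid - k_rad, k_mid + k_rad, 2001)
lo_ref, hi_ref = f(ks).min(), f(ks).max()
print(f"first-order: [{lo1:.4f}, {hi1:.4f}]   reference: [{lo_ref:.4f}, {hi_ref:.4f}]")
```

The first-order bounds track the true interval closely here; the second-order and Chebyshev variants in the paper exist precisely to tighten this approximation when curvature matters.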

  9. Renormalization-group theory for finite-size scaling in extreme statistics

    NASA Astrophysics Data System (ADS)

    Györgyi, G.; Moloney, N. R.; Ozogány, K.; Rácz, Z.; Droz, M.

    2010-04-01

    We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.

  10. Simulation of Voltage SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Li, Yi-Jin

    2010-06-01

    A three-dimensional finite element model for phase-change random access memory is established to simulate the electric, thermal and phase-state distributions during the SET operation. The model is applied to simulate the SET behaviors of the heater addition structure (HS) and the ring-type contact in the bottom electrode (RIB) structure. The simulation results indicate that a small bottom electrode contactor (BEC) is beneficial for heat efficiency and reliability in the HS cell, and a bottom electrode contactor with size Fx = 80 nm is a good choice for the RIB cell. Also shown is that the appropriate SET pulse time is 100 ns for low power consumption and fast operation.

  11. Self-consistent field theory simulations of polymers on arbitrary domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu; Laachi, Nabil; Delaney, Kris

    2016-12-15

    We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.

  12. Accounting for patient variability in finite element analysis of the intact and implanted hip and knee: a review.

    PubMed

    Taylor, Mark; Bryan, Rebecca; Galloway, Francis

    2013-02-01

    It is becoming increasingly difficult to differentiate the performance of new joint replacement designs using available preclinical test methods. Finite element analysis is commonly used, and the majority of published studies are performed on representative anatomy, assuming optimal implant placement, subjected to idealised loading conditions. There are significant differences between patients, and accounting for this variability will lead to better assessment of the risk of failure. This review paper provides a comprehensive overview of the techniques available to account for patient variability. There is a brief overview of patient-specific model generation techniques, followed by a review of multisubject patient-specific studies performed on the intact and implanted femur and tibia. In particular, the challenges and limitations of manually generating models for such studies are discussed. To account for patient variability efficiently, statistical shape and intensity models (SSIMs) are being developed. Such models have the potential to synthetically generate thousands of representative models from a much smaller training set. Combined with the automation of the prosthesis implantation process, SSIMs provide a potentially powerful tool for assessing the next generation of implant designs. The potential applications of SSIMs are discussed along with their limitations. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
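The moment-matching gradient ascent at the heart of such learning can be sketched for the simplest case, an independent-spin maximum entropy model, where the model average is available in closed form. In the pairwise Ising case discussed above, the exact model average below would be replaced by a Gibbs-sampling estimate; all sizes and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic +/-1 "spike" data from known fields (illustrative, not retina data).
n_units, n_samples = 20, 5000
true_h = rng.normal(0.0, 0.8, n_units)
p_up = 0.5 * (1.0 + np.tanh(true_h))
data = np.where(rng.random((n_samples, n_units)) < p_up, 1.0, -1.0)
m_data = data.mean(axis=0)                 # empirical magnetizations

# Log-likelihood gradient ascent: push model moments toward data moments.
h = np.zeros(n_units)
eta = 0.5
for _ in range(2000):
    m_model = np.tanh(h)                   # exact average for independent spins
    h += eta * (m_data - m_model)

mismatch = np.abs(m_data - np.tanh(h)).max()
print(f"max moment mismatch after learning: {mismatch:.2e}")
```

The fixed point reproduces the empirical moments exactly; the paper's "rectified" dynamics addresses the slow, curvature-dependent convergence that appears once couplings and sampling noise enter.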

  14. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-&-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  15. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.
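    For orientation, the simplest instance of the estimation problem above is an unweighted least-squares fit of phase offset and frequency to a finite set of phase data. This sketch deliberately ignores the noise covariance (the GACV) that the optimal estimators incorporate, so it is a baseline for comparison, not the paper's method; the function name is ours.

```python
import numpy as np

def ls_clock_trend(t, x):
    """Ordinary least-squares estimate of clock phase offset and
    frequency offset (linear trend) from finite phase samples x(t).
    An optimal estimator would instead weight the fit using the
    generalized autocovariance (GACV) of the clock noise."""
    A = np.column_stack([np.ones_like(t), t])   # design: offset + rate
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef                                  # [phase offset, freq offset]
```

    For white phase noise the two estimators coincide; for power-law clock noises with stationary d-th increments they differ, which is the gap the paper's GACV-based machinery closes.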

  16. Atomistic Cohesive Zone Models for Interface Decohesion in Metals

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.; Saether, Erik; Glaessgen, Edward H.

    2009-01-01

    Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship characterizing the load transfer across the plane of a growing edge crack is extracted from atomistic simulations for use within a continuum finite element model. The methodology for the atomistic derivation of a cohesive-zone law is presented. This procedure can be implemented to build cohesive-zone finite element models for simulating fracture in nanocrystalline or ultrafine grained materials.

  17. Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    An elastic-plastic algorithm based on von Mises and associative flow criteria is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of the elastic-plastic mixed-iterative analysis is appropriate.

  18. Shock and Vibration Symposium (59th) Held in Albuquerque, New Mexico on 18-20 October 1988. Volume 3

    DTIC Science & Technology

    1988-10-01

    Excerpts from the contents: "Statistical Energy Analysis: An Overview of Its Development and Engineering Applications" (J. E. Manning); "Vibroacoustic Response Using the Finite Element Method and Statistical Energy Analysis" (F. L. Gloyna); "Study of Helium Effect on Spacecraft Random Vibration..."; "Modeling of Vibration Transmission in a Damped Beam Structure Using Statistical Energy Analysis" (S. S...)

  19. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  20. The Benard problem: A comparison of finite difference and spectral collocation eigen value solutions

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan

    1995-01-01

    The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a 2nd-order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded that of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the 2nd-order finite difference scheme. This suggests that spectral schemes may actually be faster to implement than higher-order finite difference schemes.
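    The size of the accuracy gap reported above is easy to reproduce with Trefethen's Chebyshev differentiation matrix. This sketch is illustrative (not the authors' code): it differentiates exp(x) spectrally on 16 Chebyshev points and with a 2nd-order finite difference on a uniform grid of the same size.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix and grid on the
    N+1 points x_j = cos(j*pi/N) (Trefethen's construction)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)
    c = np.where((j == 0) | (j == N), 2.0, 1.0) * (-1.0) ** j
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for diagonal
    return D, x

# Spectral vs. 2nd-order finite difference error for d/dx exp(x):
D, x = cheb(15)
spec_err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
xu = np.linspace(-1.0, 1.0, 16)
fd_err = np.max(np.abs(np.gradient(np.exp(xu), xu) - np.exp(xu)))
```

    With only 16 points the spectral derivative is accurate to roughly machine precision while the finite difference error sits near h^2, consistent with the many-orders-of-magnitude gap quoted in the abstract.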

  1. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  2. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  3. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Simulation of SET Operation in Phase-Change Random Access Memories with Heater Addition and Ring-Type Contactor for Low-Power Consumption by Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Gong, Yue-Feng; Song, Zhi-Tang; Ling, Yun; Liu, Yan; Feng, Song-Lin

    2009-11-01

    A three-dimensional finite element model for phase change random access memory (PCRAM) is established for comprehensive electrical and thermal analysis during the SET operation. The SET behaviours of the heater addition structure (HS) and the ring-type contact in bottom electrode (RIB) structure are compared with each other. There are two ways to reduce the RESET current: applying a high-resistivity interfacial layer and building a new device structure. The simulation results indicate that the SET current varies little between these power-reduction approaches. Taking both the RESET and SET operation currents into consideration, this study shows that the RIB-structure PCRAM cell is suitable for future high-density devices due to its high heat efficiency in the RESET operation.

  4. Statespace geometry of puff formation in pipe flow

    NASA Astrophysics Data System (ADS)

    Budanur, Nazmi Burak; Hof, Bjoern

    2017-11-01

    Localized patches of chaotically moving fluid known as puffs play a central role in the transition to turbulence in pipe flow. Puffs coexist with the laminar flow, and their large-scale dynamics sets the critical Reynolds number: when the rate of puff splitting exceeds that of decay, turbulence in a long pipe becomes sustained in a statistical sense. Since puffs appear despite the linear stability of the Hagen-Poiseuille flow, one expects them to emerge from bifurcations of finite-amplitude solutions of the Navier-Stokes equations. In numerical simulations of pipe flow, Avila et al. discovered a pair of streamwise-localized relative periodic orbits, which are time-periodic solutions with spatial drifts. We combine symmetry reduction and Poincaré section methods to compute the unstable manifolds of these orbits, revealing statespace structures associated with different stages of puff formation.

  5. Space use by foragers consuming renewable resources

    NASA Astrophysics Data System (ADS)

    Abramson, Guillermo; Kuperman, Marcelo N.; Morales, Juan M.; Miller, Joel C.

    2014-05-01

    We study a simple model of a forager as a walk that modifies a relaxing substrate. Despite its simplicity, this provides insight into a number of relevant and non-intuitive facts. Even without memory of the good places to feed and with no explicit cost of moving, we observe the emergence of a finite home range. We characterize the walks and the use of resources in several statistical ways, involving the behavior of the average used fraction of the system, the length of the cycles followed by the walkers, and the frequency of visits to plants. Preliminary results on population effects are explored by means of a system of two not directly interacting animals. Properties of the overlap of home ranges show the existence of a set of parameters that provides the best utilization of the shared resource.
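    The model class lends itself to a very short simulation. This sketch is a guess at a minimal variant (the update rule, ring geometry, and all parameter values are ours for illustration, not taken from the paper): a walker greedily follows resource on a ring of plants that relax back toward full, and we measure the fraction of sites it ever uses.

```python
import numpy as np

def forage(n_sites=100, steps=5000, regrow=0.01, seed=0):
    """Toy forager on a ring of renewable plants: the walker consumes
    the resource at its site (setting it to 0), the substrate relaxes
    back toward 1 at rate `regrow`, and the walker steps to whichever
    neighbour currently holds more resource. Returns the fraction of
    sites ever visited, a crude proxy for the used range."""
    rng = np.random.default_rng(seed)
    r = np.ones(n_sites)            # resource on each plant
    pos = n_sites // 2
    visited = np.zeros(n_sites, bool)
    for _ in range(steps):
        r += regrow * (1.0 - r)     # substrate relaxes toward full
        r[pos] = 0.0                # forager consumes the local plant
        visited[pos] = True
        left, right = (pos - 1) % n_sites, (pos + 1) % n_sites
        if r[left] == r[right]:     # break ties at random
            pos = left if rng.random() < 0.5 else right
        else:
            pos = left if r[left] > r[right] else right
    return float(visited.mean())
```

    In the paper the emergence of a finite home range depends on the interplay between movement rules and the substrate relaxation time; this toy only illustrates the bookkeeping one would instrument to measure it.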

  6. The pdf approach to turbulent flow

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1990-01-01

    This paper provides a detailed discussion of the theory and application of probability density function (pdf) methods, which provide a complete statistical description of turbulent flow fields at a single point or a finite number of points. The basic laws governing the flow of Newtonian fluids are set up in the Eulerian and the Lagrangian frame, and the exact and linear equations for the characteristic functionals in those frames are discussed. Pdf equations in both frames are derived as Fourier transforms of the equations of the characteristic functions. Possible formulations for the nonclosed terms in the pdf equation are discussed, their properties are assessed, and closure modes for the molecular-transport and the fluctuating pressure-gradient terms are reviewed. The application of pdf methods to turbulent combustion flows, supersonic flows, and the interaction of turbulence with shock waves is discussed.

  7. The Highly Adaptive Lasso Estimator

    PubMed Central

    Benkeser, David; van der Laan, Mark

    2017-01-01

    Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, does not rely on local smoothness assumptions nor is it constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits that have variation norm bounded by a constant. Using empirical process theory, we establish a fast minimal rate of convergence of our proposed estimator and illustrate how such an estimator can be constructed using standard software. In simulations, we show that the finite-sample performance of our estimator is competitive with other popular machine learning techniques across a variety of data generating mechanisms. We also illustrate competitive performance in real data examples using several publicly available data sets. PMID:29094111

  8. Optimal SSN Tasking to Enhance Real-time Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.

    2016-09-01

    Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and develop sensor tasking schedules based on operator-configured, state information-gain metrics to determine observation priorities. This improves the efficiency with which sensors target objects as awareness changes and new information is needed, rather than solely at predefined frequencies. A net-centric, service-oriented architecture (SOA) allows for JCATS integration into existing SSA systems. Testing has shown operationally relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.

  9. Evidence for a Finite-Temperature Insulator.

    PubMed

    Ovadia, M; Kalok, D; Tamir, I; Mitra, S; Sacépé, B; Shahar, D

    2015-08-27

    In superconductors the zero-resistance current flow is protected from dissipation at finite temperatures (T) by virtue of the short-circuit condition maintained by the electrons that remain in the condensed state. The recently suggested finite-T insulator and the "superinsulating" phase are different because any residual mechanism of conduction will eventually become dominant as the finite-T insulator sets in. If the residual conduction is small, it may be possible to observe the transition to these intriguing states. We show that the conductivity of the high magnetic-field insulator terminating superconductivity in amorphous indium oxide exhibits an abrupt drop and seems to approach zero conductance at T < 0.04 K. We discuss our results in the light of theories that lead to a finite-T insulator.

  10. Improved Life Prediction of Turbine Engine Components Using a Finite Element Based Software Called Zencrack

    DTIC Science & Technology

    2003-09-01

    Excerpts: "Different materials within crack-block..."; "Application of required user edge node sets..."; "Users have at their disposal all of the capabilities within these finite element programs and may, if desired, include any number of..."

  11. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  12. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is the development of physical models explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; small-scale heterogeneity, however, is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first works in the spatial domain, essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method can generate distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is present only in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0-10 Hz). This occurs partially through S-wave refraction, but mostly by P-to-S and Rg-to-S wave conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
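    The wavenumber-domain method can be sketched in a few lines: filter white noise with the square root of the target power spectrum and transform back. This sketch uses a Gaussian autocorrelation (the von Karman case swaps in a different spectrum, with the Hurst exponent as an extra parameter); the grid size, correlation length, and perturbation level are illustrative.

```python
import numpy as np

def gaussian_random_medium(n=128, dx=1.0, a=5.0, sigma=0.05, seed=0):
    """Generate a 2D random velocity-perturbation field by filtering
    white noise in the wavenumber domain (Frankel & Clayton, 1986
    style): multiply the noise spectrum by the square root of a
    Gaussian power spectrum, then normalize to standard deviation
    `sigma`. `a` is the correlation length in grid units of `dx`."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    spec = np.fft.fft2(rng.standard_normal((n, n)))
    spec *= np.exp(-k2 * a * a / 8.0)    # sqrt of a Gaussian PSD
    field = np.real(np.fft.ifft2(spec))  # conjugate symmetry kept: real
    field *= sigma / field.std()         # normalize to target std. dev.
    return field
```

    The resulting field would then be added as a fractional perturbation to the deterministic velocity model before running the finite difference simulation.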

  13. Optimum element density studies for finite-element thermal analysis of hypersonic aircraft structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy; Muramoto, Kyle M.

    1990-01-01

    Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models with different element densities set up for one cell of the orbiter wing. Also, a method for optimizing the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled for the examination of thermal analysis solution accuracy and the extent of computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies suggest that modeling large, hypersonic aircraft structures with high-density elements for transient thermal analysis is feasible if a CPU optimization technique is used.

  14. Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that create pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypothesis is known in NCPA, but not in ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182

  15. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of the probabilistic finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show that the PFEM is a very powerful tool for determining second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties, and crack length, orientation, and location.

  16. Two characteristic temperatures for a Bose-Einstein condensate of a finite number of particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idziaszek, Z.; Institut fuer Theoretische Physik, Universitaet Hannover, D-30167 Hannover,; Rzazewski, K.

    2003-09-01

    We consider two characteristic temperatures for a Bose-Einstein condensate, which are related to certain properties of the condensate statistics. We calculate them for an ideal gas confined in power-law traps and show that they approach the critical temperature in the limit of a large number of particles. The considered characteristic temperatures can be useful in studies of Bose-Einstein condensates of a finite number of atoms, as they indicate the point of the phase transition.

  17. Entropy Change for the Irreversible Heat Transfer between Two Finite Objects

    DTIC Science & Technology

    2015-06-10

    Excerpts: "...independent heat capacities. Another interesting aspect of this problem is to compute the entropy change during the process. Textbooks typically only..."; "...case where the two objects have unequal heat capacities, both of which are finite. (From a calculus point of view, each time an increment of heat dQ is..."
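    The computation alluded to in the excerpt is short. For temperature-independent heat capacities C1 and C2, energy conservation fixes the final temperature at Tf = (C1 T1 + C2 T2)/(C1 + C2), and integrating dS = C dT/T for each object gives a total entropy change of C1 ln(Tf/T1) + C2 ln(Tf/T2), which is non-negative and vanishes only when T1 = T2. A minimal sketch (function name ours):

```python
import math

def entropy_change(c1, t1, c2, t2):
    """Total entropy change for irreversible heat transfer between two
    finite objects with constant heat capacities c1, c2 and initial
    temperatures t1, t2 (absolute). Each object's entropy change is
    the integral of C dT / T from its initial temperature to the
    common final temperature."""
    tf = (c1 * t1 + c2 * t2) / (c1 + c2)      # equilibrium temperature
    return c1 * math.log(tf / t1) + c2 * math.log(tf / t2)
```

    For equal capacities C and temperatures T1, T2 this reduces to C ln(Tf^2 / (T1 T2)), positive by the AM-GM inequality since Tf is the arithmetic mean of T1 and T2.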

  18. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
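    For reference, the conventional square 16-QAM constellation and its figures of merit can be generated as follows; note this baseline is not one of the paper's minimum-average-energy, 90-deg rotationally symmetric sets, and the helper name is ours.

```python
import itertools
import numpy as np

def square_qam(m):
    """Standard square m-QAM constellation (m a perfect square) on the
    odd-integer grid: in-phase and quadrature levels ..., -3, -1, 1, 3, ...
    Returned as complex points i + jq."""
    side = int(round(np.sqrt(m)))
    assert side * side == m, "m must be a perfect square"
    levels = np.arange(-side + 1, side, 2, dtype=float)
    return (levels[None, :] + 1j * levels[:, None]).ravel()

pts = square_qam(16)
avg_energy = np.mean(np.abs(pts) ** 2)   # average signal-set energy
d2min = min(abs(a - b) ** 2              # min squared Euclidean distance
            for a, b in itertools.combinations(pts, 2))
```

    These two quantities (average energy and minimum squared distance), together with the number of nearest neighbors, are exactly the code-selection criteria the abstract mentions.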

  19. Proceedings of the NASTRAN (Tradename) Users’ Colloquium (15th) Held in Kansas City, Missouri on 4-8 May 1987

    DTIC Science & Technology

    1987-08-01

    Excerpts: "...HVAC duct hanger system over an extensive frequency range. The finite element, component mode synthesis, and statistical energy analysis methods are..."; "...(800-5,000 Hz) analysis was conducted with Statistical Energy Analysis (SEA) coupled with a closed-form harmonic beam analysis program..."; "...resonances may be obtained by using a finer frequency increment. The basic assumption used in SEA analysis is that within each band..."

  20. Proceedings of the NASTRAN (Tradename) Users’ Colloquium (17th) Held in San Antonio, Texas on 24-28 April 1989

    DTIC Science & Technology

    1989-03-01

    Excerpts: "...statistical energy analysis, the finite element method, and the power flow method. Experimental solutions are the most common in the literature..."; "...due to the added weights and inertias of the transducers attached to an experimental structure. Statistical energy analysis (SEA) is a computational method..."; cited references include "Analysis and Diagnosis," Journal of Sound and Vibration, Vol. 115, No. 3, pp. 405-422 (1987), and Lyon, R. L., Statistical Energy Analysis of Dynamical Systems.

  1. Self-consistent mean-field approach to the statistical level density in spherical nuclei

    NASA Astrophysics Data System (ADS)

    Kolomietz, V. M.; Sanzhur, A. I.; Shlomo, S.

    2018-06-01

    A self-consistent mean-field approach within the extended Thomas-Fermi approximation with Skyrme forces is applied to the calculation of the statistical level density in spherical nuclei. Landau's concept of quasiparticles with the nucleon effective mass and the correct description of the continuum states for finite-depth potentials are taken into consideration. The A dependence and the temperature dependence of the statistical inverse level-density parameter K are obtained in good agreement with experimental data.

  2. Evaluation of the 3D Finite Element Method Using a Tantalum Rod for Osteonecrosis of the Femoral Head

    PubMed Central

    Shi, Jingsheng; Chen, Jie; Wu, Jianguo; Chen, Feiyan; Huang, Gangyong; Wang, Zhan; Zhao, Guanglei; Wei, Yibing; Wang, Siqun

    2014-01-01

    Background The aim of this study was to contrast the collapse values of the postoperative weight-bearing areas of different tantalum rod implant positions, fibula implantation, and core decompression model and to investigate the advantages and disadvantages of tantalum rod implantation in different ranges of osteonecrosis in comparison with other methods. Material/Methods The 3D finite element method was used to establish the 3D finite element model of normal upper femur, 3D finite element model after tantalum rod implantation into different positions of the upper femur in different osteonecrosis ranges, and other 3D finite element models for simulating fibula implant and core decompression. Results The collapse values in the weight-bearing area of the femoral head of the tantalum rod implant model inside the osteonecrosis area, implant model in the middle of the osteonecrosis area, fibula implant model, and shortening implant model exhibited no statistically significant differences (p>0.05) when the osteonecrosis range was small (60°). The stress values on the artificial bone surface for the tantalum rod implant model inside the osteonecrosis area and the shortening implant model exhibited statistical significance (p<0.01). Conclusions Tantalum rod implantation into the osteonecrosis area can reduce the collapse values in the weight-bearing area when osteonecrosis of the femoral head (ONFH) was in a certain range, thereby obtaining better clinical effects. When ONFH was in a large range (120°), the tantalum rod implantation inside the osteonecrosis area, shortening implant or fibula implant can reduce the collapse values of the femoral head, as assessed by other methods. PMID:25479830

  3. Emergence of distributed coordination in the Kolkata Paise Restaurant problem with finite information

    NASA Astrophysics Data System (ADS)

    Ghosh, Diptesh; Chakrabarti, Anindya S.

    2017-10-01

    In this paper, we study a large-scale distributed coordination problem and propose efficient adaptive strategies to solve it. The basic problem is to allocate a finite number of resources to individual agents in the absence of a central planner such that there is as little congestion as possible and the fraction of unutilized resources is reduced as far as possible. In the absence of a central planner and global information, agents can employ adaptive strategies that use only finite knowledge about their competitors. In this paper, we show that a combination of finite information sets and reinforcement learning can increase the utilization fraction of resources substantially.
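    The zero-information baseline against which such adaptive strategies are measured is easy to reproduce: if each of n agents picks one of n restaurants uniformly at random, the expected utilized fraction is 1 - (1 - 1/n)^n, which tends to 1 - 1/e ≈ 0.632 for large n. A sketch (parameter values illustrative; the paper's learning strategies, which push utilization higher, are not implemented here):

```python
import numpy as np

def kpr_utilization(n=1000, days=50, seed=0):
    """Kolkata Paise Restaurant toy run with zero-information agents:
    each evening every one of n agents picks a restaurant uniformly at
    random, each restaurant serves one of its arrivals, and we record
    the fraction of restaurants that served anyone."""
    rng = np.random.default_rng(seed)
    fracs = []
    for _ in range(days):
        choices = rng.integers(0, n, size=n)
        fracs.append(np.unique(choices).size / n)  # restaurants utilized
    return float(np.mean(fracs))
```

    Any strategy beating roughly 0.63 utilization is therefore exploiting information about past congestion, which is the effect the paper quantifies.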

  4. Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies

    PubMed Central

    Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung

    2017-01-01

    A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear that, out of a large number of options, which methods are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses and simulations guided by experimental data as a means to investigate different analysis methods’ performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies show good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. PMID:27993672
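    For concreteness, an FIR basis set simply allocates one regressor per post-stimulus scan, so the GLM estimates the response amplitude bin by bin without assuming a hemodynamic shape. A minimal sketch of such a design matrix (function name and arguments are ours, and onsets are taken in scan units for simplicity):

```python
import numpy as np

def fir_design(onsets, order, n_scans):
    """Build an FIR design matrix: column k is an indicator placed k
    scans after each stimulus onset, so fitting the GLM recovers one
    response value per post-stimulus time bin (the basis `order`
    controls how many bins, i.e., how much of the response is modeled)."""
    X = np.zeros((n_scans, order))
    for onset in onsets:
        for k in range(order):
            t = onset + k
            if t < n_scans:
                X[t, k] = 1.0
    return X
```

    The trade-off the abstract reports (7th to 9th order FIR being near-optimal) is between characterization flexibility, which grows with the order, and detection power, which falls as the added columns consume degrees of freedom.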

  5. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    NASA Astrophysics Data System (ADS)

    Elbanna, A. E.

    2015-12-01

    The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in models, and the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the seismogenic zone structure. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the sub-meter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate-dependent plasticity with a physically motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  6. Modeling of high pressure arc-discharge with a fully-implicit Navier-Stokes stabilized finite element flow solver

    NASA Astrophysics Data System (ADS)

    Sahai, A.; Mansour, N. N.; Lopez, B.; Panesi, M.

    2017-05-01

    This work addresses the modeling of a high pressure electric discharge in an arc-heated wind tunnel. The combined numerical solution of Poisson's equation, the radiative transfer equations, and the set of Favre-averaged thermochemical nonequilibrium Navier-Stokes equations allows for the determination of the electric, radiation, and flow fields, accounting for their mutual interaction. Semi-classical statistical thermodynamics is used to determine the plasma thermodynamic properties, while transport properties are obtained from kinetic principles with the Chapman-Enskog method. A multi-temperature formulation is used to account for thermal non-equilibrium. Finally, the turbulence closure of the flow equations is obtained by means of the Spalart-Allmaras model, which requires the solution of an additional scalar transport equation. A streamline-upwind Petrov-Galerkin stabilized finite element formulation is employed to solve the Navier-Stokes equations. The electric field equation is solved using the standard Galerkin formulation. A stable formulation for the radiative transfer equations is obtained using the least-squares finite element method. The developed simulation framework has been applied to investigate turbulent plasma flows in the 20 MW Aerodynamic Heating Facility at NASA Ames Research Center. The current model is able to predict the process of energy addition and redistribution due to Joule heating and thermal radiation, resulting in a hot central core surrounded by colder flow. The use of an unsteady three-dimensional treatment also allows the asymmetry due to a dynamic electric arc attachment point in the cathode chamber to be captured accurately. The current work paves the way for detailed estimation of the operating characteristics of arc-heated wind tunnels, which are critical in testing thermal protection systems.

  7. Ancestral Relationships Using Metafounders: Finite Ancestral Populations and Across Population Relationships.

    PubMed

    Legarra, Andres; Christensen, Ole F; Vitezica, Zulma G; Aguilar, Ignacio; Misztal, Ignacy

    2015-06-01

    Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, the current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the conciliation of the two approaches and, in particular, combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view, in which each population is considered an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a "metafounder," a pseudo-individual included as a founder of the pedigree, similar to an "unknown parent group." Metafounders have self- and across-relationships according to a set of parameters which measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance on crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. Use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. Copyright © 2015 by the Genetics Society of America.
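
    The metafounder idea can be sketched with the classical tabular method for the numerator relationship matrix, extended with one pseudo-individual whose self-relationship is a parameter gamma standing in for the ancestral gamete pool (a minimal single-metafounder sketch under assumed conventions, not the paper's general algorithm):

```python
def relationship_matrix(pedigree, gamma):
    """Tabular method for the numerator relationship matrix, with a
    metafounder (id 0) of self-relationship gamma acting as both
    parents of every founder.  pedigree: {id: (sire, dam)}; ids are
    1..n in birth order; parent id 0 denotes the metafounder."""
    n = max(pedigree)
    # A is (n+1) x (n+1); index 0 is the metafounder.
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    A[0][0] = gamma
    for i in range(1, n + 1):
        s, d = pedigree[i]
        for j in range(0, i):
            # Relationship with older individuals: average over parents.
            A[i][j] = A[j][i] = 0.5 * (A[j][s] + A[j][d])
        # Diagonal: 1 plus half the parents' relationship (inbreeding).
        A[i][i] = 1.0 + 0.5 * A[s][d]
    return A

# Two founders (both descending from the metafounder) and one offspring.
ped = {1: (0, 0), 2: (0, 0), 3: (1, 2)}
A = relationship_matrix(ped, gamma=0.2)
```

    Setting gamma = 0 recovers the classical matrix with unrelated, non-inbred founders, while gamma > 0 makes the founders related and slightly inbred, which is the across-individual relationship structure the metafounder parameterization introduces.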

  8. The intermediates take it all: asymptotics of higher criticism statistics and a powerful alternative based on equal local levels.

    PubMed

    Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut

    2015-01-01

    The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-seventies. Originally, HC statistics were used in connection with goodness of fit (GOF) tests, but they recently gained attention in the context of testing the global null hypothesis in high-dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties of this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence HC is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the discrepancy between the asymptotic and finite-sample behavior of the HC statistic prompts us to propose a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
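
    For concreteness, the HC statistic in its common Donoho-Jin form can be computed from sorted p-values as below (a sketch with illustrative data and an illustrative truncation fraction, not the paper's exact test):

```python
import math, random

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin form of the HC statistic: a normalized comparison of
    the empirical p-value distribution with the uniform distribution,
    maximized over the intermediate range i <= alpha0 * n."""
    n = len(pvalues)
    p = sorted(pvalues)
    hc = -math.inf
    for i in range(1, int(alpha0 * n) + 1):
        pi = p[i - 1]
        if 0.0 < pi < 1.0:
            z = math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1.0 - pi))
            hc = max(hc, z)
    return hc

rng = random.Random(1)
# Global null: uniform p-values, so HC stays small.
null_pvals = [rng.random() for _ in range(10000)]
# Sparse mixture: a small fraction of very small p-values inflates HC.
signal_pvals = null_pvals[:]
for k in range(100):
    signal_pvals[k] = rng.random() * 1e-4
hc_null = higher_criticism(null_pvals)
hc_sig = higher_criticism(signal_pvals)
```

    The maximization over an intermediate range of order statistics is exactly the "moderate tails" sensitivity mentioned above: the tiny p-values of a sparse signal pull a mid-rank term far above its null level.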

  9. Statistical Mechanics of Coherent Ising Machine — The Case of Ferromagnetic and Finite-Loading Hopfield Models —

    NASA Astrophysics Data System (ADS)

    Aonishi, Toru; Mimura, Kazushi; Utsunomiya, Shoko; Okada, Masato; Yamamoto, Yoshihisa

    2017-10-01

    The coherent Ising machine (CIM) has attracted attention as one of the most effective Ising computing architectures for solving large-scale optimization problems because of its scalability and high-speed computational ability. However, it is difficult to implement Ising computation in the CIM because the theories and techniques of classical thermodynamic-equilibrium Ising spin systems cannot be directly applied to the CIM; they must first be adapted to it. Here we focus on a ferromagnetic model and a finite-loading Hopfield model, which are canonical models sharing a common mathematical structure with almost all other Ising models. We derive macroscopic equations that capture nonequilibrium phase transitions in these models. The statistical mechanical methods developed here constitute a basis for constructing evaluation methods for other Ising computation models.

  10. Statistical mechanics of self-driven Carnot cycles.

    PubMed

    Smith, E

    1999-10-01

    The spontaneous generation and finite-amplitude saturation of sound, in a traveling-wave thermoacoustic engine, are derived as properties of a second-order phase transition. It has previously been argued that this dynamical phase transition, called "onset," has an equivalent equilibrium representation, but the saturation mechanism and scaling were not computed. In this work, the sound modes implementing the engine cycle are coarse-grained and statistically averaged, in a partition function derived from microscopic dynamics on criteria of scale invariance. Self-amplification performed by the engine cycle is introduced through higher-order modal interactions. Stationary points and fluctuations of the resulting phenomenological Lagrangian are analyzed and related to background dynamical currents. The scaling of the stable sound amplitude near the critical point is derived and shown to arise universally from the interaction of finite-temperature disorder, with the order induced by self-amplification.

  11. Stability and Existence Results for Quasimonotone Quasivariational Inequalities in Finite Dimensional Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellani, Marco; Giuli, Massimiliano, E-mail: massimiliano.giuli@univaq.it

    2016-02-15

    We study pseudomonotone and quasimonotone quasivariational inequalities in a finite dimensional space. In particular, we focus our attention on the closedness of some solution maps associated with a parametric quasivariational inequality. From this study we derive two results on the existence of solutions of the quasivariational inequality. On the one hand, assuming pseudomonotonicity of the operator, we obtain the nonemptiness of the set of classical solutions. On the other hand, we show that quasimonotonicity of the operator implies the nonemptiness of the set of nonzero solutions. An application to a traffic network is also considered.

  12. Debris Object Orbit Initialization Using the Probabilistic Admissible Region with Asynchronous Heterogeneous Observations

    NASA Astrophysics Data System (ADS)

    Zaidi, W. H.; Faber, W. R.; Hussein, I. I.; Mercurio, M.; Roscoe, C. W. T.; Wilkins, M. P.

    One of the most challenging problems in treating space debris is the characterization of the orbit associated with a newly detected and uncorrelated measurement. The admissible region is defined as the set of physically acceptable orbits (i.e., orbits with negative energies) consistent with one or more measurements of a Resident Space Object (RSO). Given additional constraints on the orbital semi-major axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a Probabilistic Admissible Region (PAR), a concept introduced in 2014 as a Monte Carlo uncertainty representation approach using topocentric spherical coordinates. Ultimately, a PAR can be used to initialize a sequential Bayesian estimator and to prioritize orbital propagations in a multiple hypothesis tracking framework such as Finite Set Statistics (FISST). To date, measurements used to build the PAR have been collected concurrently and by the same sensor. In this paper, we allow measurements to have different time stamps. We also allow for non-collocated sensor collections; optical data can be collected by one sensor at a given time and radar data by another sensor located elsewhere. We then revisit first principles to link asynchronous optical and radar measurements using the conservation of both specific orbital energy and specific orbital angular momentum. The result of the proposed algorithm is an implicit-Bayesian, non-Gaussian representation of orbital state uncertainty.
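
    The defining constraint of the admissible region, negative specific orbital energy, reduces to a one-line check once a candidate position/velocity pair has been hypothesized from a measurement. A minimal sketch (illustrative states, with μ Earth's gravitational parameter in km³/s²):

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2

def specific_energy(r_vec, v_vec, mu=MU_EARTH):
    """Specific orbital energy eps = v^2/2 - mu/r (vis-viva form)."""
    r = math.sqrt(sum(x * x for x in r_vec))
    v2 = sum(x * x for x in v_vec)
    return v2 / 2.0 - mu / r

def is_admissible(r_vec, v_vec, mu=MU_EARTH):
    """Physically acceptable (bound) orbit: negative specific energy."""
    return specific_energy(r_vec, v_vec, mu) < 0.0

# A rough GEO state (circular, r ~ 42164 km, v ~ 3.07 km/s) is bound;
# the same position at slightly above escape speed is not.
r_geo = (42164.0, 0.0, 0.0)
v_circ = (0.0, math.sqrt(MU_EARTH / 42164.0), 0.0)
v_esc = (0.0, 1.01 * math.sqrt(2.0 * MU_EARTH / 42164.0), 0.0)
```

    Sweeping this check over hypothesized range/range-rate values attached to an angles-only track is what carves out the admissible region; the specific angular momentum vector r × v provides the second conserved quantity used above to link asynchronous measurements.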

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, L.; Zaslawsky, M.

    Two approaches for calculating soil-structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for practical calculations.

  14. Solid/FEM integration at SNLA

    NASA Technical Reports Server (NTRS)

    Chavez, Patrick F.

    1987-01-01

    The effort at Sandia National Labs. on the methodologies and techniques used to generate strictly hexahedral finite element meshes from a solid model is described. The functionality of the modeler is used to decompose the solid into a set of nonintersecting, meshable finite element primitives. The description of the decomposition is exported, via a Boundary Representation format, to the meshing program, which uses the information for complete finite element model specification. Particular features of the program are discussed in some detail, along with future plans for development, which include automating the decomposition using artificial intelligence techniques.

  15. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions for the eigenvalues are obtained with the use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  16. Symbolic Dynamics, Flower Automata and Infinite Traces

    NASA Astrophysics Data System (ADS)

    Foryś, Wit; Oprocha, Piotr; Bakalarski, Slawomir

    Considering a finite alphabet as a set of allowed instructions, we can identify finite words with basic actions or programs. Infinite paths on a flower automaton can then represent the order in which these programs are executed, and the flower shift associated with the automaton represents the list of instructions to be executed at some mid-point of the computation.

  17. Nonlinear Control Systems

    DTIC Science & Technology

    2007-03-01

    "Finite-dimensional regulators for a class of infinite dimensional systems," Systems and Control Letters, 3 (1983), 7-12. [11] B... "semiglobal stabilizability by encoded state feedback," to appear in Systems and Control Letters. 29. C. De Persis, A. Isidori, "Global stabilization of... nonequilibrium setting, for both finite and infinite dimensional control systems. Our objectives for distributed parameter systems included

  18. Quantum weak turbulence with applications to semiconductor lasers

    NASA Astrophysics Data System (ADS)

    Lvov, Yuri Victorovich

    Based on a model Hamiltonian appropriate for the description of fermionic systems such as semiconductor lasers, we describe a natural asymptotic closure of the BBGKY hierarchy in complete analogy with that derived for classical weak turbulence. The main features of the interaction Hamiltonian are the inclusion of full Fermi statistics containing Pauli blocking and a simple, phenomenological, uniformly weak two-particle interaction potential equivalent to the static screening approximation. The resulting asymptotic closure and quantum kinetic Boltzmann equation are derived in a self-consistent manner without resorting to a priori statistical hypotheses or cumulant discard assumptions. We find a new class of solutions to the quantum kinetic equation which are analogous to the Kolmogorov spectra of hydrodynamics and classical weak turbulence. They involve finite fluxes of particles and energy across momentum space and are particularly relevant for describing the behavior of systems containing sources and sinks. We explore these solutions by using a differential approximation to the collision integral. We make a prima facie case that these finite-flux solutions can be important in the context of semiconductor lasers, and show that semiconductor laser output efficiency can be improved by exciting them. Numerical simulations of the semiconductor Maxwell-Bloch equations support this claim.

  19. Cascading process in the flute-mode turbulence of a plasma

    NASA Technical Reports Server (NTRS)

    Gonzalez, R.; Gomez, D.; Fontan, C. F.; Schifino, A. C. S.; Montagne, R.

    1993-01-01

    The cascades of ideal invariants in flute-mode turbulence are analyzed by considering statistics based on an elementary three-mode coupling process. The statistical dynamics of the system is investigated on the basis of the existence of the physically most important (PMI) triad. When finite ion Larmor radius effects are considered, the PMI triad describes the formation of zonal flows.

  20. Critical weight statistics of the random energy model and of the directed polymer on the Cayley tree

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile; Garel, Thomas

    2007-05-01

    We consider the critical point of two mean-field disordered models: (i) the random energy model (REM), introduced by Derrida as a mean-field spin-glass model of N spins, and (ii) the directed polymer of length N on a Cayley tree (DPCT) with random bond energies. Both models are known to exhibit a freezing transition between a high-temperature phase where the entropy is extensive and a low-temperature phase of finite entropy, where the weight statistics coincide with those of Lévy sums with index μ = T/T_c < 1. In this paper, we study the weight statistics at criticality via the entropy S = -Σ_i w_i ln w_i and the generalized moments Y_k = Σ_i w_i^k, where the w_i are the Boltzmann weights of the 2^N configurations. In the REM, we find that the critical weight statistics are governed by the finite-size exponent ν = 2: the entropy scales as S_N(T_c) ~ N^(1/2), the typical values exp(⟨ln Y_k⟩) decay as N^(-k/2), and the disorder-averaged values ⟨Y_k⟩ are governed by rare events and decay as N^(-1/2) for any k > 1. For the DPCT, we find that the entropy scales similarly as S_N(T_c) ~ N^(1/2), whereas another exponent ν' = 1 governs the Y_k statistics: the typical values exp(⟨ln Y_k⟩) decay as N^(-k), and the disorder-averaged values ⟨Y_k⟩ decay as N^(-1) for any k > 1. As a consequence, the asymptotic probability distribution π_{N=∞}(q) of the overlap q, in addition to the delta function δ(q) which bears the whole normalization, contains an isolated point at q = 1, as a memory of the delta peak (1 - T/T_c)δ(q - 1) of the low-temperature phase T < T_c.
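
    The weight statistics above are easy to probe numerically. The sketch below samples the 2^N REM energies in the standard normalization P(E) ∝ exp(-E²/N), i.e. iid Gaussians of variance N/2, for which T_c = 1/(2√ln 2), and returns the entropy S and moment Y_2 of the Boltzmann weights; increasing N lets one check the predicted N^(1/2) and N^(-1) scalings at criticality:

```python
import math, random

def rem_weight_statistics(N, T, seed=0):
    """Sample the 2^N REM energies (iid Gaussian, variance N/2) and
    return the entropy S = -sum w_i ln w_i and moment Y_2 = sum w_i^2
    of the Boltzmann weights w_i at temperature T."""
    rng = random.Random(seed)
    beta = 1.0 / T
    energies = [rng.gauss(0.0, math.sqrt(N / 2.0)) for _ in range(2 ** N)]
    e_min = min(energies)
    # Subtract the ground state before exponentiating, for stability.
    raw = [math.exp(-beta * (e - e_min)) for e in energies]
    z = sum(raw)
    w = [x / z for x in raw]
    entropy = -sum(x * math.log(x) for x in w if x > 0.0)
    y2 = sum(x * x for x in w)
    return entropy, y2

T_c = 1.0 / (2.0 * math.sqrt(math.log(2.0)))
S, Y2 = rem_weight_statistics(N=14, T=T_c)
```

    A single disorder sample is of course noisy; averaging over many seeds and several values of N is needed to see the typical versus disorder-averaged distinction the abstract emphasizes.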

  1. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  2. Observer-based robust finite time H∞ sliding mode control for Markovian switching systems with mode-dependent time-varying delay and incomplete transition rate.

    PubMed

    Gao, Lijun; Jiang, Xiaoxiao; Wang, Dandan

    2016-03-01

    This paper investigates the problem of robust finite-time H∞ sliding mode control for a class of Markovian switching systems. The system is subject to mode-dependent time-varying delay, partly unknown transition rates, and unmeasurable state. The main difficulty is that a sliding mode surface cannot be designed directly from the unknown transition rates and the unmeasurable state. To overcome this obstacle, the set of modes is first divided into two subsets, standing for the known and unknown transition rate subsets, based on which a state observer is established. A robust finite-time sliding mode controller is also designed to cope with the effect of the partially unknown transition rates. It is shown that reachability, finite-time stability, finite-time boundedness, and finite-time H∞ state feedback stabilization of the sliding mode dynamics can be ensured despite the unknown transition rates. Finally, simulation results verify the effectiveness of the proposed robust finite-time control scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
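
    The basic sliding mode mechanism underlying such designs (a scalar toy system, not the paper's observer-based Markovian controller) can be illustrated in a few lines: with a bounded matched disturbance, a switching gain k exceeding the disturbance bound drives the state to the sliding surface s = x = 0 in finite time:

```python
import math

def simulate_smc(x0=2.0, a=1.0, k=3.0, dt=1e-3, steps=5000):
    """Euler simulation of dx/dt = a*x + u + d(t) with |d| <= 1 and the
    sliding mode law u = -a*x - k*sign(x).  On the reaching phase
    dx/dt = -k*sign(x) + d, so |x| decreases at rate at least k - 1
    until the state chatters in a small band around x = 0."""
    x = x0
    for i in range(steps):
        d = math.sin(10.0 * i * dt)  # bounded matched disturbance
        sign_x = 1.0 if x > 0.0 else -1.0 if x < 0.0 else 0.0
        u = -a * x - k * sign_x
        x += dt * (a * x + u + d)
    return x

x_final = simulate_smc()
```

    The residual chattering band scales with the time step and the switching gain; in practice the discontinuous sign is often smoothed, which trades finite-time exactness for reduced chatter.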

  3. Determining Definitions for Comparing Cardinalities

    ERIC Educational Resources Information Center

    Shipman, B. A.

    2012-01-01

    Through a series of six guided classroom discoveries, students create, via targeted questions, a definition for deciding when two sets have the same cardinality. The program begins by developing basic facts about cardinalities of finite sets. Extending two of these facts to infinite sets yields two statements on comparing infinite cardinalities…

  4. Loop series for discrete statistical models on graphs

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael; Chernyak, Vladimir Y.

    2006-06-01

    In this paper we present the derivation details, logic, and motivation for the three loop calculus introduced in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Generating functions for each of the three interrelated discrete statistical models are expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls belief-propagation (BP) contribution; the other terms are labelled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complementary approach, briefly explained in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Local gauge symmetry transformations that clarify an important invariant feature of the BP solution are revealed in both approaches. The individual terms change under the gauge transformation while the partition function remains invariant. The requirement for all individual terms to be nonzero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.

  5. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    NASA Astrophysics Data System (ADS)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state, since the measured mean lifetime of turbulent puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed-percolation-type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why, then, is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual-level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power-law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law and, using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  6. Finite element analysis of thrust angle contact ball slewing bearing

    NASA Astrophysics Data System (ADS)

    Deng, Biao; Guo, Yuan; Zhang, An; Tang, Shengjin

    2017-12-01

    Since large, heavy slewing bearings no longer follow the rigid-ring hypothesis under load, a solid finite element model of a thrust angular contact ball bearing was established using the finite element analysis software ANSYS. The boundary conditions of the model were set according to the actual working conditions of the slewing bearing, the internal stress state of the bearing was obtained by solution, and the calculated results were compared with numerical results based on the rigid-ring assumption. The results show that more balls are loaded in the finite element solution, and the maximum contact stresses between ball and raceway are somewhat reduced. This is because the finite element method treats the ring as an elastic body: the ring undergoes structural deformation in the radial plane when a heavily loaded slewing bearing is subjected to external loads. The finite element results are thus more consistent with the actual behavior of slewing bearings in engineering practice.

  7. Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Luo, Li-Shi

    2007-01-01

    In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite-dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know up to exactly what order the moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher-dimensional models are suggested.
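
    The moment dependencies are easy to exhibit numerically for the standard D2Q9 velocity set (a small rank check, not the paper's group-theoretic machinery): on the grid {-1, 0, 1}², the monomial c_x³ coincides with c_x, and no set of monomials can yield more than nine independent moments:

```python
import numpy as np
from itertools import product

# D2Q9 discrete velocity set: the tensor grid {-1, 0, 1}^2.
velocities = [(cx, cy) for cx in (-1, 0, 1) for cy in (-1, 0, 1)]

def moment_vector(m, n):
    """Values of the monomial c_x^m c_y^n on each discrete velocity;
    the corresponding velocity moment of a population f is the dot
    product of this vector with f."""
    return np.array([float(cx ** m * cy ** n) for cx, cy in velocities])

# All monomials up to degree 2 in each component: 9 vectors.
low = np.array([moment_vector(m, n) for m, n in product(range(3), repeat=2)])
# Adding higher powers cannot raise the rank beyond the 9 velocities.
high = np.array([moment_vector(m, n) for m, n in product(range(5), repeat=2)])

rank_low = np.linalg.matrix_rank(low)
rank_high = np.linalg.matrix_rank(high)
```

    The nine tensor-product monomials are independent because D2Q9 is the full tensor grid; which moments collapse first for non-tensor-product sets is exactly the question the group-theoretic decomposition answers.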

  8. Vertical discretization with finite elements for a global hydrostatic model on the cubed sphere

    NASA Astrophysics Data System (ADS)

    Yi, Tae-Hyeong; Park, Ja-Rin

    2017-06-01

    A formulation of Galerkin finite elements with basis-spline functions on a hybrid sigma-pressure coordinate is presented to discretize the vertical terms of the global Eulerian hydrostatic equations employed in a numerical weather prediction system that is horizontally discretized with high-order spectral elements on a cubed-sphere grid. This replaces the conventional vertical discretization by central finite differences, which is first-order accurate on non-uniform grids and causes numerical instability in advection-dominated flows. The model therefore remains within the framework of Galerkin finite elements for both the horizontal and vertical spatial terms. The basis-spline functions, obtained from the de Boor algorithm, are employed to derive both the vertical derivative and integral operators, since Eulerian advection terms are involved. These operators are used to discretize the vertical terms of the prognostic and diagnostic equations. To verify the vertical discretization schemes and compare their performance, various two- and three-dimensional idealized cases and a hindcast case with full physics are performed in terms of accuracy and stability. It is shown that the vertical finite element scheme with the cubic basis-spline function is more accurate and stable than the vertical finite difference scheme, as indicated by faster residual convergence, smaller statistical errors, and a reduction in the computational mode. This leads to the general conclusion that the overall performance of a global hydrostatic model can be significantly improved with the vertical finite element method.
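
    The basis-spline functions at the heart of this discretization can be generated with the Cox-de Boor recursion; the sketch below (an illustrative clamped knot vector, not the model's actual vertical levels) builds cubic basis functions and checks the partition-of-unity property that the Galerkin operators rely on:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    order k (degree k-1) on the given knot vector, evaluated at t."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    left = knots[i + k - 1] - knots[i]
    if left > 0.0:
        out += (t - knots[i]) / left * bspline_basis(i, k - 1, t, knots)
    right = knots[i + k] - knots[i + 1]
    if right > 0.0:
        out += (knots[i + k] - t) / right * bspline_basis(i + 1, k - 1, t, knots)
    return out

# Clamped (open) knot vector for cubic (order 4) splines on [0, 1].
order = 4
interior = [0.2, 0.4, 0.6, 0.8]
knots = [0.0] * order + interior + [1.0] * order
n_basis = len(knots) - order  # 8 basis functions

# Partition of unity: the basis functions sum to 1 inside the domain.
s = sum(bspline_basis(i, order, 0.37, knots) for i in range(n_basis))
```

    In a vertical finite element scheme, the derivative and integral operators are then assembled from these basis functions (and their derivatives) sampled at the model levels; a non-uniform knot spacing handles stretched vertical grids without losing accuracy.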

  9. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such stochastic input models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problems of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. 
We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
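
    The nonlinear dimension reduction step described above is in the spirit of Isomap-style manifold learning: approximate geodesic distances on the sampled manifold by shortest paths in a nearest-neighbor graph, then embed isometrically in a low-dimensional set A. The following toy sketch (my illustration, not the authors' implementation) recovers the 1-D arc-length parametrization of a curve in R^2:

```python
import math

def geodesic_distances(points, k=2):
    """Approximate geodesic distances on the sampled manifold: keep only
    k-nearest-neighbor edges, then run Floyd-Warshall shortest paths."""
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    INF = float("inf")
    g = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in sorted(range(n), key=lambda j: d[i][j])[1:k + 1]:
            g[i][j] = g[j][i] = d[i][j]
        g[i][i] = 0.0
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if g[i][m] + g[m][j] < g[i][j]:
                    g[i][j] = g[i][m] + g[m][j]
    return g

# Toy "manifold" M: points sampled along a half-circle arc embedded in R^2.
n = 40
pts = [(math.cos(t), math.sin(t)) for t in
       (math.pi * i / (n - 1) for i in range(n))]
g = geodesic_distances(pts)

# 1-D embedding A: coordinate of each point = graph distance from the first.
embedding = [g[0][j] for j in range(n)]

# The geodesic distance between the arc endpoints approaches the arc length
# pi, even though their straight-line (chordal) distance is only 2.
print(round(embedding[-1], 2))  # close to pi ~ 3.14
```

The same idea, with a multidimensional-scaling step instead of a single source point, underlies the isometric map F:M→A of the paper.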

  10. Nilpotent symmetries in supergroup field cosmology

    NASA Astrophysics Data System (ADS)

    Upadhyay, Sudhaker

    2015-06-01

    In this paper, we study the gauge invariance of the third quantized supergroup field cosmology, which is a model for the multiverse. Further, we propose both the infinitesimal (usual) and the finite superfield-dependent BRST symmetry transformations which leave the effective theory invariant. The effects of finite superfield-dependent BRST transformations on the path integral (the so-called void functional in the case of third quantization) are worked out. Within the finite superfield-dependent BRST formulation, the finite superfield-dependent BRST transformations with a specific parameter switch the void functional from one gauge to another. We establish this result for the most general gauge with the help of explicit calculations, and it holds for all possible sets of gauge choices at both the classical and the quantum levels.

  11. Sensor Compromise Detection in Multiple-Target Tracking Systems

    PubMed Central

    Doucette, Emily A.; Curtis, Jess W.

    2018-01-01

    Tracking multiple targets using a single estimator is a problem that is commonly approached within a trusted framework. There are many weaknesses that an adversary can exploit if it gains control over the sensors. Because the number of targets that the estimator has to track is not known in advance, an adversary could cause a loss of information or a degradation in the tracking precision. Other concerns include the introduction of false targets, which would result in a waste of computational and material resources, depending on the application. In this work, we study the problem of detecting compromised or faulty sensors in a multiple-target tracker, starting with the single-sensor case and then considering the multiple-sensor scenario. We propose an algorithm to detect a variety of attacks in the multiple-sensor case, via the application of finite set statistics (FISST), one-class classifiers and hypothesis testing using nonparametric techniques. PMID:29466314
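
    The detector above combines FISST-based tracking with one-class classification and nonparametric hypothesis tests; the fragment below illustrates only the last ingredient. It is a minimal sketch (the residual values are hypothetical, not from the paper) of a Mann-Whitney-style rank-sum statistic comparing a suspect sensor's residuals against a trusted reference sample:

```python
def rank_sum_u(reference, suspect):
    """Mann-Whitney U statistic: counts how often a suspect-sensor residual
    exceeds a reference residual (ties count one half)."""
    u = 0.0
    for s in suspect:
        for r in reference:
            if s > r:
                u += 1.0
            elif s == r:
                u += 0.5
    return u

# Hypothetical residual magnitudes from a trusted sensor and two others.
reference = [0.8, 1.1, 0.9, 1.0, 1.2, 0.7, 1.05, 0.95]
healthy   = [0.85, 1.15, 0.92, 1.02]   # similar spread
spoofed   = [2.4, 2.9, 2.6, 3.1]       # inflated residuals

n, m = len(reference), len(healthy)
expected = n * m / 2                   # U under the no-compromise hypothesis

print(rank_sum_u(reference, healthy))  # 17.0, near `expected` = 16
print(rank_sum_u(reference, spoofed))  # 32.0: every comparison won -> flag
```

A U statistic far from its null expectation (relative to the known null distribution) triggers the compromise alarm; the paper's tests operate on multi-target set statistics rather than scalar residuals.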

  12. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into the local linear regression. Under the assumption that the error process is an auto-regressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD penalized profile least squares method to determine the order of the auto-regressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.

  13. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
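
    For concreteness, the SCAD penalty cited above is the standard three-piece function of Fan and Li (2001), with tuning parameters λ and a (a = 3.7 is their recommended default); the sketch below is the textbook definition, not MIPGO itself:

```python
def scad(t, lam, a=3.7):
    """Folded concave SCAD penalty, evaluated pointwise: linear (lasso-like)
    near zero, quadratic blending in the middle, constant for large |t|."""
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2

lam = 1.0
print(scad(0.5, lam))   # 0.5   (linear, lasso-like, near zero)
print(scad(1.0, lam))   # 1.0   (continuous at t = lam)
print(scad(10.0, lam))  # 2.35  (constant: large signals are not shrunk)
```

The flat tail is what removes the bias of the lasso on large coefficients, and also what makes the objective nonconvex, hence the need for a global scheme such as MIPGO.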

  14. Interdependence of specialization and biodiversity in Phanerozoic marine invertebrates.

    PubMed

    Nürnberg, Sabine; Aberhan, Martin

    2015-03-17

    Studies of the dynamics of biodiversity often suggest that diversity has upper limits, but the complex interplay between ecological and evolutionary processes and the relative role of biotic and abiotic factors that set upper limits to diversity are poorly understood. Here we statistically assess the relationship between global biodiversity and the degree of habitat specialization of benthic marine invertebrates over the Phanerozoic eon. We show that variation in habitat specialization correlates positively with changes in global diversity, that is, times of high diversity coincide with more specialized faunas. We identify the diversity dynamics of specialists but not generalists, and origination rates but not extinction rates, as the main drivers of this ecological interdependence. Abiotic factors fail to show any significant relationship with specialization. Our findings suggest that the overall level of specialization and its fluctuations over evolutionary timescales are controlled by diversity-dependent processes--driven by interactions between organisms competing for finite resources.

  15. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
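
    A minimal sketch of the decay-rate idea behind IRDM (not the paper's automated procedure): fit a line to the impulse-response envelope in dB and convert the decay rate DR (dB/s) to a loss factor via the standard SEA relation η = DR / (27.3 f):

```python
import math

def loss_factor_irdm(times, env_db, freq):
    """Least squares line fit to the decay envelope (in dB vs. seconds);
    the negated slope is the decay rate DR, and eta = DR / (27.3 * f)."""
    n = len(times)
    tb = sum(times) / n
    eb = sum(env_db) / n
    slope = (sum((t - tb) * (e - eb) for t, e in zip(times, env_db))
             / sum((t - tb) ** 2 for t in times))
    return -slope / (27.3 * freq)

# Synthetic single-mode envelope exp(-pi*f*eta*t), eta = 0.02, f = 500 Hz.
f, eta_true = 500.0, 0.02
times = [i / 1000 for i in range(1, 101)]
env_db = [20 * math.log10(math.exp(-math.pi * f * eta_true * t)) for t in times]

print(round(loss_factor_irdm(times, env_db, f), 4))  # ~0.02
```

On real stiffened-panel data the envelope is neither single-mode nor noise-free, which is exactly why the paper examines the sensitivity of such fits to connected-subsystem damping and to the measurement ensemble.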

  16. Humans Rapidly Learn Grammatical Structure in a New Musical Scale

    PubMed Central

    Loui, Psyche; Wessel, David L.; Hudson Kam, Carla L.

    2010-01-01

    Knowledge of musical rules and structures has been reliably demonstrated in humans of different ages, cultures, and levels of music training, and has been linked to our musical preferences. However, how humans acquire knowledge of and develop preferences for music remains unknown. The present study shows that humans rapidly develop knowledge and preferences when given limited exposure to a new musical system. Using a non-traditional, unfamiliar musical scale (Bohlen-Pierce scale), we created finite-state musical grammars from which we composed sets of melodies. After 25–30 min of passive exposure to the melodies, participants showed extensive learning as characterized by recognition, generalization, and sensitivity to the event frequencies in their given grammar, as well as increased preference for repeated melodies in the new musical system. Results provide evidence that a domain-general statistical learning mechanism may account for much of the human appreciation for music. PMID:20740059

  18. Thermal machines beyond the weak coupling regime

    NASA Astrophysics Data System (ADS)

    Gallego, R.; Riera, A.; Eisert, J.

    2014-12-01

    How much work can be extracted from a heat bath using a thermal machine? The study of this question has a very long history in statistical physics in the weak-coupling limit, when applied to macroscopic systems. However, the assumption that thermal heat baths remain uncorrelated with associated physical systems is less reasonable on the nano-scale and in the quantum setting. In this work, we establish a framework of work extraction in the presence of quantum correlations. We show in a mathematically rigorous and quantitative fashion that quantum correlations and entanglement emerge as limitations to work extraction compared to what would be allowed by the second law of thermodynamics. At the heart of the approach are operations that capture the naturally non-equilibrium dynamics encountered when putting physical systems into contact with each other. We discuss various limits that relate to known results and put our work into the context of approaches to finite-time quantum thermodynamics.

  19. Cooperation Dilemma in Finite Populations under Fluctuating Environments

    NASA Astrophysics Data System (ADS)

    Assaf, Michael; Mobilia, Mauro; Roberts, Elijah

    2013-12-01

    We present a novel approach allowing the study of rare events like fixation under fluctuating environments, modeled as extrinsic noise, in evolutionary processes characterized by the dominance of one species. Our treatment consists of mapping the system onto an auxiliary model, exhibiting metastable species coexistence, that can be analyzed semiclassically. This approach enables us to study the interplay between extrinsic and demographic noise on the statistics of interest. We illustrate our theory by considering the paradigmatic prisoner’s dilemma game, whose evolution is described by the probability that cooperators fixate the population and replace all defectors. We analytically and numerically demonstrate that extrinsic noise may drastically enhance the cooperation fixation probability and even change its functional dependence on the population size. These results, which generalize earlier works in population genetics, indicate that extrinsic noise may help sustain and promote a much higher level of cooperation than static settings.
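
    The authors' treatment is analytic (a semiclassical mapping); purely as a numerical illustration, and with all rates chosen hypothetically, a Moran-like birth-death simulation with a randomly switching environment can estimate a cooperator fixation probability:

```python
import random

def fixation_probability(n_pop, runs, delta=0.1, switch=0.05, seed=1):
    """Estimate the probability that cooperators take over a finite
    population when the selection strength against them flips sign at
    random times (a crude stand-in for a fluctuating environment)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(runs):
        coop = n_pop // 2
        env = 1                      # +1: cooperation disfavored, -1: favored
        while 0 < coop < n_pop:
            if rng.random() < switch:
                env = -env           # environmental (extrinsic) noise
            # Lazy birth-death step biased by the current environment.
            p_up = (n_pop - coop) / n_pop * (0.5 - env * delta)
            p_down = coop / n_pop * (0.5 + env * delta)
            r = rng.random()
            if r < p_up:
                coop += 1
            elif r < p_up + p_down:
                coop -= 1
        fixed += (coop == n_pop)
    return fixed / runs

p = fixation_probability(n_pop=20, runs=500)
print(0.0 <= p <= 1.0)  # True
```

Sweeping `switch` and the population size in such a simulation is one crude way to see the enhancement effect the paper derives analytically; the payoff structure here is simplified, not the full prisoner's dilemma.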

  20. Commentary on Steinley and Brusco (2011): Recommendations and Cautions

    ERIC Educational Resources Information Center

    McLachlan, Geoffrey J.

    2011-01-01

    I discuss the recommendations and cautions in Steinley and Brusco's (2011) article on the use of finite mixture models to cluster a data set. In their article, much use is made of comparison with the "K"-means procedure. As noted by researchers for over 30 years, the "K"-means procedure can be viewed as a special case of finite mixture modeling in which…

  1. Material nonlinear analysis via mixed-iterative finite element method

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1992-01-01

    The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.

  2. Second-order numerical solution of time-dependent, first-order hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Shah, Patricia L.; Hardin, Jay

    1995-01-01

    A finite difference scheme is developed to find an approximate solution of two similar hyperbolic equations, namely a first-order plane wave and spherical wave problem. Finite difference approximations are made for both the space and time derivatives. The result is a conditionally stable equation yielding an exact solution when the Courant number is set to one.
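
    The remark about exactness at unit Courant number can be seen with the simplest first-order upwind discretization of u_t + c u_x = 0 (a generic stand-in, not necessarily the authors' scheme): at Courant number one the update reduces to a pure shift of the grid data.

```python
def upwind_step(u, courant):
    """One explicit upwind update for u_t + c*u_x = 0 (c > 0) with
    periodic boundaries; courant = c*dt/dx."""
    n = len(u)
    return [u[j] - courant * (u[j] - u[j - 1]) for j in range(n)]

u0 = [0.0] * 10
u0[3] = 1.0                      # a single pulse

u = u0[:]
for _ in range(4):               # four steps at Courant number 1
    u = upwind_step(u, 1.0)

print(u.index(1.0))  # 7: the pulse moved exactly 4 cells, no smearing
```

At Courant number one, u_new[j] = u[j-1] exactly, so the numerical solution coincides with the exact translated solution; any smaller value introduces numerical diffusion.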

  3. Anatomically Realistic Three-Dimensional Meshes of the Pelvic Floor & Anal Canal for Finite Element Analysis

    PubMed Central

    Noakes, Kimberley F.; Bissett, Ian P.; Pullan, Andrew J.; Cheng, Leo K.

    2014-01-01

    Three anatomically realistic meshes, suitable for finite element analysis, of the pelvic floor and anal canal regions have been developed to provide a framework with which to examine, via finite element analysis, the mechanics of normal function within the pelvic floor. Two cadaver-based meshes were produced using the Visible Human Project (male and female) cryosection data sets, and a third mesh was produced based on MR image data from a live subject. The Visible Man (VM) mesh included 10 different pelvic structures while the Visible Woman and MRI meshes contained 14 and 13 structures respectively. Each image set was digitized and then finite element meshes were created using an iterative fitting procedure with smoothing constraints calculated from ‘L’-curves. These weights produced accurate geometric meshes of each pelvic structure with average Root Mean Square (RMS) fitting errors of less than 1.15 mm. The Visible Human cadaveric data provided high resolution images, however, the cadaveric meshes lacked the normal dynamic form of living tissue and suffered from artifacts related to postmortem changes. The lower resolution MRI mesh was able to accurately portray structure of the living subject and paves the way for dynamic, functional modeling. PMID:18317929

  4. Finite-difference modeling of the electroseismic logging in a fluid-saturated porous formation

    NASA Astrophysics Data System (ADS)

    Guan, Wei; Hu, Hengshan

    2008-05-01

    In a fluid-saturated porous medium, an electromagnetic (EM) wavefield induces an acoustic wavefield due to the electrokinetic effect. A potential geophysical application of this effect is electroseismic (ES) logging, in which the converted acoustic wavefield is received in a fluid-filled borehole to evaluate the parameters of the porous formation around the borehole. In this paper, a finite-difference scheme is proposed to model the ES logging responses to a vertical low-frequency electric dipole along the borehole axis. The EM field excited by the electric dipole is first calculated separately by finite differences, and is then treated as a distributed exciting source term in a set of extended Biot's equations for the converted acoustic wavefield in the formation. This set of equations is solved by a modified finite-difference time-domain (FDTD) algorithm that allows for the calculation of dynamic permeability, so that it is not restricted to low-frequency poroelastic wave problems. The perfectly matched layer (PML) technique without splitting the fields is applied to truncate the computational region. The simulated ES logging waveforms approximately agree with those obtained by the analytical method. The FDTD algorithm also applies to acoustic-logging simulation in porous formations.

  5. Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamini, Vittorino

    2010-02-15

    Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C^∞ functions can be expressed as C^∞ functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except for the two largest groups E₇ and E₈. In this paper the flat basic sets of invariant homogeneous polynomials of E₇ and E₈ and the corresponding P-matrices are determined explicitly. Using the results reported here, one can easily determine the P-matrices corresponding to any other integrity basis of E₇ or E₈. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E₇ and E₈ relative to a flat basis or to any other integrity basis. The results obtained here may be employed to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E₇ and E₈ or one of the Lie groups E₇ and E₈ in their adjoint representations.

  6. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    NASA Astrophysics Data System (ADS)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  7. Infinity Computer and Calculus

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.

    2007-09-01

    Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this survey talk, a new computational methodology (that is not related to nonstandard analysis) is described. It is based on the principle `The part is less than the whole' applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as particular cases of a unique framework. The new methodology allows us to introduce the Infinity Computer working with all these numbers (its simulator is presented during the lecture). The new computational paradigm both gives possibilities to execute computations of a new type and simplifies fields of mathematics where infinity and/or infinitesimals are encountered. Numerous examples of the usage of the introduced computational tools are given during the lecture.
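
    As a toy sketch of the idea (my construction for illustration, not Sergeyev's actual Infinity Computer), one can represent numbers as finite combinations of integer powers of an infinite unit ① and compare them by the leading term, so that ① - 1 < ①, in line with "the part is less than the whole":

```python
class Gross:
    """Toy number sum_k c_k * (1)^k stored as {power: coefficient}.
    Power 0 is the finite part, positive powers are infinite parts,
    negative powers are infinitesimal parts."""
    def __init__(self, coeffs):
        self.c = {k: v for k, v in coeffs.items() if v != 0}

    def __add__(self, other):
        out = dict(self.c)
        for k, v in other.c.items():
            out[k] = out.get(k, 0) + v
        return Gross(out)

    def __neg__(self):
        return Gross({k: -v for k, v in self.c.items()})

    def __lt__(self, other):
        diff = (other + (-self)).c
        if not diff:
            return False
        return diff[max(diff)] > 0   # sign of the highest-power term

GROSSONE = Gross({1: 1})   # the infinite unit (grossone)
ONE = Gross({0: 1})

print(ONE < GROSSONE)                    # True: any finite number < grossone
print((GROSSONE + (-ONE)) < GROSSONE)    # True: grossone - 1 < grossone
```

This finite symbolic encoding of finite, infinite, and infinitesimal parts is the flavor of what the talk's "write down infinite numbers with a finite number of symbols" claim refers to, though the real methodology is considerably richer.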

  8. Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling

    NASA Astrophysics Data System (ADS)

    Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo

    2017-06-01

    To address the low efficiency and poor mesh quality of gear meshing in current mainstream finite element software, a universal three-dimensional gear model is established and the rules of element and node arrangement are explored. On this basis, a parameterization-based finite element model generation method for universal gears is proposed. A Visual Basic program is used to perform the finite element meshing, assign material properties, and set the boundary/load conditions and other pre-processing steps. The dynamic meshing analysis of the gears is carried out with the proposed method and compared with calculated values to verify its correctness. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new approach to FEM pre-processing.

  9. Large-deviation joint statistics of the finite-time Lyapunov spectrum in isotropic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Perry L., E-mail: pjohns86@jhu.edu; Meneveau, Charles

    2015-08-15

    One of the hallmarks of turbulent flows is the chaotic behavior of fluid particle paths, with exponentially growing separation among them while their distance does not exceed the viscous range. The maximal (positive) Lyapunov exponent represents the average strength of the exponential growth rate, while fluctuations in the rate of growth are characterized by the finite-time Lyapunov exponents (FTLEs). In the last decade or so, the notion of Lagrangian coherent structures (which are often computed using FTLEs) has gained attention as a tool for visualizing coherent trajectory patterns in a flow and distinguishing regions of the flow with different mixing properties. A quantitative statistical characterization of FTLEs can be accomplished using the statistical theory of large deviations, based on the so-called Cramér function. To obtain the Cramér function from data, we use both the method based on measuring moments and the method based on measuring histograms, and introduce a finite-size correction to the histogram-based method. We generalize the existing univariate formalism to the joint distributions of the two FTLEs needed to fully specify the Lyapunov spectrum in 3D flows. The joint Cramér function of turbulence is measured from two direct numerical simulation datasets of isotropic turbulence. Results are compared with joint statistics of FTLEs computed using only the symmetric part of the velocity gradient tensor, as well as with joint statistics of instantaneous strain-rate eigenvalues. When using only the strain contribution of the velocity gradient, the maximal FTLE nearly doubles in magnitude, highlighting the role of rotation in de-correlating the fluid deformations along particle paths. We also extend the large-deviation theory to study the statistics of the ratio of FTLEs. The most likely ratio of the FTLEs λ₁ : λ₂ : λ₃ is shown to be about 4:1:−5, compared to about 8:3:−11 when using only the strain-rate tensor for calculating fluid volume deformations. The results serve to characterize the fundamental statistical and geometric structure of turbulence at small scales, including cumulative, time-integrated effects. These are important for deformable particles such as droplets and polymers advected by turbulence.
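
    The FTLE definition involved here is generic: for a flow map with deformation gradient J over time T, the exponents are λ_i = ln σ_i(J) / T, with σ_i the singular values of J. A minimal 2-D example (the definition only, not the paper's DNS pipeline):

```python
import math

def ftle_2d(J, T):
    """Finite-time Lyapunov exponents of a 2x2 deformation gradient J:
    lambda_i = ln(sigma_i(J)) / T, computed from the eigenvalues of the
    Cauchy-Green tensor C = J^T J (the eigenvalues of C are sigma_i^2)."""
    (a, b), (c, d) = J
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d   # entries of C
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tuple(0.5 * math.log(e) / T
                 for e in (tr / 2.0 + disc, tr / 2.0 - disc))

# Volume-preserving stretch J = diag(2, 1/2) over unit time:
l1, l2 = ftle_2d([[2.0, 0.0], [0.0, 0.5]], T=1.0)
print(round(l1, 4), round(l2, 4))  # 0.6931 -0.6931, i.e. +/- ln 2
```

In 3-D incompressible turbulence the three exponents sum to zero, which is why reporting the most likely ratio λ₁ : λ₂ : λ₃ (e.g., 4:1:−5) fully summarizes the typical deformation geometry.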

  10. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), for many different D, C, and S. These problem variants include decision and optimization problems, for formulas, quantified formulas, and stochastically quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S), and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2 and let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_logn^bw-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problems MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists ε > 0 for which approximating the problems MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results by (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations: most previous results were done only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S): a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].
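
    A brute-force decision procedure for SAT(S) over a finite domain makes the problem statement concrete (exponential enumeration, purely illustrative; the paper is about complexity classification, not algorithms):

```python
from itertools import product

def sat(relations, num_vars, domain):
    """Decide SAT(S): does some assignment over `domain` satisfy a finite
    conjunction of relations?  Each constraint is (R, vars), where R is a
    set of allowed tuples.  Brute force: exponential in num_vars."""
    for assign in product(domain, repeat=num_vars):
        if all(tuple(assign[v] for v in vs) in R for R, vs in relations):
            return True
    return False

# S = {NAE} (not-all-equal, arity 3) over the Boolean domain {0, 1}.
NAE = {t for t in product((0, 1), repeat=3) if len(set(t)) > 1}

print(sat([(NAE, (0, 1, 2))], 3, (0, 1)))   # True:  e.g. (0, 0, 1)
print(sat([(NAE, (0, 0, 0))], 1, (0, 1)))   # False: NAE(x, x, x) is empty
```

The paper's representability condition asks that every finite-arity relation on Σ_k be expressible as an existentially quantified conjunction of such constraints, which is what drives the completeness results.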

  11. Entanglement entropy in Fermi gases and Anderson's orthogonality catastrophe.

    PubMed

    Ossipov, A

    2014-09-26

    We study the ground-state entanglement entropy of a finite subsystem of size L of an infinite system of noninteracting fermions scattered by a potential of finite range a. We derive a general relation between the scattering matrix and the overlap matrix and use it to prove that for a one-dimensional symmetric potential the von Neumann entropy, the Rényi entropies, and the full counting statistics are robust against potential scattering, provided that L/a≫1. The results of numerical calculations support the validity of this conclusion for a generic potential.
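
    For noninteracting fermions, the entanglement entropy of a subsystem follows from the eigenvalues ν of the correlation matrix restricted to that subsystem (Peschel's formula, a standard tool in this setting, though the abstract does not spell it out). A minimal sketch with a trivially 1x1 restricted matrix:

```python
import math

def entropy_from_occupations(nus):
    """Von Neumann entanglement entropy of a free-fermion subsystem from
    the eigenvalues nu of the restricted correlation matrix
    C_ij = <c_i^dag c_j>:  S = -sum[nu*ln(nu) + (1-nu)*ln(1-nu)]."""
    s = 0.0
    for nu in nus:
        for p in (nu, 1.0 - nu):
            if 0.0 < p < 1.0:
                s -= p * math.log(p)
    return s

# Two-site hopping model with one fermion in the bonding orbital
# (|1> + |2>)/sqrt(2): C = [[1/2, 1/2], [1/2, 1/2]], and restricted
# to site 1 it is just [1/2], a maximally entangled occupation.
print(round(entropy_from_occupations([0.5]), 6))  # 0.693147 = ln 2
```

The scattering result of the paper amounts to showing that, for L/a ≫ 1, the spectrum of the restricted correlation matrix (and hence this entropy and the full counting statistics) is unchanged by a symmetric short-range potential.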

  12. Modeling the nonlinear dielectric response of glass formers

    NASA Astrophysics Data System (ADS)

    Buchenau, U.

    2017-06-01

    The recently developed pragmatical model of asymmetric double-well potentials with a finite lifetime is applied to nonlinear dielectric data in polar undercooled liquids. The viscous effects from the finite lifetime provide a crossover from the cooperative jumps of many molecules at short times to the motion of statistically independent molecules at long times. The model allows us to determine the size of cooperatively rearranging regions from nonlinear ω data and throws new light on a known inconsistency between nonlinear ω and 3ω signals for glycerol and propylene carbonate.

  13. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
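
    The statistical model underlying CARES-type fast-fracture analysis is Weibull's weakest-link theory. The sketch below is a minimal uniaxial two-parameter Weibull version with hypothetical element data, not CARES's actual multiaxial model or file formats:

```python
import math

def failure_probability(elements, sigma0, m):
    """Two-parameter Weibull weakest-link model:
    P_f = 1 - exp(-sum_e V_e * (sigma_e / sigma0)^m)
    over finite elements with volume V_e and (tensile) stress sigma_e.
    sigma0 is the Weibull scale parameter, m the Weibull modulus."""
    risk = sum(v * (max(s, 0.0) / sigma0) ** m for v, s in elements)
    return 1.0 - math.exp(-risk)

# Hypothetical (volume mm^3, stress MPa) pairs from an FE stress solution.
elements = [(2.0, 150.0), (1.5, 210.0), (3.0, 90.0)]
pf = failure_probability(elements, sigma0=300.0, m=10.0)
print(0.0 < pf < 1.0)  # True; component reliability = 1 - pf
```

Summing the "risk of rupture" element by element is what ties the reliability estimate to the per-element stress and volume outputs mentioned above.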

  14. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

    This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which make it possible to implement an exact penalty method. A new method is exploited to address convergence of trajectories, which is based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.

  15. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    This paper discusses an implicit finite difference scheme for solving the heat equation, together with a simple level set method for capturing the interface between the solid and liquid phases; the two are combined to solve Stefan problems.
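
The level set half of such a scheme can be sketched minimally. This is a hedged toy, not the paper's method: a prescribed uniform interface speed V stands in for the velocity that the Stefan condition would supply from the temperature gradient, and only first-order upwind advection is used.

```python
import numpy as np

# Minimal 1-D level set sketch. phi is a signed distance function; phi < 0 marks
# the solid phase, and the interface is the zero level set of phi.

N, L, V, dt = 200, 1.0, 0.1, 0.002
x = np.linspace(0.0, L, N)
phi = x - 0.3                          # interface initially at x = 0.3

steps = 1000                           # advance to t = 2.0; interface should reach 0.5
for _ in range(steps):
    # first-order upwind advection of phi_t + V phi_x = 0 (valid for V > 0)
    phi[1:] = phi[1:] - V * dt / (x[1] - x[0]) * (phi[1:] - phi[:-1])
    phi[0] = phi[1] - (x[1] - x[0])    # extrapolate at the inflow boundary

# recover the interface location as the zero crossing of phi
i = int(np.argmax(phi > 0))
s = x[i - 1] - phi[i - 1] * (x[i] - x[i - 1]) / (phi[i] - phi[i - 1])
```

In a full Stefan solver, V would be recomputed each step from the jump in the temperature gradient across the interface, with the heat equation advanced implicitly on either side.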

  16. Intermittency in two-dimensional Ekman-Navier-Stokes turbulence.

    PubMed

    Boffetta, G; Celani, A; Musacchio, S; Vergassola, M

    2002-08-01

    We study the statistics of the vorticity field in two-dimensional Navier-Stokes turbulence with linear Ekman friction. We show that the small-scale vorticity fluctuations are intermittent, as conjectured by Bernard [Europhys. Lett. 50, 333 (2000)] and Nam et al. [Phys. Rev. Lett. 84, 5134 (2000)]. The small-scale statistics of vorticity fluctuations coincide with those of a passive scalar with finite lifetime transported by the velocity field itself.

  17. Statistical physics of the yielding transition in amorphous solids.

    PubMed

    Karmakar, Smarajit; Lerner, Edan; Procaccia, Itamar

    2010-11-01

    The art of making structural, polymeric, and metallic glasses is rapidly developing, with many applications. A limitation is that under increasing external strain all amorphous solids (like their crystalline counterparts) have a finite yield stress which cannot be exceeded without effecting a plastic response, which typically leads to mechanical failure. Understanding this is crucial for assessing the risk of failure of glassy materials under mechanical loads. Here we show that the statistics of the energy barriers ΔE that need to be surmounted change from a probability distribution function (pdf) that goes smoothly to zero as ΔE → 0 to a pdf which is finite at ΔE = 0. This fundamental change implies a dramatic transition in the mechanical stability properties with respect to external strain. We derive exact results for the scaling exponents that characterize the magnitudes of average energy and stress drops in plastic events as a function of system size.

  18. Dependence of exponents on text length versus finite-size scaling for word-frequency distributions

    NASA Astrophysics Data System (ADS)

    Corral, Álvaro; Font-Clos, Francesc

    2017-08-01

    Some authors have recently argued that a finite-size scaling law for the text-length dependence of word-frequency distributions cannot be conceptually valid. Here we give solid quantitative evidence for the validity of this scaling law, using both careful statistical tests and analytical arguments based on the generalized central-limit theorem applied to the moments of the distribution (and obtaining a novel derivation of Heaps' law as a by-product). We also find that the picture of word-frequency distributions with power-law exponents that decrease with text length [X. Yan and P. Minnhagen, Physica A 444, 828 (2016), 10.1016/j.physa.2015.10.082] does not stand with rigorous statistical analysis. Instead, we show that the distributions are perfectly described by power-law tails with stable exponents, whose values are close to 2, in agreement with the classical Zipf's law. Some misconceptions about scaling are also clarified.
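
The claim of stable power-law exponents close to 2 can be illustrated with a maximum-likelihood (Hill) tail estimator. This is a hedged illustration on synthetic Pareto data, not the paper's analysis of word-frequency corpora; the sample size and seed are arbitrary choices.

```python
import numpy as np

# Synthetic Pareto sample with true tail exponent alpha = 2 (the value the
# abstract reports for word frequencies), then the Hill/ML estimate:
#   alpha_hat = n / sum(log(x_i / xmin)).

rng = np.random.default_rng(0)
alpha_true, xmin, n = 2.0, 1.0, 200_000

# rng.pareto draws Lomax variates; xmin * (1 + Lomax) is classical Pareto(alpha, xmin)
samples = xmin * (1.0 + rng.pareto(alpha_true, size=n))

alpha_hat = n / np.log(samples / xmin).sum()
```

The estimator's standard error scales like alpha / sqrt(n), so with 2 x 10^5 samples the recovered exponent sits within a fraction of a percent of the true value, which is the kind of stability-under-sample-size check the abstract describes.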

  19. Shielding Effectiveness in a Two-Dimensional Reverberation Chamber Using Finite-Element Techniques

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.

    2006-01-01

    Reverberation chambers are attaining increased importance in the determination of the electromagnetic susceptibility of avionics equipment. Given the nature of the variable boundary condition, the ability of a given source to couple energy into certain modes, and the passband characteristic due to the chamber Q, the fields are typically characterized by statistical means. The emphasis of this work is to apply finite-element techniques at cutoff to the analysis of a two-dimensional structure to examine shielding-effectiveness issues in a reverberating environment. Simulated mechanical stirring will be used to obtain the appropriate statistical field distribution. The shielding effectiveness (SE) in a simulated reverberating environment is compared to measurements in a reverberation chamber. A log-normal distribution for the SE is observed, with implications for system designers. The work is intended to provide further refinement in the consideration of SE in a complex electromagnetic environment.

  20. Statistics of backscatter radar return from vegetation

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Chen, K. S.; Fung, A. K.

    1992-01-01

    The statistical characteristics of radar return from vegetation targets are investigated through a simulation study based upon the first-order scattered field. For simulation purposes, the vegetation targets are modeled as a layer of randomly oriented and spaced finite cylinders, needles, or discs, or a combination of them. The finite cylinder is used to represent a branch or a trunk, the needle a stem or a coniferous leaf, and the disc a deciduous leaf. For a plane wave illuminating a vegetation canopy, simulation results show that the signal returned from a layer of disc- or needle-shaped leaves follows the Gamma distribution, and that the signal returned from a layer of branches resembles the log-normal distribution. The Gamma distribution also represents the signal returned from a layer of a mixture of branches and leaves, regardless of the leaf shapes. Results also indicate that the polarization state does not have a significant impact on the signal distribution.
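
Identifying a Gamma distribution from simulated return power is a simple moment-matching exercise. This is a hedged sketch on synthetic data, not the paper's radar simulation; the shape and scale values are arbitrary.

```python
import numpy as np

# Method-of-moments fit of a Gamma distribution, the form the abstract reports
# for returns from leaf layers. Synthetic "return power" stands in for the
# first-order scattered field intensities.

rng = np.random.default_rng(1)
k_true, theta_true = 3.0, 2.0
power = rng.gamma(k_true, theta_true, size=100_000)

m, v = power.mean(), power.var()
k_hat = m**2 / v           # Gamma shape:  mean^2 / variance
theta_hat = v / m          # Gamma scale:  variance / mean
```

Because the Gamma family is closed under these two moments, recovering (shape, scale) close to the generating values is a quick sanity check that a Gamma model is consistent with the data.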

  1. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  2. Local dependence in random graph models: characterization, properties and statistical inference

    PubMed Central

    Schweinberger, Michael; Handcock, Mark S.

    2015-01-01

    Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142

  3. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.

  4. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach, based on a regularization energy functional, which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed-distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level set approach is efficient, robust, and easy to implement.

  5. Pretest 3-D finite element modeling of the wedge pillar portion of the WIPP (Waste Isolation Pilot Plant) Geomechanical Evaluation (Room G) in situ experiment. [Waste Isolation Pilot Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preece, D.S.

    Pretest 3-D finite element calculations have been performed on the wedge pillar portion of the WIPP Geomechanical Evaluation Experiment. The wedge pillar separates two drifts that intersect at an angle of 7.5°. The purpose of the experiment is to provide data on the creep behavior of the wedge and progressive failure at the tip. The first set of calculations utilized a symmetry plane on the center-line of the wedge, which allowed treatment of the entire configuration by modeling half of the geometry. Two 3-D calculations in this first set were performed with different drift widths to study the influence of drift size on closure and maximum stress. A cross-section perpendicular to the wedge was also analyzed with 2-D finite element models and the results compared to the 3-D results. In another set of 3-D calculations both drifts were modeled, but with less distance between the drifts and the outer boundaries. Results of these calculations are compared with results from the other calculations to better understand the influence of boundary conditions.

  6. Modular Extensions of Unitary Braided Fusion Categories and 2+1D Topological/SPT Orders with Symmetries

    NASA Astrophysics Data System (ADS)

    Lan, Tian; Kong, Liang; Wen, Xiao-Gang

    2017-04-01

    A finite bosonic or fermionic symmetry can be described uniquely by a symmetric fusion category E. In this work, we propose that 2+1D topological/SPT orders with a fixed finite symmetry E are classified, up to E_8 quantum Hall states, by the unitary modular tensor categories C over E and the modular extensions of each C. In the case C = E, we prove that the set M_ext(E) of all modular extensions of E has a natural structure of a finite abelian group. We also prove that the set M_ext(C) of all modular extensions of C, if not empty, is equipped with a natural M_ext(E)-action that is free and transitive. Namely, the set M_ext(C) is an M_ext(E)-torsor. As special cases, we explain in detail how the group M_ext(E) recovers the well-known group-cohomology classification of the 2+1D bosonic SPT orders and Kitaev's 16-fold ways. We also discuss briefly the behavior of the group M_ext(E) under symmetry-breaking processes and its relation to Witt groups.

  7. Finite-time and fixed-time synchronization analysis of inertial memristive neural networks with time-varying delays.

    PubMed

    Wei, Ruoyu; Cao, Jinde; Alsaedi, Ahmed

    2018-02-01

    This paper investigates the finite-time synchronization and fixed-time synchronization problems of inertial memristive neural networks with time-varying delays. By utilizing the Filippov theory of discontinuous systems and Lyapunov stability theory, several sufficient conditions are derived to ensure finite-time synchronization of inertial memristive neural networks. Then, for the purpose of making the settling time independent of the initial condition, we consider fixed-time synchronization. A novel criterion guaranteeing the fixed-time synchronization of inertial memristive neural networks is derived. Finally, three examples are provided to demonstrate the effectiveness of our main results.

  8. Cantorian Set Theory and Teaching Prospective Teachers

    ERIC Educational Resources Information Center

    Narli, Serkan; Baser, Nes'e

    2008-01-01

    Infinity has contradictions arising from its nature. Since mind is actually adapted to finite realities attained by behaviors in space and time, when one starts to deal with real infinity, contradictions will arise. In particular, Cantorian Set Theory for it involves the notion of "equivalence of a set to one of its proper subsets,"…

  10. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.
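
The second-moment idea can be shown on a single element. This is a hedged one-element illustration, not the paper's PFEM formulation: a bar with displacement u = F L / (E A) and a random Young's modulus E, with all numerical values hypothetical.

```python
import numpy as np

# First-order second-moment propagation: expand the response about the mean input
# and carry only mean and variance, then check against Monte Carlo as the abstract does.

F, L, A = 1000.0, 2.0, 1e-4           # load [N], length [m], area [m^2]
E_mean, E_std = 200e9, 10e9           # Young's modulus statistics [Pa]

u = lambda E: F * L / (E * A)
du_dE = -F * L / (E_mean**2 * A)      # sensitivity evaluated at the mean

u_mean = u(E_mean)                    # first-order estimate of the response mean
u_std = abs(du_dE) * E_std            # first-order estimate of the response std

# Monte Carlo check
rng = np.random.default_rng(2)
E = rng.normal(E_mean, E_std, 100_000)
u_mc = u(E)
```

With a 5% coefficient of variation the first-order moments agree with Monte Carlo to within a few percent, consistent with the abstract's caveat that the approach is accurate when the scale of randomness is not very large.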

  11. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  12. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  13. Numerical approach for finite volume three-body interaction

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Gasparian, Vladimir

    2018-01-01

    In the present work, we study a numerical approach to the one-dimensional finite volume three-body interaction. The method is demonstrated on a toy model of three spinless particles interacting through pair-wise δ-function potentials. The numerical results are compared with the exact solutions for three interacting spinless bosons when the strengths of the short-range interactions are set equal for all pairs.

  14. Symbolic Dynamics and Grammatical Complexity

    NASA Astrophysics Data System (ADS)

    Hao, Bai-Lin; Zheng, Wei-Mou

    The following sections are included: * Formal Languages and Their Complexity * Formal Language * Chomsky Hierarchy of Grammatical Complexity * The L-System * Regular Language and Finite Automaton * Finite Automaton * Regular Language * Stefan Matrix as Transfer Function for Automaton * Beyond Regular Languages * Feigenbaum and Generalized Feigenbaum Limiting Sets * Even and Odd Fibonacci Sequences * Odd Maximal Primitive Prefixes and Kneading Map * Even Maximal Primitive Prefixes and Distinct Excluded Blocks * Summary of Results

  15. The accuracy of the Gaussian-and-finite-element-Coulomb (GFC) method for the calculation of Coulomb integrals.

    PubMed

    Przybytek, Michal; Helgaker, Trygve

    2013-08-07

    We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at the smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ^(H) = 2) and eight (γ^(1st) = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (α_min^(G) = 0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately like d^4 with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step, namely the generation of the three-center overlap integrals with three Gaussian orbitals. The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems; however, this scaling can be reduced to linear by introducing more effective techniques for recognizing significant three-center overlap distributions.

  16. A parallelized binary search tree

    USDA-ARS?s Scientific Manuscript database

    PTTRNFNDR is an unsupervised statistical learning algorithm that detects patterns in DNA sequences, protein sequences, or any natural language texts that can be decomposed into letters of a finite alphabet. PTTRNFNDR performs complex mathematical computations and its processing time increases when i...

  17. Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies.

    PubMed

    Liu, Jia; Duffy, Ben A; Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung

    2017-02-15

    A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear which methods, out of a large number of options, are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses and simulations guided by experimental data as a means to investigate different analysis methods' performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, the conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for a good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.
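
An FIR basis set is just a lagged-indicator design matrix. The sketch below is a hedged illustration, not the paper's pipeline: the scan count, onsets, and 8-bin FIR order are arbitrary choices, and the toy signal is noiseless.

```python
import numpy as np

# Build an FIR design matrix for a block/event design: regressor j is 1 exactly
# j scans after every onset, so the fitted coefficients trace the response shape
# without assuming a canonical hemodynamic response function.

n_scans, fir_order = 120, 8
onsets = [10, 40, 70, 100]                     # stimulus onsets in scan units

X = np.zeros((n_scans, fir_order))
for onset in onsets:
    for lag in range(fir_order):
        t = onset + lag
        if t < n_scans:
            X[t, lag] = 1.0

# GLM least-squares fit recovers one amplitude per post-stimulus lag
y = X @ np.arange(1.0, fir_order + 1)          # noiseless toy response shape 1..8
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the onsets here are spaced further apart than the FIR window, the columns are orthogonal and the fit recovers the per-lag amplitudes exactly; overlapping responses would instead be disentangled by the least-squares solve.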

  18. Spatial Statistics of the Clark County Parcel Map, Trial Geotechnical Models, and Effects on Ground Motions in Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    Savran, W. H.; Louie, J. N.; Pullammanappallil, S.; Pancha, A.

    2011-12-01

    When deterministically modeling the propagation of seismic waves, shallow shear-wave velocity plays a crucial role in predicting shaking effects such as peak ground velocity (PGV). The Clark County Parcel Map provides us with a data set of geotechnical velocities in Las Vegas Valley at an unprecedented level of detail. Las Vegas Valley is a basin with geologic properties similar to some areas of Southern California. We analyze elementary spatial statistical properties of the Parcel Map, along with calculating its spatial variability. We then investigate these spatial statistics in the PGV results computed from two geotechnical models that incorporate the Parcel Map as parameters. A histogram of the Parcel Map 30-meter depth-averaged shear velocity (Vs30) values shows the data to approximately fit a bimodal normal distribution with μ1 = 400 m/s, σ1 = 76 m/s, μ2 = 790 m/s, σ2 = 149 m/s, and p = 0.49, where μ is the mean, σ is the standard deviation, and p is the probability mixing factor for the bimodal distribution. Based on plots of spatial power spectra, the Parcel Map appears to be fractal over the second and third decades, in kilometers. The spatial spectra possess the same fractal dimension in the N-S and E-W directions, indicating isotropic scale invariance. We configured finite-difference wave propagation models at 0.5 Hz with LLNL's E3D code, utilizing the Parcel Map as input parameters to compute a PGV data set from a scenario earthquake (Black Hills M6.5). The resulting PGV is fractal over the same spatial frequencies as the Vs30 data sets associated with their respective models. The fractal dimension is systematically lower in all of the PGV maps than in the Vs30 maps, showing that the PGV maps are richer in higher spatial frequencies. This is potentially caused by a lens-like focusing effect on seismic waves due to spatial heterogeneity in site conditions.
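
The reported two-component mixture is easy to reproduce numerically. This is a hedged sketch that simply samples the mixture with the parameters quoted in the abstract; it is not the authors' fitting procedure, and the sample size and seed are arbitrary.

```python
import numpy as np

# Sample the bimodal normal mixture reported for the Vs30 histogram:
# component 1 with probability p, component 2 otherwise.

rng = np.random.default_rng(3)
mu = np.array([400.0, 790.0])       # component means [m/s]
sigma = np.array([76.0, 149.0])     # component standard deviations [m/s]
p = 0.49                            # mixing probability for component 1
n = 200_000

comp1 = rng.random(n) < p
vs30 = np.where(comp1,
                rng.normal(mu[0], sigma[0], n),
                rng.normal(mu[1], sigma[1], n))

mean_theory = p * mu[0] + (1 - p) * mu[1]   # mixture mean, about 599 m/s
```

The sample mean matches the closed-form mixture mean, a quick consistency check before fitting such a model to real Vs30 data.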

  19. Applications of finite-size scaling for atomic and non-equilibrium systems

    NASA Astrophysics Data System (ADS)

    Antillon, Edwin A.

    We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the energy dependence on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, Density Functional Theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the finite-size scaling formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large-basis limit. Results prove to be in good agreement with previous Slater-basis-set calculations and demonstrate that this combined approach provides a promising first-principles route to describing quantum phase transitions for materials and extended systems. In the second part we look at a non-equilibrium one-dimensional model known as the raise-and-peel model, describing a surface which grows locally and has non-local desorption. For specific values of the adsorption (u_a) and desorption (u_d) rates the model shows interesting features. At u_a = u_d, the model is described by a conformal field theory (with central charge c = 0), and its stationary probability distribution can be mapped to the ground state of a quantum chain and can also be related to a two-dimensional statistical model. For u_a ≥ u_d, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality for the cases with and without the wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained by using exact diagonalization of small lattices and Monte Carlo simulations.

  20. Numerical simulation of rarefied gas flow through a slit

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong

    1990-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are for hard vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques, the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method, are used.

  1. Binary tree eigen solver in finite element analysis

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Janetzke, D. C.; Kiraly, L. J.

    1993-01-01

    This paper presents a transputer-based binary tree eigensolver for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on the method of recursive doubling, in which the parallel implementation of a number of associative operations on an arbitrary set of N elements requires on the order of O(log2 N) steps, compared to (N-1) steps if implemented sequentially. The hardware used in the implementation of the binary tree consists of 32 transputers. The algorithm is written in OCCAM, a high-level language developed for the transputer to express parallel programming constructs and to provide communication between processors. The algorithm can be replicated to match the size of the binary tree transputer network. Parallel and sequential finite element analysis programs have been developed to solve for the set of the least-order eigenpairs using the modified subspace method. The speed-up obtained for a typical analysis problem indicates close agreement with the theoretical prediction given by the method of recursive doubling.
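The recursive-doubling idea above is easiest to see on a prefix sum: ceil(log2 N) vectorized sweeps replace N-1 sequential additions. A minimal sketch (each numpy slice update stands in for one parallel step across processors):

```python
import numpy as np

def recursive_doubling_scan(x):
    """Inclusive prefix sum via recursive doubling: ceil(log2 N)
    sweeps, each of which could run in parallel across elements."""
    y = np.asarray(x, dtype=float).copy()
    step = 1
    while step < len(y):
        y[step:] = y[step:] + y[:-step]   # one 'parallel' sweep
        step *= 2
    return y

print(recursive_doubling_scan([1, 1, 1, 1, 1, 1, 1, 1]))
```

Any associative operation (max, product, matrix multiply) can replace the addition, which is what makes the scheme useful inside an eigensolver.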

  2. Analysis of turbulent free-jet hydrogen-air diffusion flames with finite chemical reaction rates

    NASA Technical Reports Server (NTRS)

    Sislian, J. P.; Glass, I. I.; Evans, J. S.

    1979-01-01

    A numerical analysis is presented of the nonequilibrium flow field resulting from the turbulent mixing and combustion of an axisymmetric hydrogen jet in a supersonic parallel ambient air stream. The effective turbulent transport properties are determined by means of a two-equation model of turbulence. The finite-rate chemistry model considers eight elementary reactions among six chemical species: H, O, H2O, OH, O2 and H2. The governing set of nonlinear partial differential equations was solved by using an implicit finite-difference procedure. Radial distributions were obtained at two downstream locations for some important variables affecting the flow development, such as the turbulent kinetic energy and its dissipation rate. The results show that these variables attain their peak values on the axis of symmetry. The computed distribution of velocity, temperature, and mass fractions of the chemical species gives a complete description of the flow field. The numerical predictions were compared with two sets of experimental data. Good qualitative agreement was obtained.

  3. A unique set of micromechanics equations for high temperature metal matrix composites

    NASA Technical Reports Server (NTRS)

    Hopkins, D. A.; Chamis, C. C.

    1985-01-01

    A unique set of micromechanics equations is presented for high temperature metal matrix composites. The set includes expressions to predict mechanical properties, thermal properties and constituent microstresses for the unidirectional fiber reinforced ply. The equations are derived from a mechanics of materials formulation assuming a square-array unit cell model of a single fiber, surrounding matrix, and an interphase to account for the chemical reaction which commonly occurs between fiber and matrix. A three-dimensional finite element analysis was used to perform a preliminary validation of the equations. Excellent agreement between properties predicted using the micromechanics equations and properties simulated by the finite element analyses is demonstrated. Implementation of the micromechanics equations as part of an integrated computational capability for nonlinear structural analysis of high temperature multilayered fiber composites is illustrated.

  4. A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
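The conventional finite-difference baseline that this record compares against can be sketched on a toy chaotic system (the logistic map, chosen here for brevity; the record itself concerns flow simulations): perturb the parameter, recompute a long-time-averaged statistic, and difference. The sampling noise of each average is exactly what makes this baseline unreliable for chaotic systems.

```python
import numpy as np

def time_average(r, n_steps=100000, n_spin=1000, x0=0.3):
    """Long-time average of x for the logistic map x -> r x (1 - x)."""
    x = x0
    for _ in range(n_spin):            # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        acc += x
    return acc / n_steps

def fd_sensitivity(r, dr=1e-3):
    """Central finite difference of the statistic; for chaotic systems
    the error is dominated by sampling noise in each average."""
    return (time_average(r + dr) - time_average(r - dr)) / (2.0 * dr)

print(fd_sensitivity(3.8))
```

Shrinking dr amplifies the noise (it divides a nearly constant sampling error by 2 dr), which is the difficulty shadowing-based methods are designed to avoid.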

  5. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  6. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
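The emulation idea in the two records above — predict an observable anywhere in parameter space from a small number of training simulations — can be sketched with a Gaussian radial-basis interpolator. This is a lightweight stand-in for the Gaussian-process emulators used in practice; the toy observable, the 2-D parameter space, and the kernel width are all illustrative assumptions.

```python
import numpy as np

def fit_rbf_emulator(theta, y, eps=2.0):
    """Gaussian radial-basis emulator: exact at the training designs,
    smooth in between (a stand-in for a Gaussian-process emulator)."""
    d2 = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
    weights = np.linalg.solve(np.exp(-eps * d2), y)
    def emulator(t):
        return np.exp(-eps * ((t - theta) ** 2).sum(-1)) @ weights
    return emulator

# toy 'simulation campaign': an observable over a 2-parameter space,
# evaluated at 26 training designs (mirroring the 26 models above)
rng = np.random.default_rng(1)
theta = rng.random((26, 2))
y = np.sin(3.0 * theta[:, 0]) + theta[:, 1] ** 2
emu = fit_rbf_emulator(theta, y)
```

Adding new training designs where the emulator is least certain is the "prescribed way" of systematically tightening accuracy that the records describe.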

  7. A proposed technique for vehicle tracking, direction, and speed determination

    NASA Astrophysics Data System (ADS)

    Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.

    2004-12-01

    A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem with significant hurdles still to be addressed. These are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea and is based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets, and are derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as described in this paper.

  8. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
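Simulation-based checks of analytic power formulas, as used when validating methods like those in this record, follow a standard pattern: simulate data under the alternative many times, count rejections, and compare with the closed form. The sketch below does this for a two-sided one-sample z-test with known unit variance — a deliberately simple stand-in for the high-dimensional UNIREP statistic of the paper.

```python
import math
import numpy as np

def mc_power(effect, n, n_sim=20000, seed=0):
    """Monte Carlo power of a two-sided one-sample z-test with known
    unit variance: fraction of simulated samples that reject H0."""
    rng = np.random.default_rng(seed)
    z_crit = 1.959964                  # Phi^{-1}(1 - 0.05/2)
    x = rng.normal(effect, 1.0, size=(n_sim, n))
    z = x.mean(axis=1) * math.sqrt(n)
    return float(np.mean(np.abs(z) > z_crit))

def analytic_power(effect, n, z_crit=1.959964):
    """Closed-form power for the same test, for comparison."""
    phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    shift = effect * math.sqrt(n)
    return phi(shift - z_crit) + phi(-shift - z_crit)
```

Agreement between the two to within Monte Carlo error is the kind of accuracy evidence the record reports for its far more general setting.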

  9. A theoretical study in extracting the essential features and dynamics of molecular motions: Intrinsic geometry methods for PF(5) pseudorotations and statistical methods for argon clusters

    NASA Astrophysics Data System (ADS)

    Panahi, Nima S.

    We studied the problem of understanding and computing the essential features and dynamics of molecular motions through the development of two theories for two different systems. First, we studied the Berry pseudorotation of PF5 and the rotations it induces in the molecule through its natural and intrinsic geometric nature, by setting it in the language of fiber bundles and graph theory. With these tools, we successfully extracted the essentials of the process' loops and induced rotations. The infinite number of pseudorotation loops was broken down into a small set of essential loops called "super loops", whose intrinsic properties and link to the physical movements of the molecule were extensively studied. In addition, only the three "self-edge loops" generated any induced rotations, and then only a finite number of classes of them. Second, we studied applying the statistical methods of Principal Components Analysis (PCA) and Principal Coordinate Analysis (PCO) to capture only the most important changes in argon clusters, in order to reduce computational costs and to graph the potential energy surface (PES) in three dimensions. Both methods proved successful, although PCA was only partially so, since its advantages appear only for PES databases much larger than those currently studied or computationally feasible in the coming decades. In addition, PCA is only needed for the very rare case of a PES database that does not already include Hessian eigenvalues.

  10. Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.

    PubMed

    Jiang, Z; Chen, W; Burkhart, C

    2013-11-01

    Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in literature to generate (reconstruct) 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically two-point correlation function and cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other method models the microstructure using stochastic models like a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices the accuracy due to issues in numerical implementations. A hybrid optimization approach of modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
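The Gaussian-random-field route described in this record can be sketched directly: filter white noise with a smooth spectral window, then threshold at the quantile that yields the target volume fraction. The Gaussian window and the specific parameters below are illustrative choices, not the paper's model.

```python
import numpy as np

def grf_microstructure(shape, corr_len, vol_frac, seed=0):
    """Two-phase microstructure from a thresholded Gaussian random
    field: spectrally filter white noise, then threshold at the
    quantile matching the target volume fraction."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    freqs = [np.fft.fftfreq(n) for n in shape]
    kk = sum(g ** 2 for g in np.meshgrid(*freqs, indexing="ij"))
    filt = np.exp(-2.0 * (np.pi * corr_len) ** 2 * kk)  # Gaussian window
    field = np.fft.ifftn(np.fft.fftn(noise) * filt).real
    thresh = np.quantile(field, 1.0 - vol_frac)
    return (field > thresh).astype(np.uint8)

phase = grf_microstructure((32, 32, 32), corr_len=3.0, vol_frac=0.3)
print(phase.mean())
```

The hybrid approach of the record would then refine such a quickly generated structure so that its two-point and cluster correlation functions match the target statistics.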

  11. Managing distance and covariate information with point-based clustering.

    PubMed

    Whigham, Peter A; de Graaf, Brandon; Srivastava, Rashmi; Glue, Paul

    2016-09-01

    Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data was derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH at spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH.
Accounting for covariate measures that exhibit spatial clustering, such as deprivation, is crucial when assessing point-based clustering.
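The constrained Monte Carlo approach in this record — Ripley's K with a null model that resamples only from the finite set of candidate locations — can be sketched as follows. Edge correction, the covariate weighting, and the rotated Minkowski metric of the paper are omitted; this is the bare simulation-envelope idea.

```python
import numpy as np

def ripley_k(points, area, radii):
    """Naive Ripley's K estimate (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    lam = n / area
    return np.array([(d < r).sum() / (n * lam) for r in radii])

def mc_envelope(candidates, n_obs, area, radii, n_sim=199, seed=0):
    """Null envelope: complete spatial randomness constrained to a
    finite set of candidate locations (e.g. residential parcels)."""
    rng = np.random.default_rng(seed)
    sims = np.empty((n_sim, len(radii)))
    for s in range(n_sim):
        idx = rng.choice(len(candidates), size=n_obs, replace=False)
        sims[s] = ripley_k(candidates[idx], area, radii)
    return np.quantile(sims, [0.025, 0.975], axis=0)
```

Observed K values above the upper envelope at a given radius indicate clustering beyond what the constrained null model produces at that scale.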

  12. A parallel finite element simulator for ion transport through three-dimensional ion channel systems.

    PubMed

    Tu, Bin; Chen, Minxin; Xie, Yan; Zhang, Linbo; Eisenberg, Bob; Lu, Benzhuo

    2013-09-15

    A parallel finite element simulator, ichannel, is developed for ion transport through three-dimensional ion channel systems that consist of protein and membrane. The coordinates of heavy atoms of the protein are taken from the Protein Data Bank and the membrane is represented as a slab. The simulator contains two components: a parallel adaptive finite element solver for a set of Poisson-Nernst-Planck (PNP) equations that describe the electrodiffusion process of ion transport, and a mesh generation tool chain for ion channel systems, which is an essential component for the finite element computations. The finite element method has advantages in modeling irregular geometries and complex boundary conditions. We have built a tool chain to get the surface and volume mesh for ion channel systems, which consists of a set of mesh generation tools. The adaptive finite element solver in our simulator is implemented using the parallel adaptive finite element package Parallel Hierarchical Grid (PHG) developed by one of the authors, which provides the capability of doing large scale parallel computations with high parallel efficiency and the flexibility of choosing high order elements to achieve high order accuracy. The simulator is applied to a real transmembrane protein, the gramicidin A (gA) channel protein, to calculate the electrostatic potential, ion concentrations and I-V curve, with which both primitive and transformed PNP equations are studied and their numerical performances are compared. To further validate the method, we also apply the simulator to two other ion channel systems, the voltage dependent anion channel (VDAC) and α-Hemolysin (α-HL). The simulation results agree well with Brownian dynamics (BD) simulation results and experimental results. Moreover, because ionic finite-size effects can now be included in the PNP model, we also perform simulations using a size-modified PNP (SMPNP) model on VDAC and α-HL.
It is shown that the size effects in SMPNP can effectively lead to reduced current in the channel, and the results are closer to BD simulation results. Copyright © 2013 Wiley Periodicals, Inc.

  13. Finite-mode analysis by means of intensity information in fractional optical systems.

    PubMed

    Alieva, Tatiana; Bastiaans, Martin J

    2002-03-01

    It is shown how a coherent optical signal that contains only a finite number of Hermite-Gauss modes can be reconstructed from the knowledge of its Radon-Wigner transform-associated with the intensity distribution in a fractional-Fourier-transform optical system-at only two transversal points. The proposed method can be generalized to any fractional system whose generator transform has a complete orthogonal set of eigenfunctions.
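A signal composed of finitely many Hermite-Gauss modes, as in this record, is fixed by a finite coefficient vector, and the coefficients can be recovered by projection onto the orthonormal modes. The sketch below recovers them from samples of the field itself (the record's point is that intensity data in fractional-Fourier systems suffice; that inverse step is not reproduced here).

```python
import numpy as np

def hg_modes(x, n_modes):
    """Orthonormal Hermite-Gauss functions h_0..h_{n_modes-1} on the
    grid x, built with the stable three-term recurrence."""
    h = np.zeros((n_modes, len(x)))
    h[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)
    if n_modes > 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(1, n_modes - 1):
        h[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * h[n]
                    - np.sqrt(n / (n + 1.0)) * h[n - 1])
    return h

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
modes = hg_modes(x, 5)
coeffs_true = np.array([1.0, 0.0, 0.5, -0.25, 0.0])  # illustrative
signal = coeffs_true @ modes
coeffs = (modes * signal).sum(axis=1) * dx   # projection integrals
```

Because the modes are eigenfunctions of the fractional Fourier transform, propagation through such a system only multiplies each coefficient by a phase, which is what makes finite-mode reconstruction tractable.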

  14. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  15. Comparison of Computational, Model and Experimental, Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model- generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  16. Bin packing problem solution through a deterministic weighted finite automaton

    NASA Astrophysics Data System (ADS)

    Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.

    2016-06-01

    In this article the solution of the one-dimensional Bin Packing problem through a weighted finite automaton is presented. The construction of the automaton and its application to three different instances are presented: one synthetic data set and two benchmarks, N1C1W1_A.BPP belonging to data set Set_1 and BPP13.BPP belonging to hard28. The optimal solution of the synthetic data is obtained. For the first benchmark the solution obtained uses one container more than the ideal number of containers, and for the second benchmark two containers more than the ideal solution (approximately 2.5%). The runtime in all three cases was less than one second.
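For context, the classical baseline against which bin packing heuristics such as the automaton above are judged is first-fit decreasing (FFD), which is guaranteed to use at most 11/9 OPT + 6/9 bins. A minimal implementation (our illustration, not the paper's automaton):

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing: place each item, largest first, into the
    first open bin with room; open a new bin if none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(bins)
```

On this tiny instance FFD happens to reach the optimum of two bins; on hard instances like hard28 it, too, typically exceeds the ideal count by a bin or two.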

  17. Not all (possibly) “random” sequences are created equal

    PubMed Central

    Pincus, Steve; Kalman, Rudolf E.

    1997-01-01

    The need to assess the randomness of a single sequence, especially a finite sequence, is ubiquitous, yet is unaddressed by axiomatic probability theory. Here, we assess randomness via approximate entropy (ApEn), a computable measure of sequential irregularity, applicable to single sequences of both (even very short) finite and infinite length. We indicate the novelty and facility of the multidimensional viewpoint taken by ApEn, in contrast to classical measures. Furthermore and notably, for finite length, finite state sequences, one can identify maximally irregular sequences, and then apply ApEn to quantify the extent to which given sequences differ from maximal irregularity, via a set of deficit (defm) functions. The utility of these defm functions, which we show allows one to considerably refine the notions of probabilistic independence and normality, is featured in several studies, including (i) digits of e, π, √2, and √3, both in base 2 and in base 10, and (ii) sequences given by fractional parts of multiples of irrationals. We prove companion analytic results, which also feature in a discussion of the role and validity of the almost sure properties from axiomatic probability theory insofar as they apply to specified sequences and sets of sequences (in the physical world). We conclude by relating the present results and perspective to both previous and subsequent studies. PMID:11038612
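Approximate entropy as used in this record has a short, standard definition that is easy to implement directly: count template matches of length m and m+1 under a Chebyshev-distance tolerance r, and difference the averaged log match frequencies. A sketch following Pincus's formulation (self-matches included; the tolerance r = 0.2 below is a common but arbitrary default):

```python
import numpy as np

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a finite sequence."""
    u = np.asarray(u, dtype=float)
    def phi(k):
        n = len(u) - k + 1
        x = np.array([u[i:i + k] for i in range(n)])
        # Chebyshev distance between all template pairs
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)       # match frequency per template
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

# a constant sequence is maximally regular, so ApEn is zero
print(approx_entropy(np.zeros(50)))
```

Larger ApEn means greater sequential irregularity; the deficit functions of the record measure how far a given sequence falls below the maximal attainable value.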

  18. Localization in finite vibroimpact chains: Discrete breathers and multibreathers.

    PubMed

    Grinberg, Itay; Gendelman, Oleg V

    2016-09-01

    We explore the dynamics of strongly localized periodic solutions (discrete solitons or discrete breathers) in a finite one-dimensional chain of oscillators. Localization patterns with both single and multiple localization sites (breathers and multibreathers) are considered. The model involves parabolic on-site potential with rigid constraints (the displacement domain of each particle is finite) and a linear nearest-neighbor coupling. When the particle approaches the constraint, it undergoes an inelastic impact according to Newton's impact model. The rigid nonideal impact constraints are the only source of nonlinearity and damping in the system. We demonstrate that this vibro-impact model allows derivation of exact analytic solutions for the breathers and multibreathers with an arbitrary set of localization sites, both in conservative and in forced-damped settings. Periodic boundary conditions are considered; exact solutions for other types of boundary conditions are also available. Local character of the nonlinearity permits explicit derivation of a monodromy matrix for the breather solutions. Consequently, the stability of the derived breather and multibreather solutions can be efficiently studied in the framework of simple methods of linear algebra, and with rather moderate computational efforts. We find that the finiteness of the chain fragment and possible proximity of the localization sites strongly affect both the existence and the stability patterns of these localized solutions.

  19. A finite element method for solving the shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Comblen, Richard; Legrand, Sébastien; Deleersnijder, Eric; Legat, Vincent

    Within the framework of ocean general circulation modeling, the present paper describes an efficient way to discretize partial differential equations on curved surfaces by means of the finite element method on triangular meshes. Our approach benefits from the inherent flexibility of the finite element method. The key idea consists in a dialog between a local coordinate system defined for each element in which integration takes place, and a nodal coordinate system in which all local contributions related to a vectorial degree of freedom are assembled. Since each element of the mesh and each degree of freedom are treated in the same way, the so-called pole singularity issue is fully circumvented. Applied to the shallow water equations expressed in primitive variables, this new approach has been validated against the standard test set defined by [Williamson, D.L., Drake, J.B., Hack, J.J., Jakob, R., Swarztrauber, P.N., 1992. A standard test set for numerical approximations to the shallow water equations in spherical geometry. Journal of Computational Physics 102, 211-224]. Optimal rates of convergence for the P1NC-P1 finite element pair are obtained, for both global and local quantities of interest. Finally, the approach can be extended to three-dimensional thin-layer flows in a straightforward manner.

  20. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation

    PubMed Central

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840

  1. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation.

    PubMed

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
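The simplest non-intrusive propagation method reviewed in the two records above is plain Monte Carlo: sample the uncertain inputs, run the model as a black box, and summarize the output distribution. The sketch below uses a closed-form beam-deflection surrogate in place of a finite element model; the model and the input distributions are illustrative assumptions.

```python
import numpy as np

def propagate_mc(model, param_dists, n_samples=10000, seed=0):
    """Non-intrusive Monte Carlo uncertainty propagation: sample the
    uncertain inputs, evaluate the black-box model, summarize."""
    rng = np.random.default_rng(seed)
    samples = {k: dist(rng, n_samples) for k, dist in param_dists.items()}
    out = model(**samples)
    return out.mean(), out.std()

# toy surrogate for an FE model: cantilever tip deflection F L^3/(3 E I)
# with L = 1 m and I = 1e-6 m^4 fixed, F and E uncertain (illustrative)
model = lambda F, E: F * 1.0 ** 3 / (3.0 * E * 1e-6)
dists = {"F": lambda rng, n: rng.normal(100.0, 5.0, n),
         "E": lambda rng, n: rng.normal(2.0e11, 1.0e10, n)}
mean_defl, std_defl = propagate_mc(model, dists)
```

The same driver works unchanged for an actual finite element solver, which is precisely why non-intrusive methods are attractive; intrusive methods instead modify the solver's equations.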

  2. Consistent Initial Conditions for the DNS of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.; Blaisdell, G. A.

    1996-01-01

    Relationships between diverse thermodynamic quantities appropriate to weakly compressible turbulence are derived. It is shown that for turbulence of a finite turbulent Mach number there is a finite amount of compressibility. A methodology for generating initial conditions for the fluctuating pressure, density and dilatational velocity is given which is consistent with finite Mach number effects. Use of these initial conditions gives rise to a smooth development of the flow, in contrast to cases in which these fields are specified arbitrarily or set to zero. Comparisons of the effect of different types of initial conditions are made using direct numerical simulation of decaying isotropic turbulence.

  3. Performance of low-rank QR approximation of the finite element Biot-Savart law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D A; Fasenfest, B J

    2006-01-12

    We are concerned with the computation of magnetic fields from known electric currents in the finite element setting. In finite element eddy current simulations it is necessary to prescribe the magnetic field (or potential, depending upon the formulation) on the conductor boundary. In situations where the magnetic field is due to a distributed current density, the Biot-Savart law can be used, eliminating the need to mesh the nonconducting regions. Computation of the Biot-Savart law can be significantly accelerated using a low-rank QR approximation. We review the low-rank QR method and report performance on selected problems.
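The acceleration described in this record rests on the fact that the interaction matrix between well-separated source and observation points is numerically low-rank. The sketch below demonstrates this with a truncated SVD, used here as a stand-in for the pivoted-QR factorization of the report, on a Biot-Savart-like 1/|x - y| kernel.

```python
import numpy as np

def low_rank_factors(a, tol=1e-10):
    """Compress a dense interaction matrix to numerical rank k, so that
    a ~= left @ right; truncated SVD stands in for pivoted QR here."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))
    return u[:, :k] * s[:k], vt[:k]

# Biot-Savart-like kernel 1/|x - y| between two well-separated segments
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.linspace(5.0, 6.0, 200)[None, :]
a = 1.0 / np.abs(x - y)
left, right = low_rank_factors(a)
print(left.shape[1])   # numerical rank, far below 200
```

Applying the two thin factors costs O(k(m + n)) per matrix-vector product instead of O(mn), which is the source of the speedup for field evaluation on the conductor boundary.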

  4. On the explicit construction of Parisi landscapes in finite dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Y. V.; Bouchaud, J.-P.

    2007-12-01

    An N-dimensional Gaussian landscape with multiscale translation-invariant logarithmic correlations has been constructed, and the statistical mechanics of a single particle in this environment has been investigated. In the limit of a high dimensional N → ∞, the free energy of the system in the thermodynamic limit coincides with the most general version of Derrida’s generalized random energy model. The low-temperature behavior depends essentially on the spectrum of length scales involved in the construction of the landscape. The construction is argued to be valid in any finite spatial dimensions N ≥1.

  5. A Large Scale Dynamical System Immune Network Model with Finite Connectivity

    NASA Astrophysics Data System (ADS)

    Uezu, T.; Kadono, C.; Hatchett, J.; Coolen, A. C. C.

    We study a model of an idiotypic immune network which was introduced by N. K. Jerne. It is known that in immune systems there generally exist several kinds of immune cells which can recognize any particular antigen. Taking this fact into account and assuming that each cell interacts with only a finite number of other cells, we analyze a large scale immune network via both numerical simulations and statistical mechanical methods, and show that the distribution of the concentrations of antibodies becomes non-trivial for a range of values of the strength of the interaction and the connectivity.

  6. Quantum statistical mechanics of nonrelativistic membranes: crumpling transition at finite temperature

    NASA Astrophysics Data System (ADS)

    Borelli, M. E. S.; Kleinert, H.; Schakel, Adriaan M. J.

    2000-03-01

    The effect of quantum fluctuations on a nearly flat, nonrelativistic two-dimensional membrane with extrinsic curvature stiffness and tension is investigated. The renormalization group analysis is carried out in first-order perturbation theory. In contrast to thermal fluctuations, which soften the membrane at large scales and turn it into a crumpled surface, quantum fluctuations are found to stiffen the membrane, so that it exhibits a Hausdorff dimension equal to two. The large-scale behavior of the membrane is further studied at finite temperature, where a nontrivial fixed point is found, signaling a crumpling transition.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoban, Matty J.; Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD; Wallman, Joel J.

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.

  8. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors base treatment decisions on mechanical analysis, this work built a computer-assisted optimal-treatment system for femoral neck fracture oriented to clinical application. The system comprised three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module covered parametric modeling of the bone, the fracture surface, and the fixation screws and their positions, together with input and transmission of the model parameters. The finite element mechanical analysis module covered mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module covered extraction and display of the batch processing results, image generation, execution of the optimization program, and display of the optimal result. Given a specific patient's fracture parameters and the optimization rules, the system carried out the whole procedure from input of the fracture parameters to output of the optimal fixation plan, demonstrating its effectiveness. The system also has a friendly interface and simple operation, and its functionality can be improved quickly by modifying individual modules.

  9. Estimation of elastic moduli of graphene monolayer in lattice statics approach at nonzero temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zubko, I. Yu., E-mail: zoubko@list.ru; Kochurov, V. I.

    2015-10-27

    For the aim of crystal temperature control, a computational-statistical approach to studying the thermo-mechanical properties of finite-size crystals is presented. The approach combines high-performance computational techniques with statistical analysis of the crystal response to external thermo-mechanical actions for specimens with a statistically small number of atoms (for instance, nanoparticles). The heat motion of atoms is imitated within the statics approach by including independent degrees of freedom associated with atomic oscillations. It is found that under heating the response of the graphene material is nonsymmetric.

  10. [Application of finite element method in spinal biomechanics].

    PubMed

    Liu, Qiang; Zhang, Jun; Sun, Shu-Chun; Wang, Fei

    2017-02-25

    The finite element model is one of the most important methods in modern spinal biomechanics. It is used to simulate various states of the spine, to calculate the stress and strain distributions of the different structures in each state, and to explore the underlying mechanics, injury mechanisms, and treatment effectiveness. In the study of pathological states of the spine, the finite element method is mainly used to understand the mechanism at the lesion site, to evaluate the effects of different therapeutic tools, and to assist in the selection and improvement of those tools, in order to provide a theoretical basis for the rehabilitation of spinal lesions. The finite element method can further serve patients in spinal correction, surgery, and individualized implant design; the design and performance evaluation of implants must account for individual differences, and the evaluation system needs to be refined. At present, how to establish a model closer to the real situation remains the focus and difficulty of human finite element studies. Although the finite element method can simulate complex working conditions well, it is necessary to improve the authenticity of the model and its shareability across groups by combining methods from imaging, statistics, kinematics, and other fields. Copyright© 2017 by the China Journal of Orthopaedics and Traumatology Press.

  11. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
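The cyclic control described above can be illustrated with a minimal sequential projection scheme for the linear feasibility problem. This is a sketch in the spirit of ART-type methods, not the ART3 algorithm or the paper's transformed variant; the function name, tolerances, and toy constraints are invented for illustration.

```python
import numpy as np

def cyclic_projection(A, b, x0, max_sweeps=1000, tol=1e-10):
    """Cyclically project x onto the half-spaces a_i . x <= b_i.

    Sketch of a sequential feasibility solver with cyclic control
    (in the spirit of ART-type methods, not ART3 itself).
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_sweeps):
        violated = False
        for a_i, b_i in zip(A, b):    # cyclic control: one pass = one sweep
            r = a_i @ x - b_i
            if r > tol:               # constraint violated: project onto its boundary
                x -= (r / (a_i @ a_i)) * a_i
                violated = True
        if not violated:              # a full sweep with no violation => feasible
            return x
    return x

# Toy feasibility problem: x1 >= 0, x2 >= 0, x1 + x2 <= 1,
# written as rows a_i . x <= b_i
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
x = cyclic_projection(A, b, x0=[2.0, 2.0])
print(x, np.all(A @ x <= b + 1e-8))
```

The paper's point is precisely that this fixed cyclic sweep order can be replaced by other controls that keep finite convergence while visiting constraints in a more profitable order.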

  12. A quasi-Lagrangian finite element method for the Navier-Stokes equations in a time-dependent domain

    NASA Astrophysics Data System (ADS)

    Lozovskiy, Alexander; Olshanskii, Maxim A.; Vassilevski, Yuri V.

    2018-05-01

    The paper develops a finite element method for the Navier-Stokes equations of incompressible viscous fluid in a time-dependent domain. The method builds on a quasi-Lagrangian formulation of the problem. The paper provides stability and convergence analysis of the fully discrete (finite-difference in time and finite-element in space) method. The analysis does not assume any CFL time-step restriction; it rather needs mild conditions of the form $\Delta t \le C$, where $C$ depends only on problem data, and $h^{2m_u+2} \le c\,\Delta t$, where $m_u$ is the polynomial degree of the velocity finite element space. Both conditions result from a numerical treatment of practically important non-homogeneous boundary conditions. The theoretically predicted convergence rate is confirmed by a set of numerical experiments. Further, we apply the method to simulate a flow in a simplified model of the left ventricle of a human heart, where the ventricle wall dynamics is reconstructed from a sequence of contrast-enhanced Computed Tomography images.

  13. Global synchronization in finite time for fractional-order neural networks with discontinuous activations and time delays.

    PubMed

    Peng, Xiao; Wu, Huaiqin; Song, Ka; Shi, Jiaxin

    2017-10-01

    This paper is concerned with the global Mittag-Leffler synchronization and the synchronization in finite time for fractional-order neural networks (FNNs) with discontinuous activations and time delays. Firstly, the properties with respect to Mittag-Leffler convergence and convergence in finite time, which play a critical role in the investigation of the global synchronization of FNNs, are developed, respectively. Secondly, the novel state-feedback controller, which includes time delays and discontinuous factors, is designed to realize the synchronization goal. By applying the fractional differential inclusion theory, inequality analysis technique and the proposed convergence properties, the sufficient conditions to achieve the global Mittag-Leffler synchronization and the synchronization in finite time are addressed in terms of linear matrix inequalities (LMIs). In addition, the upper bound of the setting time of the global synchronization in finite time is explicitly evaluated. Finally, two examples are given to demonstrate the validity of the proposed design method and theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function of a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the variance of the autocorrelation function have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
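The two estimators contrasted in the abstract can be sketched for an i.i.d. series, where the true correlation vanishes at all nonzero lags. The code below is illustrative (mean-removed series, arbitrary length and lag count), not the authors' analysis; note the FFT estimator is circular.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4096)   # i.i.d. series: true correlation is zero at nonzero lags
x = x - x.mean()            # remove the sample mean
N = len(x)

def acf_direct(x, nlags):
    """Moving-average estimator: average the lag-k products directly
    (fewer terms enter the average as the lag grows)."""
    return np.array([np.mean(x[:N - k] * x[k:]) for k in range(nlags)])

def acf_fft(x):
    """Fourier-transform estimator: circular autocorrelation obtained
    as the inverse FFT of the power spectrum."""
    s = np.abs(np.fft.fft(x)) ** 2
    return np.fft.ifft(s).real / N

c_dir = acf_direct(x, 50)
c_fft = acf_fft(x)[:50]
# Both recover the variance at lag 0; nonzero lags fluctuate around 0
print(c_dir[0], c_fft[0], np.max(np.abs(c_dir[1:])))
```

For uncorrelated data both estimates of the nonzero-lag values scatter around zero with a spread on the order of 1/sqrt(N), which is the kind of baseline uncertainty the proposed statistical test quantifies.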

  15. The GPRIME approach to finite element modeling

    NASA Technical Reports Server (NTRS)

    Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.

    1983-01-01

    GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user-friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component has an interactive user interface with an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element data bases and hidden line removal, are available. This interactive modeling capability can produce a complete finite element model, writing an output file of grid and element data.

  16. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  17. Social Networks: Rational Learning and Information Aggregation

    DTIC Science & Technology

    2009-09-01

    most closely related to our work in this genre are Bala and Goyal (1998, 2001), DeMarzo , Vayanos and Zwiebel (2003) and Golub and Jackson (2007). These...observations. DeMarzo , Vayanos and Zwiebel and Golub and Jackson also study similar environ- ments and derive consensus-type results, whereby individuals in the...Cover T., “Hypothesis Testing with Finite Statistics,” The Annals of Mathematical Statistics, vol. 40, no. 3, pp. 828-835, 1969. [18] DeMarzo P.M

  18. Finite element model updating and damage detection for bridges using vibration measurement.

    DOT National Transportation Integrated Search

    2013-12-01

    In this report, the results of a study on developing a damage detection methodology based on Statistical Pattern Recognition are presented. This methodology uses a new damage-sensitive feature developed in this study that relies entirely on modal ...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauffer, D.

    After improving the Monte Carlo statistics, Derrida's exponent θ for the never damaged sites in Ising models at finite temperatures is shown to be compatible with θ(T = 0) = 0.22 in two dimensions. In three and more dimensions the number of these sites decays differently from zero temperature.

  20. A combinatorial approach to the design of vaccines.

    PubMed

    Martínez, Luis; Milanič, Martin; Legarreta, Leire; Medvedev, Paul; Malaina, Iker; de la Fuente, Ildefonso M

    2015-05-01

    We present two new problems of combinatorial optimization and discuss their applications to the computational design of vaccines. In the shortest λ-superstring problem, given a family S(1), ..., S(k) of strings over a finite alphabet, a set Τ of "target" strings over that alphabet, and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ target strings as substrings of S(i). In the shortest λ-cover superstring problem, given a collection X(1), ..., X(n) of finite sets of strings over a finite alphabet and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ elements of X(i) as substrings. The two problems are polynomially equivalent, and the shortest λ-cover superstring problem is a common generalization of two well known combinatorial optimization problems, the shortest common superstring problem and the set cover problem. We present two approaches to obtain exact or approximate solutions to the shortest λ-superstring and λ-cover superstring problems: one based on integer programming, and a hill-climbing algorithm. An application is given to the computational design of vaccines and the algorithms are applied to experimental data taken from patients infected by H5N1 and HIV-1.
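As a rough illustration of the superstring flavor of these problems, the classical greedy merge-by-maximum-overlap heuristic for the shortest common superstring (the λ = 1 special case) can be sketched as follows. This is not the paper's integer-programming or hill-climbing method, and the input strings are invented.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def merge(a, b):
    return a + b[overlap(a, b):]

def greedy_superstring(strings):
    """Greedy heuristic for the shortest common superstring: repeatedly
    merge the pair with maximum overlap. A sketch, not an exact solver."""
    # Drop strings already contained in another string
    strings = [s for i, s in enumerate(strings)
               if not any(s in t for j, t in enumerate(strings) if i != j)]
    while len(strings) > 1:
        _, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(strings)
                       for j, b in enumerate(strings) if i != j),
                      key=lambda t: t[0])
        merged = merge(strings[i], strings[j])
        strings = [s for k, s in enumerate(strings) if k not in (i, j)] + [merged]
    return strings[0]

s = greedy_superstring(["CGTA", "TAAG", "AGCG"])
print(s, all(t in s for t in ["CGTA", "TAAG", "AGCG"]))
```

The λ-cover generalization additionally requires choosing, for each set X(i), which λ of its elements to cover, which is where the set-cover structure and the paper's exact and hill-climbing approaches come in.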

  1. Ice Shape Characterization Using Self-Organizing Maps

    NASA Technical Reports Server (NTRS)

    McClain, Stephen T.; Tino, Peter; Kreeger, Richard E.

    2011-01-01

    A method for characterizing ice shapes using a self-organizing map (SOM) technique is presented. Self-organizing maps are neural-network techniques for representing noisy, multi-dimensional data aligned along a lower-dimensional and possibly nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. In information processing, the intent of SOM methods is to transmit the codebook vectors, which contain far fewer elements and require much less memory or bandwidth than the original noisy data set. When applied to airfoil ice accretion shapes, the properties of the codebook vectors and the statistical nature of the SOM methods allow for a quantitative comparison of experimentally measured mean or average ice shapes to ice shapes predicted using computer codes such as LEWICE. The nature of the codebook vectors also enables grid generation and surface roughness descriptions for use with the discrete-element roughness approach. In the present study, SOM characterizations are applied to a rime ice shape, a glaze ice shape at an angle of attack, a bi-modal glaze ice shape, and a multi-horn glaze ice shape. Improvements and future explorations will be discussed.
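The codebook update described above can be sketched for a toy 1-D manifold embedded in 2-D. The learning-rate and neighborhood schedules below are common illustrative choices, not those of the ice-shape study, and all names are invented.

```python
import numpy as np

def train_som(data, n_codes=8, n_iters=2000, seed=0):
    """Minimal 1-D self-organizing map: move the winner codebook vector
    (and, with smaller weight, its neighbors on the 1-D map) toward each
    sample. A sketch of the SOM update, not the ice-shape code itself."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    for t in range(n_iters):
        lr = 0.5 * (1.0 - t / n_iters)                        # decaying learning rate
        sigma = max(1.0, n_codes / 2 * (1.0 - t / n_iters))   # shrinking neighborhood
        x = data[rng.integers(len(data))]
        w = np.argmin(np.linalg.norm(codes - x, axis=1))      # winner index
        h = np.exp(-((np.arange(n_codes) - w) ** 2) / (2 * sigma ** 2))
        codes += lr * h[:, None] * (x - codes)
    return codes

# Noisy samples along a 1-D manifold (a sine arc) in 2-D
rng = np.random.default_rng(1)
t = rng.uniform(0, np.pi, 500)
data = np.c_[t, np.sin(t)] + rng.normal(scale=0.05, size=(500, 2))
codes = train_som(data)
# Codebook vectors should lie near the arc: |y - sin(x)| small on average
err = np.mean(np.abs(codes[:, 1] - np.sin(codes[:, 0])))
print(err)
```

The 500 noisy samples are summarized by 8 ordered codebook vectors tracing the arc, which is the compression-and-characterization role the abstract describes for measured ice shapes.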

  2. a Statistical Theory of the Epilepsies.

    NASA Astrophysics Data System (ADS)

    Thomas, Kuryan

    1988-12-01

    A new physical and mathematical model for the epilepsies is proposed, based on the theory of bond percolation on finite lattices. Within this model, the onset of seizures in the brain is identified with the appearance of spanning clusters of neurons engaged in the spurious and uncontrollable electrical activity characteristic of seizures. It is proposed that the fraction of excitatory to inhibitory synapses can be identified with a bond probability, and that the bond probability is a randomly varying quantity displaying Gaussian statistics. The consequences of the proposed model for the treatment of the epilepsies are explored. The nature of the data on the epilepsies which can be acquired in a clinical setting is described. It is shown that such data can be analyzed to provide preliminary support for the bond percolation hypothesis, and to quantify the efficacy of anti-epileptic drugs in a treatment program. The results of a battery of statistical tests on seizure distributions are discussed. The physical theory of the electroencephalogram (EEG) is described, and extant models of the electrical activity measured by the EEG are discussed, with an emphasis on their physical behavior. A proposal is made to explain the difference between the power spectra of electrical activity measured with cranial probes and with the EEG. Statistical tests on the characteristic EEG manifestations of epileptic activity are conducted, and their results described. Computer simulations of a correlated bond percolating system are constructed. It is shown that the statistical properties of the results of such a simulation are strongly suggestive of the statistical properties of clinical data. The study finds no contradictions between the predictions of the bond percolation model and the observed properties of the available data. Suggestions are made for further research and for techniques based on the proposed model which may be used for tuning the effects of anti-epileptic drugs.
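The spanning-cluster criterion at the heart of the model can be illustrated with ordinary bond percolation on a square lattice at a fixed bond probability (the model itself takes the probability to be Gaussian-distributed and the lattice to represent neural connectivity). A union-find sketch with invented parameters:

```python
import random

def percolates(n, p, rng):
    """Open each bond of an n x n square lattice with probability p and
    test for a cluster spanning left to right, using union-find."""
    parent = list(range(n * n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n and rng.random() < p:
                union(i, i + 1)     # horizontal bond
            if r + 1 < n and rng.random() < p:
                union(i, i + n)     # vertical bond
    left = {find(r * n) for r in range(n)}
    return any(find(r * n + n - 1) in left for r in range(n))

rng = random.Random(0)
# Spanning probability at sub- and super-critical bond probabilities
lo = sum(percolates(20, 0.3, rng) for _ in range(100)) / 100
hi = sum(percolates(20, 0.7, rng) for _ in range(100)) / 100
print(lo, hi)
```

On the square lattice the bond percolation threshold is p = 1/2, so the spanning probability jumps from near 0 to near 1 across it; in the proposed model, fluctuations of the bond probability across this threshold correspond to seizure onset.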

  3. HiVy automated translation of stateflow designs for model checking verification

    NASA Technical Reports Server (NTRS)

    Pingree, Paula

    2003-01-01

    The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as the tool set's intermediate format.

  4. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection are included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that on hand-generated pictures the algorithm worked well at following straight lines, but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.
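A construction of this kind can be sketched as a trie followed by a bottom-up merge of equivalent states; for a finite set of strings this yields the minimal acyclic automaton accepting exactly that set. This illustrates the general technique, not necessarily the report's specific algorithm; all names are invented.

```python
def minimal_dfa(words):
    """Minimal acyclic automaton for a finite set of strings: build a
    trie, then merge states bottom-up whenever they share the same
    acceptance flag and outgoing transitions."""
    trans = [{}]        # state 0 = root; each state is a dict of transitions
    accept = [False]
    for w in words:     # trie construction
        s = 0
        for ch in w:
            if ch not in trans[s]:
                trans.append({})
                accept.append(False)
                trans[s][ch] = len(trans) - 1
            s = trans[s][ch]
        accept[s] = True

    register = {}       # signature -> representative state
    def merge(s):
        for ch, t in list(trans[s].items()):
            trans[s][ch] = merge(t)
        sig = (accept[s], tuple(sorted(trans[s].items())))
        return register.setdefault(sig, s)

    root = merge(0)

    def accepts(w):
        s = root
        for ch in w:
            if ch not in trans[s]:
                return False
            s = trans[s][ch]
        return accept[s]

    return accepts, len(set(register.values()))

accepts, n = minimal_dfa(["stop", "stops", "top", "tops"])
print(n, accepts("tops"), accepts("sto"))
```

Here the shared suffixes ("op", "ops") collapse, so the four words are recognized by 6 states rather than the 10 states of the raw trie.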

  5. The Ablowitz–Ladik system on a finite set of integers

    NASA Astrophysics Data System (ADS)

    Xia, Baoqiang

    2018-07-01

    We show how to solve initial-boundary value problems for integrable nonlinear differential–difference equations on a finite set of integers. The method we employ is the discrete analogue of the unified transform (Fokas method). The implementation of this method to the Ablowitz–Ladik system yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem, which has a jump matrix with explicit -dependence involving certain functions referred to as spectral functions. Some of these functions are defined in terms of the initial value, while the remaining spectral functions are defined in terms of two sets of boundary values. These spectral functions are not independent but satisfy an algebraic relation called global relation. We analyze the global relation to characterize the unknown boundary values in terms of the given initial and boundary values. We also discuss the linearizable boundary conditions.

  6. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  7. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
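A minimal sketch of the discrete-time rescaling: for a Bernoulli (discretized Poisson) spike train with per-bin spike probability p_k, summing -log(1 - p_k) over the bins between spikes approximates the continuous-time rescaling, with a bias that grows with bin-level probability. The constant-probability train and the KS computation below are illustrative assumptions, not the paper's GLM fits.

```python
import numpy as np

def rescaled_times(spike_bins, p):
    """Discrete-time rescaling: accumulate -log(1 - p_k) per bin and
    emit the accumulated value at each spike."""
    q = -np.log(1.0 - p)
    taus, acc = [], 0.0
    for spiked, qk in zip(spike_bins, q):
        acc += qk
        if spiked:
            taus.append(acc)
            acc = 0.0
    return np.array(taus)

rng = np.random.default_rng(0)
p = np.full(200000, 0.02)          # constant spike probability per bin
spikes = rng.random(len(p)) < p    # Bernoulli spike train
taus = rescaled_times(spikes, p)

# KS statistic of the rescaled ISIs against Exp(1): transform through the
# model CDF and compare with the uniform quantiles
u = np.sort(1.0 - np.exp(-taus))
n = len(u)
ks = np.max(np.abs(u - (np.arange(1, n + 1) - 0.5) / n))
print(n, ks)
```

With p = 0.02 per bin the discretization bias is small, but repeating this with larger per-bin probabilities inflates the KS statistic even though the model is exactly correct, which is the failure mode the two proposed corrections address.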

  8. Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.

  9. Modal Substructuring of Geometrically Nonlinear Finite Element Models with Interface Reduction

    DOE PAGES

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    2017-03-29

    Substructuring methods have been widely used in structural dynamics to divide large, complicated finite element models into smaller substructures. For linear systems, many methods have been developed to reduce the subcomponents down to a low order set of equations using a special set of component modes, and these are then assembled to approximate the dynamics of a large scale model. In this paper, a substructuring approach is developed for coupling geometrically nonlinear structures, where each subcomponent is drastically reduced to a low order set of nonlinear equations using a truncated set of fixed-interface and characteristic constraint modes. The method used to extract the coefficients of the nonlinear reduced order model (NLROM) is non-intrusive in that it does not require any modification to the commercial FEA code, but computes the NLROM from the results of several nonlinear static analyses. The NLROMs are then assembled to approximate the nonlinear differential equations of the global assembly. The method is demonstrated on the coupling of two geometrically nonlinear plates with simple supports at all edges. The plates are joined at a continuous interface through the rotational degrees-of-freedom (DOF), and the nonlinear normal modes (NNMs) of the assembled equations are computed to validate the models. The proposed substructuring approach reduces a 12,861 DOF nonlinear finite element model down to only 23 DOF, while still accurately reproducing the first three NNMs of the full order model.

  10. The Development of a Finite Volume Method for Modeling Sound in Coastal Ocean Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Wen; Yang, Zhaoqing; Copping, Andrea E.

    With the rapid growth of marine renewable energy and offshore wind energy, there have been concerns that the noise generated by construction and operation of the devices may interfere with marine animals' communication. In this research, an underwater sound model is developed to simulate sound propagation generated by marine-hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite volume and finite difference methods are developed to solve the 3D Helmholtz equation of sound propagation in the coastal environment. For the finite volume method, the grid system consists of triangular grids in the horizontal plane and sigma layers in the vertical dimension. A 3D sparse matrix solver with complex coefficients is formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method is applied to solve the matrix system iteratively and efficiently, with MPI parallelization on a high-performance cluster. The sound model is then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities in a range-dependent setting, such as offshore wind energy platform constructions and tidal stream turbines. As a proof of concept, initial validation of the finite difference solver is presented for two coastal wedge problems. Validation of the finite volume method will be reported separately.

  11. [Progression on finite element modeling method in scoliosis].

    PubMed

    Fan, Ning; Zang, Lei; Hai, Yong; Du, Peng; Yuan, Shuo

    2018-04-25

    Scoliosis is a complex three-dimensional spinal deformity with complicated pathogenesis, often associated with complications such as thoracic deformity and shoulder imbalance. Because the acquisition of specimens or animal models is difficult, the biomechanical study of scoliosis is limited. In recent years, along with the development of computer technology, software, and imaging, the technique of establishing a finite element model of the human spine has matured, providing strong support for research on the pathogenesis of scoliosis, the design and application of braces, and the selection of surgical methods. The finite element modeling method is gradually becoming an important tool in the biomechanical study of scoliosis. Establishing a high-quality finite element model is the basis of analysis and future study. However, the finite element modeling process can be complex and modeling methods vary greatly. Choosing the appropriate modeling method according to the research objectives has become the researcher's primary task. In this paper, the authors review the national and international literature of recent years and summarize the finite element modeling methods used in scoliosis, including data acquisition, establishment of the geometric model, material properties, parameter settings, validation of the finite element model, and so on. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.

  12. Air Vehicles Division Computational Structural Analysis Facilities Policy and Guidelines for Users

    DTIC Science & Technology

    2005-05-01

    "Thermal" as appropriate and the tolerance set to "default". b) Create the model geometry. c) Create the finite elements. d) Create the... linear, non-linear, dynamic, thermal, acoustic analysis. The modelling of composite materials, creep, fatigue and plasticity are also covered... perform professional, high-quality finite element analysis (FEA). FE analysts from many tasks within AVD are using the facilities to conduct FEA with

  13. Dispersion-relation-preserving finite difference schemes for computational acoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1993-01-01

    Time-marching dispersion-relation-preserving (DRP) schemes can be constructed by optimizing the finite difference approximations of the space and time derivatives in wave number and frequency space. A set of radiation and outflow boundary conditions compatible with the DRP schemes is constructed, and a sequence of numerical simulations is conducted to test the effectiveness of the DRP schemes and the radiation and outflow boundary conditions. Close agreement with the exact solutions is obtained.
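    The wavenumber-space optimization underlying DRP schemes can be sketched numerically. The code below is an illustration of the design principle, not the published Tam-Webb coefficients: a 7-point antisymmetric stencil is constrained to fourth-order accuracy, and the one remaining free coefficient is found by minimizing the dispersion error of the modified wavenumber over the resolved band (here assumed to be k in [0, pi/2] in grid units):

    ```python
    import numpy as np

    # 7-point antisymmetric stencil: f'(x) ~ (1/h) sum_{j=1..3} a_j (f_{i+j} - f_{i-j}).
    # Its modified wavenumber is kbar(k) = 2 sum_j a_j sin(j k), k in grid units.
    ks = np.linspace(0.0, np.pi / 2, 400)

    def disp_err(a):
        """Summed squared dispersion error (kbar - k)^2 over the resolved band."""
        kbar = 2.0 * sum(a[j] * np.sin((j + 1) * ks) for j in range(3))
        return np.sum((kbar - ks) ** 2)

    def coeffs(a3):
        """Impose 4th-order accuracy: 2(a1 + 2 a2 + 3 a3) = 1 and a1 + 8 a2 + 27 a3 = 0."""
        A = np.array([[2.0, 4.0], [1.0, 8.0]])
        b = np.array([1.0 - 6.0 * a3, -27.0 * a3])
        a1, a2 = np.linalg.solve(A, b)
        return np.array([a1, a2, a3])

    # Standard 6th-order central stencil, for comparison: (45, -9, 1)/60.
    std = np.array([45.0, -9.0, 1.0]) / 60.0

    # 1-D search over the free coefficient a3.
    grid = np.linspace(-0.05, 0.1, 3001)
    best = min((coeffs(t) for t in grid), key=disp_err)
    ```

    The optimized stencil trades formal order of accuracy for a lower dispersion error over the resolved wavenumbers, which is exactly the DRP trade-off; the paper applies the same idea to the time discretization as well.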

  14. A comparison of two finite element models of tidal hydrodynamics using a North Sea data set

    USGS Publications Warehouse

    Walters, R.A.; Werner, F.E.

    1989-01-01

    Using the region of the English Channel and the southern bight of the North Sea, we systematically compare the results of two independent finite element models of tidal hydrodynamics. The model intercomparison provides a means for increasing our understanding of the relevant physical processes in the region in question as well as a means for the evaluation of certain algorithmic procedures of the two models. © 1989.

  15. Elastic Behavior of a Rubber Layer Bonded between Two Rigid Spheres.

    DTIC Science & Technology

    1988-05-01

    Cracking, Composites, Compressibility, Deformation, Dilatancy, Elasticity, Elastomers, Failure, Fracture, Particle Reinforcement, Rubber, Stress Analysis. ... Finite element methods (FEM) have been employed to calculate the stresses... deformations set up by compression or extension of the layer, using finite element methods (FEM) and not invoking the condition of incompressibility

  16. Probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response that is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probability density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainty. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress treated as a random field, are given.
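    The second-order perturbation idea can be made concrete on a deliberately tiny example. The sketch below is not the paper's ten-bar structure: it propagates the mean and variance of an assumed random Young's modulus through the closed-form response of a single axially loaded bar, then checks the perturbation estimates against Monte Carlo sampling (all numerical values are invented for illustration):

    ```python
    import numpy as np

    # Tip displacement of an axially loaded bar: u(E) = F L / (E A).
    # Second-order perturbation about the mean mu with input variance var:
    #   E[u]   ~ u(mu) + 0.5 * u''(mu) * var
    #   Var[u] ~ (u'(mu))^2 * var
    F, L, A = 1.0e4, 2.0, 1.0e-3           # load, length, cross-section (assumed)
    mu_E, sd_E = 70.0e9, 3.5e9             # mean / std. dev. of E (assumed, 5% c.o.v.)

    u = lambda E: F * L / (E * A)
    du = lambda E: -F * L / (E**2 * A)
    d2u = lambda E: 2.0 * F * L / (E**3 * A)

    mean_u = u(mu_E) + 0.5 * d2u(mu_E) * sd_E**2   # second-order mean
    var_u = du(mu_E) ** 2 * sd_E**2                # first-order (perturbation) variance

    # Monte Carlo check with a normal input (decaying tails, as the method requires).
    rng = np.random.default_rng(0)
    samples = u(rng.normal(mu_E, sd_E, 200_000))
    ```

    As the abstract notes, the approximation is good precisely because the scale of randomness here is small (5% coefficient of variation); for large input scatter the perturbation series degrades.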

  17. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  18. Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.

    1988-01-01

    An efficient numerical program incorporating comprehensive high-temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time-marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite-rate approach, eleven species equations coupled with the fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical-equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction-product species are calculated by the methods of statistical mechanics and fitted to a polynomial form for Cp. In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences between the shock-layer chemical compositions predicted by the finite-rate and local-equilibrium approaches have been observed.

  19. An Independent Filter for Gene Set Testing Based on Spectral Enrichment.

    PubMed

    Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H

    2015-01-01

    Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.

  20. Statistics of gravitational lenses - The uncertainties

    NASA Technical Reports Server (NTRS)

    Mao, Shude

    1991-01-01

    The assumptions in the analysis of gravitational lensing statistics are examined. Special emphasis is given to the uncertainties in the theoretical predictions. It is shown that a simple redshift cutoff model, which may result from galaxy evolution, can significantly reduce the lensing probability and explain the large mean separation of images in observed gravitational lenses. This effect may affect the constraint on the contribution of the cosmological constant to producing a flat universe from the number counts of the observed lenses. For the Omega(0) = 1 (filled beam) model, the lensing probability of early-type galaxies with finite core radii is reduced roughly by a factor of 2 for high-redshift quasars as compared with the corresponding singular isothermal sphere model. The finite core radius effect is about 20 percent for a lambda-dominated flat universe. It is also shown that the most recent galaxy luminosity function gives lensing probabilities that are smaller than previously estimated roughly by a factor of 3.

  1. Unifying quantum heat transfer in a nonequilibrium spin-boson model with full counting statistics

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Ren, Jie; Cao, Jianshu

    2017-02-01

    To study the full counting statistics of quantum heat transfer in a driven nonequilibrium spin-boson model, we develop a generalized nonequilibrium polaron-transformed Redfield equation with an auxiliary counting field. This enables us to study the impact of qubit-bath coupling ranging from weak to strong regimes. Without external modulations, we observe maximal values of both the steady-state heat flux and the noise power in the moderate-coupling regime, below which these two transport quantities are enhanced by a finite qubit energy bias. With external modulations, the geometric-phase-induced heat flux shows a monotonic decrease upon increasing the qubit-bath coupling at zero qubit energy bias, whereas at finite qubit energy bias it exhibits an interesting reversal behavior in the strong-coupling regime. Our results unify the seemingly contradictory results of the weak and strong qubit-bath coupling regimes and provide a detailed dissection of the quantum fluctuations of nonequilibrium heat transfer.

  2. Scaling in biomechanical experimentation: a finite similitude approach.

    PubMed

    Ochoa-Cabrero, Raul; Alonso-Rasgado, Teresa; Davey, Keith

    2018-06-01

    Biological experimentation has many obstacles: resource limitations, unavailability of materials, manufacturing complexities and ethical compliance issues; any approach that resolves all or some of these is of interest. The aim of this study is to apply the recently discovered concept of finite similitude as a novel approach to the design of scaled biomechanical experiments, supported by analysis using a commercial finite-element package and validated by means of image-correlation software. The study of isotropic scaling of synthetic bones leads to the selection of three-dimensional (3D) printed materials as the trial-space materials. Materials conforming to the theory are analysed in finite-element models of cylinder and femur geometries undergoing compression, tension, torsion and bending tests to assess the efficacy of the approach using its reverse scaling. The finite-element results show similar strain patterns on the surface of the cylinder, with a maximum difference of less than 10%, and of the femur, with a maximum difference of less than 4%, across all tests. Finally, the trial-space physical experimentation using 3D printed materials for compression and bending testing shows good agreement in a Bland-Altman statistical analysis, providing supporting evidence for the practicality of the approach. © 2018 The Author(s).

  3. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  4. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Chambers, Jeffrey A.

    1994-01-01

    Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means of performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to the mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross-orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross-orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross-orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
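    The cross-orthogonality check itself is a one-line matrix product once the mode shapes are in hand. The sketch below uses a toy 3-DOF spring-mass chain (all matrices invented for illustration, not the flight structure of the thesis): with mass-normalized analytical modes PHI_a and "test" modes PHI_t, the check is C = PHI_t^T M PHI_a, and diagonal entries near 1 with small off-diagonals indicate good correlation:

    ```python
    import numpy as np

    M = np.diag([2.0, 1.0, 1.5])                       # mass matrix (assumed)
    K = np.array([[ 4.0, -2.0,  0.0],
                  [-2.0,  5.0, -3.0],
                  [ 0.0, -3.0,  3.0]]) * 1.0e4         # stiffness matrix (assumed)

    # Generalized eigenproblem K phi = w^2 M phi via the M^(-1/2) transform
    # (valid here because M is diagonal).
    Mh = np.diag(np.diag(M) ** -0.5)
    w2, Q = np.linalg.eigh(Mh @ K @ Mh)
    PHI_a = Mh @ Q                                     # mass-normalized: PHI^T M PHI = I

    # Simulated "test" modes: analytical modes plus small measurement noise.
    rng = np.random.default_rng(1)
    PHI_t = PHI_a + 0.01 * rng.standard_normal(PHI_a.shape)

    C = PHI_t.T @ M @ PHI_a                            # cross-orthogonality matrix
    ```

    In the proposed procedure the columns of PHI_t would instead come from normalized static deformations driven by equivalent local accelerations, but the final check is this same product.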

  5. Accuracy of topological entanglement entropy on finite cylinders.

    PubMed

    Jiang, Hong-Chen; Singh, Rajiv R P; Balents, Leon

    2013-09-06

    Topological phases are unique states of matter that support nonlocal excitations behaving as particles with fractional statistics. A universal characterization of gapped topological phases is provided by the topological entanglement entropy (TEE). We study the finite-size corrections to the TEE by focusing on systems with a Z2 topologically ordered state using density-matrix renormalization group and perturbative series expansions. We find that extrapolations of the TEE based on the Renyi entropies with a Renyi index of n≥2 suffer from much larger finite-size corrections than do extrapolations based on the von Neumann entropy. In particular, when the circumference of the cylinder is about ten times the correlation length, the TEE obtained using the von Neumann entropy has an error of order 10(-3), while for Renyi entropies it can even exceed 40%. We discuss the relevance of these findings to previous and future searches for topologically ordered phases, including quantum spin liquids.

  6. Identification of PARMA Models and Their Application to the Modeling of River flows

    NASA Astrophysics Data System (ADS)

    Tesfaye, Y. G.; Meerschaert, M. M.; Anderson, P. L.

    2004-05-01

    The generation of synthetic river flow samples that reproduce the essential statistical features of historical river flows is central to the planning, design and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. We therefore employ a periodic ARMA (PARMA) model. The innovations algorithm can be used to obtain parameter estimates for PARMA models with finite fourth moment, as well as with infinite fourth moment but finite variance. Anderson and Meerschaert (2003) provide a method for model identification when the time series has finite fourth moment. This article, an extension of that previous work, demonstrates the effectiveness of the technique using simulated data. An application to monthly flow data for the Fraser River in British Columbia is also included to illustrate the use of these methods.
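    The periodic stationarity described above is easy to see in the simplest member of the PARMA family, a periodic AR(1) with period 12. The sketch below uses invented seasonal parameters, not estimates from any river record: the AR coefficient and innovation scale repeat each year, so the variance is a function of the month only:

    ```python
    import numpy as np

    # Periodic AR(1): X_t = phi_s X_{t-1} + sigma_s eps_t, with season s = t mod 12.
    phi = np.array([0.5, 0.3, 0.6, 0.4, 0.2, 0.5, 0.7, 0.3, 0.4, 0.6, 0.5, 0.2])
    sigma = np.array([1.0, 1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 0.7, 1.4, 1.2, 0.8])

    rng = np.random.default_rng(42)
    n_years = 2000
    x = np.zeros(12 * n_years)
    for t in range(1, len(x)):
        s = t % 12
        x[t] = phi[s] * x[t - 1] + sigma[s] * rng.standard_normal()

    # Periodic stationarity: the same calendar month has the same variance
    # across years.  Drop the first year as burn-in, then group by month.
    monthly = x[12:].reshape(-1, 12)
    month_var = monthly.var(axis=0)
    ```

    The seasonal variances satisfy the periodic recursion v_s = phi_s^2 v_{s-1} + sigma_s^2 (indices mod 12), which the simulated series reproduces; a full PARMA fit via the innovations algorithm would estimate phi and sigma (and moving-average terms) from such data rather than assume them.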

  7. Numerical analysis for finite-range multitype stochastic contact financial market dynamic systems

    NASA Astrophysics Data System (ADS)

    Yang, Ge; Wang, Jun; Fang, Wen

    2015-04-01

    In an attempt to reproduce and study the dynamics of financial markets, a random agent-based financial price model is developed and investigated via a finite-range multitype contact dynamic system, in which the interaction and dispersal of different types of investment attitudes in a stock market are imitated by the spreading of viruses. With different birth rates and interaction ranges, normalized return series are simulated by the Monte Carlo method and studied numerically by power-law distribution analysis and autocorrelation analysis. To better understand the nonlinear dynamics of the return series, a q-order autocorrelation function and a multi-autocorrelation function are also defined in this work. Comparisons of the statistical behavior of return series from the agent-based model with the daily historical market returns of the Shanghai Composite Index and the Shenzhen Component Index indicate that the proposed model gives a reasonable qualitative explanation of the price-formation process in stock market systems.

  8. [Analysis of the movement of long axis and the distribution of principal stress in abutment tooth retained by conical telescope].

    PubMed

    Lin, Ying-he; Man, Yi; Qu, Yi-li; Guan, Dong-hua; Lu, Xuan; Wei, Na

    2006-01-01

    To study the movement along the long axis and the distribution of principal stress in the abutment teeth of a removable partial denture retained by a conical telescope, an ideal three-dimensional finite element model was constructed using SCT image-reconstruction techniques, self-developed programs, and the ANSYS software. Static loads were applied, and the displacement along the long axis and the distribution of principal stress in the abutment teeth were analyzed. There was no statistical difference in displacement or stress distribution among the different three-dimensional finite element models. Generally, the abutment teeth move along their own long axes. Similar stress distributions were observed in each three-dimensional finite element model. The maximal principal compressive stress was observed at the distal cervix of the second premolar. The abutment teeth can be well protected by the use of a conical telescope.

  9. Sufficient condition for a finite-time singularity in a high-symmetry Euler flow: Analysis and statistics

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Bhattacharjee, A.

    1996-08-01

    A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (pxx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (pxxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrates that, for fixed total energy, pxxxx is predominantly positive, with the average value growing with the number of modes.

  10. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  11. Local existence of solutions to the Euler-Poisson system, including densities without compact support

    NASA Astrophysics Data System (ADS)

    Brauer, Uwe; Karp, Lavi

    2018-01-01

    Local existence and well-posedness for a class of solutions of the Euler-Poisson system are shown. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass and finite energy functional, and include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new nonlinear estimate which allows the physical density to be estimated by the regularised nonlinear matter variable. Gamblin has also studied this setting, but using very different functional spaces; however, we believe that the functional setting used here is more appropriate for describing a physically isolated body and more suitable for studying the Newtonian limit.

  12. Eshelby's problem of a spherical inclusion eccentrically embedded in a finite spherical body

    PubMed Central

    He, Q.-C.

    2017-01-01

    Resorting to the superposition principle, the solution of Eshelby's problem of a spherical inclusion located eccentrically inside a finite spherical domain is obtained in two steps: (i) the solution to the problem of a spherical inclusion in an infinite space; (ii) the solution to the auxiliary problem of the corresponding finite spherical domain subjected to appropriate boundary conditions. Moreover, a set of functions called the sectional and harmonic deviators are proposed and developed to work out the auxiliary solution in a series form, including the displacement and Eshelby tensor fields. The analytical solutions are explicitly obtained and illustrated when the geometric and physical parameters and the boundary condition are specified. PMID:28293141

  13. Strongly magnetized classical plasma models

    NASA Technical Reports Server (NTRS)

    Montgomery, D. C.

    1972-01-01

    The class of plasma processes for which the so-called Vlasov approximation is inadequate is investigated. Results from the equilibrium statistical mechanics of two-dimensional plasmas are derived. These results are independent of the presence of an external dc magnetic field. The nonequilibrium statistical mechanics of the electrostatic guiding-center plasma, a two-dimensional plasma model, is discussed. This model is then generalized to three dimensions. The guiding-center model is relaxed to include finite Larmor radius effects for a two-dimensional plasma.

  14. Signatures of fractional exclusion statistics in the spectroscopy of quantum Hall droplets.

    PubMed

    Cooper, Nigel R; Simon, Steven H

    2015-03-13

    We show how spectroscopic experiments on a small Laughlin droplet of rotating bosons can directly demonstrate Haldane fractional exclusion statistics of quasihole excitations. The characteristic signatures appear in the single-particle excitation spectrum. We show that the transitions are governed by a "many-body selection rule" which allows one to relate the number of allowed transitions to the number of quasihole states on a finite geometry. We illustrate the theory with numerically exact simulations of small numbers of particles.

  15. Modified two-sources quantum statistical model and multiplicity fluctuation in the finite rapidity region

    NASA Astrophysics Data System (ADS)

    Ghosh, Dipak; Sarkar, Sharmila; Sen, Sanjib; Roy, Jaya

    1995-06-01

    In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted by simple quantum statistical properties of the emitting system using the concept of the "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data of 16Ag/Br and 24Ag/Br interactions at a few tens of GeV energy regime.
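    The factorial-moment analysis behind the intermittency discussion can be sketched on synthetic data (not the emulsion data of the paper). For the multiplicity n in a rapidity bin, the second scaled factorial moment is F_2 = <n(n-1)> / <n>^2; purely statistical (Poissonian) emission gives F_2 = 1 at every bin size, so a rise of F_q as the window shrinks signals genuine dynamical correlations. All rates below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_events, y_max = 20_000, 2.0          # events, rapidity half-range (assumed)

    def f2(counts):
        """Second scaled factorial moment <n(n-1)> / <n>^2 of bin multiplicities."""
        n = counts.astype(float)
        return (n * (n - 1.0)).mean() / n.mean() ** 2

    # Horizontally averaged F_2 for successively finer rapidity partitions of
    # uncorrelated (Poisson) emission with density 3 particles per unit rapidity.
    f2_by_bins = {}
    for m in (2, 4, 8, 16):                # number of rapidity bins
        width = 2.0 * y_max / m
        counts = rng.poisson(3.0 * width, size=(n_events, m))
        f2_by_bins[m] = np.mean([f2(counts[:, j]) for j in range(m)])
    ```

    Here f2_by_bins stays flat near 1 for all partitions, the no-intermittency baseline; in the quantum statistical picture of the modified two-source model, the chaoticity and source structure instead push F_q above this baseline as the window shrinks.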

  16. Modal Parameter Identification and Numerical Simulation for Self-anchored Suspension Bridges Based on Ambient Vibration

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Sun, Li Guo

    2018-06-01

    This paper takes the Nanjing-Hangzhou high-speed overbridge, a self-anchored suspension bridge, as its research target, identifying the dynamic characteristic parameters of the bridge by applying the peak-picking method to velocity response data collected under ambient excitation by 7 vibration pickup sensors set on the bridge deck. ABAQUS is used to set up a three-dimensional finite element model of the full bridge, and the model is updated based on the identified modal parameters and the suspender forces measured by the PDV100 laser vibrometer. The study shows that the modal parameters can be identified well from the bridge vibration velocities collected at the 7 survey points. The identified modal parameters and the measured suspender forces can be used as the basis for updating the finite element model of the suspension bridge. The updated model faithfully reflects the structure's physical features and can serve as the benchmark model for long-term health monitoring and condition assessment of the bridge.
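    The peak-picking step can be sketched on a synthetic record (not the measured bridge data): under broadband ambient excitation, the response spectrum peaks at the structure's natural frequencies, so picking local maxima of the spectrum identifies them directly. The frequencies, noise level, and threshold below are all assumed for illustration:

    ```python
    import numpy as np

    fs, T = 100.0, 200.0                      # sampling rate (Hz), duration (s)
    t = np.arange(0.0, T, 1.0 / fs)
    rng = np.random.default_rng(3)

    f1, f2 = 1.8, 4.6                         # "true" modal frequencies (assumed)
    v = (np.sin(2 * np.pi * f1 * t + rng.uniform(0, 2 * np.pi))
         + 0.6 * np.sin(2 * np.pi * f2 * t + rng.uniform(0, 2 * np.pi))
         + 0.3 * rng.standard_normal(t.size))      # measurement noise

    spec = np.abs(np.fft.rfft(v)) ** 2            # power spectrum of the velocity
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

    # Peak picking: local maxima above a threshold, restricted to < 10 Hz.
    band = freqs < 10.0
    peaks = [i for i in range(1, int(band.sum()) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
             and spec[i] > 0.1 * spec[band].max()]
    picked = freqs[peaks]
    ```

    In practice the spectra from all 7 sensors would be averaged before picking, and the picked frequencies (with the suspender forces) then drive the finite element model update.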

  17. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    This paper investigates the use of automatic differentiation (AD) as a means of generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; this method produces derivatives to machine accuracy at a cost that is comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and do not depend on the selection of a step size, as do the derivatives obtained with finite-differencing techniques.
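    The chain-rule mechanics that ADIFOR implements by source transformation can be illustrated with forward-mode dual numbers. The toy class below is a stand-in for the idea, not ADIFOR itself: each value carries its derivative, every operation applies the chain rule, and the result is exact with no step-size choice, unlike the finite difference shown alongside:

    ```python
    import math

    class Dual:
        """Value together with its derivative; operations apply the chain rule."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.val), math.cos(self.val) * self.der)

    def f(x):                       # stand-in for an analysis routine
        return x * x * x.sin()      # f(x) = x^2 sin(x)

    x0 = 1.3
    ad = f(Dual(x0, 1.0)).der       # forward-mode AD: seed dx/dx = 1

    h = 1e-6                        # finite differencing needs a step size
    fd = (f(Dual(x0 + h)).val - f(Dual(x0 - h)).val) / (2 * h)

    exact = 2 * x0 * math.sin(x0) + x0**2 * math.cos(x0)
    ```

    The AD derivative matches the analytic one to machine precision, while the central difference carries truncation and roundoff error that depends on h, which is the trade-off the abstract describes.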

  18. Numerical simulation of temperature field in K9 glass irradiated by ultraviolet pulse laser

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Fang, Xiaodong

    2015-10-01

    Optical components of photoelectric systems are easily damaged by irradiation from high-power pulse lasers, so the effect of high-power pulse laser irradiation on K9 glass was studied. A thermodynamic model of K9 glass irradiated by an ultraviolet pulse laser was established using the finite element software ANSYS. The article analyzes some key problems in the ANSYS simulation of ultraviolet-pulse-laser damage to K9 glass: construction of the finite element model, meshing, loading of the pulse laser, setting of initial and boundary conditions, and setting of the thermophysical parameters of the material. A finite element method (FEM) model was established and a numerical analysis performed to calculate the temperature field in K9 glass irradiated by an ultraviolet pulse laser. The simulation results showed that the temperature of the irradiated area exceeded the melting point of K9 glass even though the incident laser energy was low. Thermal damage dominates the damage mechanism of K9 glass, and the melting phenomenon should be much the most distinct.

  19. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    NASA Astrophysics Data System (ADS)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  20. Stabilization of a locally minimal forest

    NASA Astrophysics Data System (ADS)

    Ivanov, A. O.; Mel'nikova, A. E.; Tuzhilin, A. A.

    2014-03-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles.

  1. Exact finite volume expectation values of \overline{Ψ}Ψ in the massive Thirring model from light-cone lattice correlators

    NASA Astrophysics Data System (ADS)

    Hegedűs, Árpád

    2018-03-01

    In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator \overline{Ψ}Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators being located at neighboring sites of the lattice. The operator \overline{Ψ}Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can be computed also from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.

  2. Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam

    NASA Astrophysics Data System (ADS)

    Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa

    2017-08-01

    In this article, a wavelet-based spectral finite element (WSFE) model is formulated for time-domain and wave-domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to the conventional FFT-based spectral finite element (SFE) model, except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of Daubechies wavelet basis functions helps to rule out problems of the SFE model due to the periodicity assumption, especially during the inverse Fourier transformation back to the time domain. The high accuracy of the WSFE model is then evaluated by comparing its results with those of conventional finite element and SFE analyses. The effects of moving-beam speed and axial tensile force on the vibration and wave characteristics, and on the static and dynamic stabilities, of the moving beam are investigated.

  3. Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George

    2016-11-01

    We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a set of N finite-dimensional, time-dependent, orthonormal basis functions {u_i(x, t)}, i = 1, ..., N, that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained by projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in the computational cost compared to conventional methods for computing FTLE. We demonstrate the efficiency of our method for double-gyre and ABC flows. ARO project 66710-EG-YIP.
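
    For contrast with the OTD approach, the conventional FTLE recipe (advect neighboring tracers, differentiate the flow map, take the leading Cauchy-Green eigenvalue) can be sketched for the double-gyre flow. Parameter values below are the commonly used test values, asserted here only as an illustration.

```python
# Standard finite-difference FTLE for the double-gyre flow: the costly
# grid-of-trajectories approach that the OTD reduction avoids.
import math

A, eps, om = 0.1, 0.25, 2 * math.pi / 10   # usual double-gyre parameters

def vel(x, y, t):
    a = eps * math.sin(om * t)
    b = 1 - 2 * a
    f = a * x * x + b * x
    dfdx = 2 * a * x + b
    u = -math.pi * A * math.sin(math.pi * f) * math.cos(math.pi * y)
    v = math.pi * A * math.cos(math.pi * f) * math.sin(math.pi * y) * dfdx
    return u, v

def advect(x, y, t0, T, n=50):             # RK4 integration of one tracer
    h, t = T / n, t0
    for _ in range(n):
        k1 = vel(x, y, t)
        k2 = vel(x + h/2*k1[0], y + h/2*k1[1], t + h/2)
        k3 = vel(x + h/2*k2[0], y + h/2*k2[1], t + h/2)
        k4 = vel(x + h*k3[0], y + h*k3[1], t + h)
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return x, y

def ftle(x, y, t0=0.0, T=10.0, d=1e-4):
    # flow-map Jacobian F by central differences of four neighbours
    xr, yr = advect(x + d, y, t0, T); xl, yl = advect(x - d, y, t0, T)
    xu, yu = advect(x, y + d, t0, T); xb, yb = advect(x, y - d, t0, T)
    a11, a12 = (xr - xl) / (2*d), (xu - xb) / (2*d)
    a21, a22 = (yr - yl) / (2*d), (yu - yb) / (2*d)
    # largest eigenvalue of the Cauchy-Green tensor C = F^T F
    c11, c12, c22 = a11*a11 + a21*a21, a11*a12 + a21*a22, a12*a12 + a22*a22
    lam = 0.5 * (c11 + c22 + math.sqrt((c11 - c22)**2 + 4*c12*c12))
    return math.log(lam) / (2 * abs(T))

val = ftle(1.0, 0.5)   # a point near the central separatrix
```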

  4. Joint Target Detection and Tracking Filter for Chilbolton Advanced Meteorological Radar Data Processing

    NASA Astrophysics Data System (ADS)

    Pak, A.; Correa, J.; Adams, M.; Clark, D.; Delande, E.; Houssineau, J.; Franco, J.; Frueh, C.

    2016-09-01

    Recently, the growing number of inactive Resident Space Objects (RSOs), or space debris, has provoked increased interest in the field of Space Situational Awareness (SSA) and various investigations of new methods for orbital object tracking. In comparison with conventional tracking scenarios, state estimation of an orbiting object entails additional challenges, such as orbit determination and orbital state and covariance propagation in the presence of highly nonlinear system dynamics. The sensors which are available for detecting and tracking space debris are prone to multiple clutter measurements. Added to this problem is the fact that it is unknown whether or not a space-debris-type target is present within such sensor measurements. Under these circumstances, traditional single-target filtering solutions such as Kalman filters fail to produce useful trajectory estimates. The recent Random Finite Set (RFS) based Finite Set Statistics (FISST) framework has yielded filters which are more appropriate for such situations. The RFS-based Joint Target Detection and Tracking (JoTT) filter, also known as the Bernoulli filter, is a single-target, multiple-measurement filter capable of dealing with cluttered and time-varying backgrounds as well as modeling target appearance and disappearance in the scene. Therefore, this paper presents the application of the Gaussian mixture-based JoTT filter for processing measurements from the Chilbolton Advanced Meteorological Radar (CAMRa) which contain both defunct and operational satellites. The CAMRa is a fully-steerable radar located in southern England, which was recently modified to be used as a tracking asset in the European Space Agency SSA program. The experiments conducted show promising results regarding the capability of such filters in processing cluttered radar data. The work carried out in this paper was funded by USAF Grant No. FA9550-15-1-0069, Chilean Conicyt Fondecyt grant number 1150930, an EU Erasmus Mundus MSc Scholarship, and the Defense Science and Technology Laboratory (DSTL), U.K.
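
    The existence-probability bookkeeping at the heart of a JoTT/Bernoulli filter can be sketched for a scalar Gaussian state with Poisson clutter. This toy version, with all parameter values assumed and a crude best-measurement Kalman update, is far simpler than the Gaussian-mixture radar implementation used in the paper.

```python
# Toy Bernoulli (JoTT) filter step: propagate an existence probability
# q and a scalar Gaussian state (m, P) through clutter-prone scans.
import math, random

def gauss(z, m, var):
    return math.exp(-0.5 * (z - m)**2 / var) / math.sqrt(2 * math.pi * var)

def jott_step(q, m, P, meas, pd=0.9, ps=0.95, pb=0.05,
              F=1.0, Q=0.1, R=0.2, clutter=0.5):
    # prediction: target may survive (ps) or be newly born (pb)
    q = pb * (1 - q) + ps * q
    m, P = F * m, F * P * F + Q
    # update: weight each measurement against the predicted density,
    # normalized by the clutter intensity
    s = sum(gauss(z, m, P + R) / clutter for z in meas)
    q = q * (1 - pd + pd * s) / (1 - q * pd + q * pd * s)
    # crude spatial update with the single best-matching measurement
    z = max(meas, key=lambda z: gauss(z, m, P + R))
    K = P / (P + R)
    m, P = m + K * (z - m), (1 - K) * P
    return q, m, P

random.seed(0)
q, m, P = 0.1, 0.0, 1.0
truth = 2.0
for _ in range(10):
    meas = [truth + random.gauss(0, 0.45)]             # target detection
    meas += [random.uniform(-5, 5) for _ in range(2)]  # clutter points
    q, m, P = jott_step(q, m, P, meas)
# q now reflects the accumulated evidence that a target is present
```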

  5. A study of the response of nonlinear springs

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Knott, T. W.; Johnson, E. R.

    1991-01-01

    The various phases of developing a methodology for studying the response of a spring-reinforced arch subjected to a point load are discussed. The arch is simply supported at its ends, with both the spring and the point load assumed to be at midspan. The spring is present to offset the typical snap-through behavior normally associated with arches, and to provide a structure that responds with constant resistance over a finite displacement. The phases discussed consist of the following: (1) development of the closed-form solution for the shallow arch case; (2) development of a finite difference analysis to study (shallow) arches; and (3) development of a finite element analysis for studying more general shallow and nonshallow arches. The two numerical analyses rely on a continuation scheme to move the solution past limit points, and to move onto bifurcated paths, both characteristics being common to the arch problem. An eigenvalue method is used as the continuation scheme. The finite difference analysis is based on a mixed formulation (force and displacement variables) of the governing equations. The governing equations for the mixed formulation are in first order form, making the finite difference implementation convenient. However, the mixed formulation is not well suited to the eigenvalue continuation scheme; this provided the motivation for the displacement-based finite element analysis. Both the finite difference and the finite element analyses are compared with the closed-form shallow arch solution. Agreement is excellent, except for the potential problems with the finite difference analysis and the continuation scheme. Agreement between the finite element analysis and another investigator's numerical analysis for deep arches is also good.

  6. Annual Review of Research Under the Joint Services Electronics Program.

    DTIC Science & Technology

    1983-12-01

    Total Number of Professionals: PI 2, RA 2 (1/2 time). 6. Summary: Our research into the theory of nonlinear control systems and applications to... known that all linear time-invariant controllable systems can be transformed to Brunovsky canonical form by a transformation consisting only of... estimating the impulse response (= transfer matrix) of a discrete-time linear system x(t+1) = Fx(t) + Gu(t), y(t) = Hx(t) from a finite set of finite

  7. On certain families of rational functions arising in dynamics

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.

    1979-01-01

    It is noted that linear systems, depending on parameters, can occur in diverse situations including families of rational solutions to the Korteweg-de Vries equation or to the finite Toda lattice. The inverse scattering method used by Moser (1975) to obtain canonical coordinates for the finite homogeneous Toda lattice can be used for the synthesis of RC networks. It is concluded that the multivariable RC setting is ideal for the analysis of the periodic Toda lattice.

  8. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  9. Nuclear Deformation at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Alhassid, Y.; Gilbreth, C. N.; Bertsch, G. F.

    2014-12-01

    Deformation, a key concept in our understanding of heavy nuclei, is based on a mean-field description that breaks the rotational invariance of the nuclear many-body Hamiltonian. We present a method to analyze nuclear deformations at finite temperature in a framework that preserves rotational invariance. The auxiliary-field Monte Carlo method is used to generate a statistical ensemble and calculate the probability distribution associated with the quadrupole operator. Applying the technique to nuclei in the rare-earth region, we identify model-independent signatures of deformation and find that deformation effects persist to temperatures higher than the spherical-to-deformed shape phase-transition temperature of mean-field theory.

  10. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641

  11. Matching a Distribution by Matching Quantiles Estimation

    PubMed Central

    Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia

    2015-01-01

    Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on the ordinary least-squares estimation (OLS) is proposed to compute MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching within a certain range of quantiles to match a part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with and without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application in selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO. PMID:26692592
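
    The iterative OLS step of MQE can be sketched for a toy two-variable case without the LASSO penalty: order the current fitted combination, pair its order statistics with the target's, and refit. By the rearrangement inequality this loop cannot increase the quantile-matching objective. The data below are synthetic and the details are an illustration of the idea, not the paper's exact algorithm.

```python
# Toy matching-quantiles iteration: alternate between sorting the
# fitted linear combination and refitting the coefficients by OLS.
import random

random.seed(3)
n = 400
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
target = sorted(2.0 * a + 1.0 * b for a, b in zip(x1, x2))  # known mix

def obj(b):
    # mean squared distance between the two sets of order statistics
    fit = sorted(b[0] * a + b[1] * c for a, c in zip(x1, x2))
    return sum((f - t) ** 2 for f, t in zip(fit, target)) / n

def ols2(rows, y):
    # normal equations for two predictors, no intercept
    s11 = sum(r[0] * r[0] for r in rows); s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * yi for r, yi in zip(rows, y))
    t2 = sum(r[1] * yi for r, yi in zip(rows, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

b = (1.0, 0.0)                                  # initial guess
obj0 = obj(b)
for _ in range(30):
    fit = [b[0] * a + b[1] * c for a, c in zip(x1, x2)]
    order = sorted(range(n), key=lambda i: fit[i])
    rows = [(x1[i], x2[i]) for i in order]      # rows in quantile order
    b = ols2(rows, target)                      # refit against target quantiles
obj1 = obj(b)                                   # never exceeds obj0
```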

  12. Position-momentum uncertainty relations in the presence of quantum memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furrer, Fabian, E-mail: furrer@eve.phys.s.u-tokyo.ac.jp; Berta, Mario; Institute for Theoretical Physics, ETH Zurich, Wolfgang-Pauli-Str. 27, 8093 Zürich

    2014-12-15

    A prominent formulation of the uncertainty principle identifies the fundamental quantum feature that no particle may be prepared with certain outcomes for both position and momentum measurements. Often the statistical uncertainties are thereby measured in terms of entropies, providing a clear operational interpretation in information theory and cryptography. Recently, entropic uncertainty relations have been used to show that the uncertainty can be reduced in the presence of entanglement and to prove security of quantum cryptographic tasks. However, much of this recent progress has been focused on observables with only a finite number of outcomes, not including Heisenberg's original setting of position and momentum observables. Here, we show entropic uncertainty relations for general observables with discrete but infinite or continuous spectrum that take into account the power of an entangled observer. As an illustration, we evaluate the uncertainty relations for position and momentum measurements, which is operationally significant in that it implies security of a quantum key distribution scheme based on homodyne detection of squeezed Gaussian states.

  13. A Survey of Recent Advances in Particle Filters and Remaining Challenges for Multitarget Tracking

    PubMed Central

    Wang, Xuedong; Sun, Shudong; Corchado, Juan M.

    2017-01-01

    We review some advances of the particle filtering (PF) algorithm that have been achieved in the last decade in the context of target tracking, with regard to either a single target or multiple targets in the presence of false or missing data. The first part of our review is on remarkable achievements that have been made for the single-target PF from several aspects including importance proposal, computing efficiency, particle degeneracy/impoverishment and constrained/multi-modal systems. The second part of our review is on analyzing the intractable challenges raised within general multitarget (multi-sensor) tracking due to random target birth and termination, false alarm, misdetection, measurement-to-track (M2T) uncertainty and track uncertainty. The mainstream multitarget PF approaches fall into two main classes: those based on M2T association, and those that avoid explicit association, such as the finite set statistics-based PF. In either case, significant challenges remain due to unknown tracking scenarios and integrated tracking management. PMID:29168772
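
    The single-target bootstrap PF that the surveyed advances build on fits in a few lines: propagate the particles, weight them by the measurement likelihood, and resample. The sketch below uses an assumed 1-D linear model with Gaussian noise purely for illustration.

```python
# Minimal bootstrap particle filter for a 1-D state with additive
# Gaussian noise; multinomial resampling fights weight degeneracy.
import math, random

random.seed(1)

def propagate(x):                 # process model with process noise
    return 0.9 * x + random.gauss(0, 0.5)

def likelihood(z, x, r=0.5):      # Gaussian measurement likelihood
    return math.exp(-0.5 * (z - x)**2 / r**2)

n = 500
particles = [random.gauss(0, 2) for _ in range(n)]
truth = 1.0
for _ in range(20):
    truth = 0.9 * truth + random.gauss(0, 0.5)   # simulate the target
    z = truth + random.gauss(0, 0.5)             # noisy measurement
    particles = [propagate(x) for x in particles]
    w = [likelihood(z, x) for x in particles]
    tot = sum(w)
    w = [wi / tot for wi in w]
    # multinomial resampling: duplicate heavy particles, drop light ones
    particles = random.choices(particles, weights=w, k=n)

est = sum(particles) / n   # posterior-mean estimate of the state
```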

  14. A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.

    PubMed

    Consiglio, J D; Shan, G; Wilding, G E

    2014-01-01

    Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.
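
    For orientation, a plain Cochran-Armitage-style trend statistic with equally spaced scores and a Monte Carlo permutation p-value can be sketched on toy data. This is not Bartholomew's chi-bar-squared construction, which is more involved, and the permutation here conditions on both sets of marginal totals, as in the paper's conditional approach.

```python
# Trend in binomial proportions across ordered groups: observed
# score-weighted statistic plus a permutation reference distribution.
import random

random.seed(2)
events = [2, 5, 9]          # successes per ordered group (toy data)
totals = [20, 20, 20]
scores = [0, 1, 2]          # equally spaced scores (an arbitrary choice)

def trend_stat(ev):
    n, s = sum(totals), sum(ev)
    # sum of scores times (observed - expected) successes per group
    return sum(c * (e - t * s / n) for c, e, t in zip(scores, ev, totals))

obs = trend_stat(events)

# permutation: shuffle the pooled 0/1 outcomes over the fixed groups
pool = []
for e, t in zip(events, totals):
    pool += [1] * e + [0] * (t - e)
hits, reps = 0, 2000
for _ in range(reps):
    random.shuffle(pool)
    ev, start = [], 0
    for t in totals:
        ev.append(sum(pool[start:start + t])); start += t
    if trend_stat(ev) >= obs:
        hits += 1
p = hits / reps   # one-sided p-value for an increasing trend
```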

  15. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.
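
    The MCMC idea can be caricatured on a toy hypothesis space: random-walk Metropolis-Hastings over "which of k candidate objects exist", scored by a made-up likelihood. The real SSMCMC filter samples far richer multi-target hypotheses (states, associations, cardinality), but the accept/reject mechanics are the same.

```python
# Metropolis-Hastings over binary existence hypotheses; the chain's
# visit frequencies estimate marginal existence probabilities.
import math, random

random.seed(4)
k = 5
true_set = (1, 1, 0, 1, 0)           # toy ground truth

def log_lik(h):
    # toy score rewarding agreement with a fixed ground truth
    return sum((3.0 if hi == ti else -3.0) for hi, ti in zip(h, true_set))

h = tuple([0] * k)                   # start with "nothing exists"
counts = [0] * k
steps = 4000
for _ in range(steps):
    i = random.randrange(k)
    prop = list(h); prop[i] ^= 1     # propose flipping one object
    prop = tuple(prop)
    # accept with probability min(1, lik(prop)/lik(h))
    if math.log(random.random()) < log_lik(prop) - log_lik(h):
        h = prop
    for j in range(k):
        counts[j] += h[j]
marg = [c / steps for c in counts]   # marginal existence probabilities
```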

  16. B_K with two flavors of dynamical overlap fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aoki, S.; Riken BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973; Fukaya, H.

    2008-05-01

    We present a two-flavor QCD calculation of B_K on a 16^3 x 32 lattice at a ≈ 0.12 fm (or equivalently a^{-1} = 1.67 GeV). Both valence and sea quarks are described by the overlap fermion formulation. The matching factor is calculated nonperturbatively with the so-called RI/MOM scheme. We find that the lattice data are well described by next-to-leading order (NLO) partially quenched chiral perturbation theory (PQChPT) up to around a half of the strange quark mass (m_s^phys/2). The data at quark masses heavier than m_s^phys/2 are fitted including a part of the next-to-next-to-leading order terms. We obtain B_K^(MS-bar)(2 GeV) = 0.537(4)(40), where the first error is statistical and the second is an estimate of systematic uncertainties from finite volume, fixing topology, the matching factor, and the scale setting.

  17. Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arpaia, P., E-mail: pasquale.arpaia@unina.it; Technology Department, European Organization for Nuclear Research; Girone, M., E-mail: mario.girone@cern.ch

    2015-12-15

    The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this end, an uncertainty model of the transducer, mainly based on a valve model exploiting a finite-element approach and a virtual flowmeter model based on the Sereg-Schlumberger method, is presented. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer metrological performance is assessed by a sensitivity analysis, based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis turns out to be important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.

  18. The search for permanent electric dipole moments, in particular for the one of the neutron

    ScienceCinema

    Kirch, Klaus

    2018-05-04

    Nonzero permanent electric dipole moments (EDMs) of fundamental systems like particles, nuclei, atoms or molecules violate parity and time reversal invariance. Invoking the CPT theorem, time reversal violation implies CP violation. Although CP violation is implemented in the standard electro-weak theory, EDMs generated this way remain undetectably small. However, this CP violation also appears to fail to explain the observed baryon asymmetry of our universe. Extensions of the standard theory usually include new CP-violating phases which often lead to the prediction of larger EDMs. EDM searches in different systems are complementary, and various efforts worldwide are underway, but no finite value has been established yet. An improved search for the EDM of the neutron requires, among other things, much better statistics. At PSI, we are presently commissioning a new high-intensity source of ultracold neutrons. At the same time, with an international collaboration, we are setting up a new measurement of the neutron EDM, which is starting this year.

  19. Identification and Simulation of Subsurface Soil patterns using hidden Markov random fields and remote sensing and geophysical EMI data sets

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Wellmann, Florian; Verweij, Elizabeth; von Hebel, Christian; van der Kruk, Jan

    2017-04-01

    Lateral and vertical spatial heterogeneity of subsurface properties such as soil texture and structure influences the available water and resource supply for crop growth. High-resolution mapping of subsurface structures using non-invasive geo-referenced geophysical measurements, like electromagnetic induction (EMI), enables a characterization of 3D soil structures, which have shown correlations to remote sensing information of the crop states. The benefit of EMI is that it can return 3D subsurface information; however, its spatial coverage is limited by the labor-intensive measurement procedure. Active and passive sensors mounted on air- or space-borne platforms return only 2D images, but over much larger spatial extents. Combining both approaches provides us with a potential pathway to extend the detailed 3D geophysical information to a larger area by using remote sensing information. In this study, we aim at extracting and providing insights into the spatial and statistical correlation of the geophysical and remote sensing observations of the soil/vegetation continuum system. To this end, two key points need to be addressed: 1) how to detect and recognize the geometric patterns (i.e., spatial heterogeneity) from multiple data sets, and 2) how to quantitatively describe the statistical correlation between remote sensing information and geophysical measurements. In the current study, the spatial domain is restricted to shallow depths of up to 3 meters, and the geostatistical database contains the normalized difference vegetation index (NDVI) derived from RapidEye satellite images and apparent electrical conductivities (ECa) measured with multi-receiver EMI sensors for nine depths of exploration ranging from 0 to 2.7 m. The integrated data sets are mapped into both the physical space (i.e. the spatial domain) and feature space (i.e. a two-dimensional space framed by the NDVI and the ECa data).
Hidden Markov Random Fields (HMRF) are employed to model the underlying heterogeneities in spatial domain and finite Gaussian mixture models are adopted to quantitatively describe the statistical patterns in terms of center vectors and covariance matrices in feature space. A recently developed parallel stochastic clustering algorithm is adopted to implement the HMRF models and the Markov chain Monte Carlo based Bayesian inference. Certain spatial patterns such as buried paleo-river channels covered by shallow sediments are investigated as typical examples. The results indicate that the geometric patterns of the subsurface heterogeneity can be represented and quantitatively characterized by HMRF. Furthermore, the statistical patterns of the NDVI and the EMI data from the soil/vegetation-continuum system can be inferred and analyzed in a quantitative manner.
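
    The feature-space side of this pipeline, a finite Gaussian mixture fitted by EM, can be sketched in one dimension with two components on synthetic data. The spatial HMRF coupling and the MCMC inference of the paper are omitted; only the center/spread/weight bookkeeping is shown.

```python
# Two-component 1-D Gaussian mixture fitted by expectation-maximization
# on synthetic, well-separated data.
import math, random

random.seed(5)
data = [random.gauss(-2, 0.5) for _ in range(150)] + \
       [random.gauss(3, 0.8) for _ in range(150)]

def pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

w, m, s = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]   # initial guesses
for _ in range(50):
    # E-step: responsibilities of each component for each point
    r = [[w[k] * pdf(x, m[k], s[k]) for k in (0, 1)] for x in data]
    r = [[a / (a + b), b / (a + b)] for a, b in r]
    # M-step: weighted means, standard deviations, and mixing weights
    for k in (0, 1):
        nk = sum(ri[k] for ri in r)
        m[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
        var = sum(ri[k] * (x - m[k]) ** 2 for ri, x in zip(r, data)) / nk
        s[k] = max(math.sqrt(var), 1e-6)        # guard against collapse
        w[k] = nk / len(data)
# m now holds the two cluster centers, w their mixing proportions
```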

  20. Autoregressive modeling for the spectral analysis of oceanographic data

    NASA Technical Reports Server (NTRS)

    Gangopadhyay, Avijit; Cornillon, Peter; Jackson, Leland B.

    1989-01-01

    Over the last decade there has been a dramatic increase in the number and volume of data sets useful for oceanographic studies. Many of these data sets consist of long temporal or spatial series derived from satellites and large-scale oceanographic experiments. These data sets are, however, often 'gappy' in space, irregular in time, and always of finite length. The conventional Fourier transform (FT) approach to spectral analysis is thus often inapplicable, or where applicable, it provides questionable results. Here, through comparative analysis with the FT for different oceanographic data sets, the possibilities offered by autoregressive (AR) modeling for performing spectral analysis of gappy, finite-length series are discussed. The applications demonstrate that as the length of the time series becomes shorter, the resolving power of the AR approach improves relative to that of the FT. For the longest data sets examined here, 98 points, the AR method performed only slightly better than the FT, but for the very short ones, 17 points, the AR method showed a dramatic improvement over the FT. The application of the AR method to a gappy time series, although a secondary concern of this manuscript, further underlines the value of this approach.
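
    A minimal AR (maximum-entropy) spectral estimate via the order-2 Yule-Walker equations illustrates the short-series behavior discussed. The 17-point test series below is synthetic, and the fixed model order is an arbitrary choice for the sketch.

```python
# AR(2) spectral estimate from the Yule-Walker equations on a short
# synthetic series; the spectrum peaks near the dominant frequency.
import math

def autocov(x, lag):
    n = len(x); m = sum(x) / n
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

def yule_walker2(x):
    r0, r1, r2 = (autocov(x, k) for k in (0, 1, 2))
    # solve [[r0, r1], [r1, r0]] [a1, a2]^T = [r1, r2]^T
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    sig2 = max(r0 - a1 * r1 - a2 * r2, 1e-12)   # guard against round-off
    return a1, a2, sig2

def ar_spectrum(a1, a2, sig2, f):               # f in cycles/sample
    e1 = complex(math.cos(2 * math.pi * f), -math.sin(2 * math.pi * f))
    return sig2 / abs(1 - a1 * e1 - a2 * e1 * e1) ** 2

# short sinusoid-plus-perturbation series, only 17 points
x = [math.sin(2 * math.pi * 0.1 * i) + 0.1 * math.cos(7 * i)
     for i in range(17)]
a1, a2, s2 = yule_walker2(x)
peak_f = max((f / 200 for f in range(1, 100)),
             key=lambda f: ar_spectrum(a1, a2, s2, f))
```

Even with 17 samples the AR spectrum concentrates near the true frequency, whereas an FT of the same record would be limited to a frequency resolution of roughly 1/17 cycles per sample.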

  1. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A contained in R^d (d <

  2. A Hybrid Numerical Analysis Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Staroselsky, Alexander

    2001-01-01

    A new hybrid surface-integral-finite-element numerical scheme has been developed to model a three-dimensional crack propagating through a thin, multi-layered coating. The finite element method was used to model the physical state of the coating (far field), and the surface integral method was used to model the fatigue crack growth. The two formulations are coupled through the need to satisfy boundary conditions on the crack surface and the external boundary. The coupling is sufficiently weak that the surface integral mesh of the crack surface and the finite element mesh of the uncracked volume can be set up independently. Thus when modeling crack growth, the finite element mesh can remain fixed for the duration of the simulation as the crack mesh is advanced. This method was implemented to evaluate the feasibility of fabricating a structural health monitoring system for real-time detection of surface cracks propagating in engine components. In this work, the authors formulate the hybrid surface-integral-finite-element method and discuss the mechanical issues of implementing a structural health monitoring system in an aircraft engine environment.

  3. A particle finite element method for machining simulations

    NASA Astrophysics Data System (ADS)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

  4. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    In this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments is conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. The numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  5. A Least-Squares-Based Weak Galerkin Finite Element Method for Second Order Elliptic Equations

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2017-08-17

    In this article, we introduce a least-squares-based weak Galerkin finite element method for the second order elliptic equation. This new method is shown to provide very accurate numerical approximations for both the primal and the flux variables. In contrast to other existing least-squares finite element methods, this new method allows us to use discontinuous approximating functions on finite element partitions consisting of arbitrary polygon/polyhedron shapes. We also develop a Schur complement algorithm for the resulting discretization problem by eliminating all the unknowns that represent the solution information in the interior of each element. Optimal order error estimates for both the primal and the flux variables are established. An extensive set of numerical experiments is conducted to demonstrate the robustness, reliability, flexibility, and accuracy of the least-squares-based weak Galerkin finite element method. The numerical examples cover a wide range of applied problems, including singularly perturbed reaction-diffusion equations and the flow of fluid in porous media with strong anisotropy and heterogeneity.

  6. Finite Element Analysis of Tube Hydroforming in Non-Symmetrical Dies

    NASA Astrophysics Data System (ADS)

    Nulkar, Abhishek V.; Gu, Randy; Murty, Pilaka

    2011-08-01

    Tube hydroforming has been studied intensively using commercial finite element programs. A great deal of this work has dealt with models with symmetric cross-sections. It is known that additional constraints due to symmetry may be imposed on the model so that it is properly supported. For a non-symmetric model, these constraints become invalid and the model lacks sufficient support, resulting in a singular finite element system. The majority of commercial codes have limited capability for solving models with insufficient supports. Recently, new algorithms using a penalty variable and an air-like contact element (ALCE) have been developed to solve positive semi-definite finite element systems such as those in contact mechanics. In this study the ALCE algorithm is first validated by comparing its result against a commercial code using a symmetric model in which a circular tube is formed into polygonal dies with symmetric shapes. The study then investigates the accuracy and efficiency of using ALCE to analyze hydroforming of tubes with various cross-sections in non-symmetrical dies in 2-D finite element settings.

  7. The Acquisition of Inflection: A Parameter-Setting Approach

    ERIC Educational Resources Information Center

    Hyams, Nina

    2008-01-01

    First written in 1986, prior to the many findings concerning the optionality of finiteness and the root infinitive phenomenon, this article attempts to extend the parameter-setting model of grammatical development to the acquisition of inflectional morphology. I propose that the Stem Parameter, which states that a stem is/is not a well-formed word…

  8. Many Denjoy minimal sets for monotone recurrence relations

    NASA Astrophysics Data System (ADS)

    Wang, Ya-Nan; Qin, Wen-Xin

    2014-09-01

    We extend Mather's work (1985 Comment. Math. Helv. 60 508-57) to high-dimensional cylinder maps defined by monotone recurrence relations, e.g. the generalized Frenkel-Kontorova model with finite range interactions. We construct uncountably many Denjoy minimal sets provided that the Birkhoff minimizers with some irrational rotation number ω do not form a foliation.

  9. On the entanglement entropy of quantum fields in causal sets

    NASA Astrophysics Data System (ADS)

    Belenchia, Alessio; Benincasa, Dionigi M. T.; Letizia, Marco; Liberati, Stefano

    2018-04-01

    In order to understand the detailed mechanism by which a fundamental discreteness can provide a finite entanglement entropy, we consider the entanglement entropy of two classes of free massless scalar fields on causal sets that are well approximated by causal diamonds in Minkowski spacetime of dimensions 2, 3 and 4. The first class is defined from discretised versions of the continuum retarded Green functions, while the second uses the causal set's retarded nonlocal d'Alembertians parametrised by a length scale l_k. In both cases we provide numerical evidence that the area law is recovered when the double-cutoff prescription proposed in Sorkin and Yazdi (2016 Entanglement entropy in causal set theory (arXiv:1611.10281)) is imposed. We discuss in detail the need for this double cutoff by studying the effect of the two cutoffs on the quantum field and, in particular, on the entanglement entropy, in isolation. In so doing, we obtain a novel interpretation of why these two cutoffs are necessary, and of the different roles they play in making the entanglement entropy on causal sets finite.

  10. Finite Nuclei in the Quark-Meson Coupling Model.

    PubMed

    Stone, J R; Guichon, P A M; Reinhard, P G; Thomas, A W

    2016-03-04

    We report the first use of the effective quark-meson coupling (QMC) energy density functional (EDF), derived from a quark model of hadron structure, to study a broad range of ground state properties of even-even nuclei across the periodic table in the nonrelativistic Hartree-Fock+BCS framework. The novelty of the QMC model is that the nuclear medium effects are treated through modification of the internal structure of the nucleon. The density dependence is microscopically derived and the spin-orbit term arises naturally. The QMC EDF depends on a single set of four adjustable parameters having a clear physics basis. When applied to diverse ground state data, the QMC EDF already produces, in its present simple form, overall agreement with experiment of a quality comparable to a representative Skyrme EDF. There exist, however, multiple Skyrme parameter sets, frequently tailored to describe selected nuclear phenomena. The QMC EDF, with the smaller parameter set derived in this work, is not open to such variation: the chosen set is applied, without adjustment, to both the properties of finite nuclei and nuclear matter.

  11. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  12. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating their superiority to conventional disc electrodes, in particular in the accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error, resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to the novel variable inter-ring distances concentric ring electrodes. A full factorial analysis of variance design was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of levels of the three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using inter-ring distances as a means of improving the accuracy of Laplacian estimation.
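
    The significance testing above rests on comparing between-factor variance to residual variance. As a minimal sketch (not the study's full three-factor factorial model), the following computes a one-way ANOVA F statistic for hypothetical Relative Error samples from three invented inter-ring configurations:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: mean square between groups divided
    by mean square within groups (a simplified stand-in for the full
    factorial design described in the abstract)."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k, n = len(groups), all_data.size
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(7)
# Hypothetical Relative Error samples (units arbitrary); the constant-distance
# group is deliberately given a higher mean than the variable-distance ones.
constant  = 5.0 + rng.standard_normal(30)
linear    = 4.0 + rng.standard_normal(30)
quadratic = 3.9 + rng.standard_normal(30)
F = one_way_anova_F([constant, linear, quadratic])
```

    An F well above the 5% critical value (about 3.1 for 2 and 87 degrees of freedom) would indicate a statistically significant factor effect, mirroring the study's conclusion for the inter-ring distances factor.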

  13. Nonlinear micromechanics-based finite element analysis of the interfacial behaviour of FRP-strengthened reinforced concrete beams

    NASA Astrophysics Data System (ADS)

    Abd El Baky, Hussien

    This research work is devoted to theoretical and numerical studies on the flexural behaviour of FRP-strengthened concrete beams. The objectives of this research are to extend and generalize the results of simple experiments, to recommend new design guidelines based on accurate numerical tools, and to enhance our comprehension of the bond performance of such beams. These numerical tools can be exploited to bridge the existing gaps in the development of analysis and modelling approaches that can predict the behaviour of FRP-strengthened concrete beams. The research effort here begins with the formulation of a concrete model and development of FRP/concrete interface constitutive laws, followed by finite element simulations for beams strengthened in flexure. Finally, a statistical analysis is carried out, taking advantage of the aforesaid numerical tools, to propose design guidelines. In this dissertation, an alternative incremental formulation of the M4 microplane model is proposed to overcome the computational complexities associated with the original formulation. Through a number of numerical applications, this incremental formulation is shown to be equivalent to the original M4 model. To assess the computational efficiency of the incremental formulation, the "arc-length" numerical technique is also considered and implemented in the original Bazant et al. [2000] M4 formulation. Finally, the M4 microplane concrete model is coded in FORTRAN and implemented as a user-defined subroutine into the commercial software package ADINA, Version 8.4. This subroutine is then used with the finite element package to analyze various applications involving FRP strengthening. In the first application a nonlinear micromechanics-based finite element analysis is performed to investigate the interfacial behaviour of FRP/concrete joints subjected to direct shear loadings. The intention of this part is to develop a reliable bond--slip model for the FRP/concrete interface. 
The bond--slip relation is developed considering the interaction between the interfacial normal and shear stress components along the bonded length. A new approach is proposed to describe the entire tau-s relationship based on three separate models. The first model captures the shear response of an orthotropic FRP laminate. The second model simulates the shear characteristics of an adhesive layer, while the third model represents the shear nonlinearity of a thin layer inside the concrete, referred to as the interfacial layer. The proposed bond--slip model reflects the geometrical and material characteristics of the FRP, concrete, and adhesive layers. Two-dimensional and three-dimensional nonlinear displacement-controlled finite element (FE) models are then developed to investigate the flexural and FRP/concrete interfacial responses of FRP-strengthened reinforced concrete beams. The three-dimensional finite element model is created to accommodate cases of beams having FRP anchorage systems. Discrete interface elements are proposed and used to simulate the FRP/concrete interfacial behaviour before and after cracking. The FE models are capable of simulating the various failure modes, including debonding of the FRP either at the plate end or at intermediate cracks. Particular attention is focused on the effect of crack initiation and propagation on the interfacial behaviour. This study leads to an accurate and refined interpretation of the plate-end and intermediate crack debonding failure mechanisms for FRP-strengthened beams with and without FRP anchorage systems. Finally, the FE models are used to conduct a parametric study to generalize the findings of the FE analysis. The variables under investigation include two material characteristics; namely, the concrete compressive strength and axial stiffness of the FRP laminates as well as three geometric properties; namely, the steel reinforcement ratio, the beam span length and the beam depth. 
The parametric study is followed by a statistical analysis for 43 strengthened beams involving the five aforementioned variables. The response surface methodology (RSM) technique is employed to optimize the accuracy of the statistical models while minimizing the numbers of finite element runs. In particular, a face-centred design (FCD) is applied to evaluate the influence of the critical variables on the debonding load and debonding strain limits in the FRP laminates. Based on these statistical models, a nonlinear statistical regression analysis is used to propose design guidelines for the FRP flexural strengthening of reinforced concrete beams. (Abstract shortened by UMI.)

  14. DOMAIN DECOMPOSITION METHOD APPLIED TO A FLOW PROBLEM Norberto C. Vera Guzmán Institute of Geophysics, UNAM

    NASA Astrophysics Data System (ADS)

    Vera, N. C.; GMMC

    2013-05-01

    In this paper we present results for macro-hybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and also provides results in less time, two very important considerations in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.

  15. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    NASA Astrophysics Data System (ADS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as the bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM the slowest.
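
    The iterate-match-align loop underlying ICP-style registration can be sketched in a few lines. This is a minimal rigid 2-D ICP in NumPy (not the study's MATLAB implementations, which handle affine and deformable transforms); the point sets, rotation angle, and shift are invented for illustration:

```python
import numpy as np

def icp_rigid(src, dst, n_iter=20):
    """Minimal rigid ICP: repeatedly match each source point to its closest
    destination point, then solve the best-fit rotation + translation for
    those correspondences via the Kabsch/Procrustes SVD step."""
    src = src.copy()
    for _ in range(n_iter):
        # Brute-force closest-point correspondences.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Best-fit rigid transform for the current correspondences.
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:       # guard against a reflection solution
            vt[-1] *= -1
            r = vt.T @ u.T
        t = mu_m - r @ mu_s
        src = src @ r.T + t
    return src

# Recover a small known rotation + translation of a random 2-D point set.
rng = np.random.default_rng(1)
pts = rng.random((40, 2))
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
target = pts @ R.T + np.array([0.02, -0.01])
aligned = icp_rigid(pts, target)
rms = np.sqrt(np.mean(np.sum((aligned - target) ** 2, axis=1)))
```

    Deformable variants such as CPD and TPS-RPM replace the rigid SVD step with soft probabilistic correspondences and non-rigid transform models, which is why they handled the bladder deformations better in the study.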

  16. Anderson transition in a three-dimensional kicked rotor

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; García-García, Antonio M.

    2009-03-01

    We investigate Anderson localization in a three-dimensional (3D) kicked rotor. By a finite-size scaling analysis we identify a mobility edge for a certain value of the kicking strength k = k_c. For k > k_c dynamical localization does not occur, all eigenstates are delocalized and the spectral correlations are well described by Wigner-Dyson statistics. This can be understood by mapping the kicked rotor problem onto a 3D Anderson model (AM) where a band of metallic states exists for sufficiently weak disorder. Around the critical region k ≈ k_c we carry out a detailed study of the level statistics and quantum diffusion. In agreement with the predictions of the one parameter scaling theory (OPT) and with previous numerical simulations, the number variance is linear, level repulsion is still observed, and quantum diffusion is anomalous with ⟨p²⟩ ∝ t^(2/3). We note that in the 3D kicked rotor the dynamics is not random but deterministic. In order to estimate the differences between these two situations we have studied a 3D kicked rotor in which the kinetic term of the associated evolution matrix is random. A detailed numerical comparison shows that the differences between the two cases are relatively small. However, in the deterministic case only a small set of irrational periods was used. A qualitative analysis of a much larger set suggests that deviations between the random and the deterministic kicked rotor can be important for certain choices of periods. Heuristically it is expected that localization effects will be weaker in a nonrandom potential, since destructive interference will be less effective in arresting quantum diffusion. However, we have found that certain choices of irrational periods enhance Anderson localization effects.
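
    The level-statistics diagnostic used to distinguish the two phases can be illustrated with the adjacent-gap ratio, a standard statistic that needs no spectral unfolding. The sketch below (an illustration of the diagnostic, not the kicked-rotor computation itself) contrasts a GOE-like random matrix spectrum, which shows Wigner-Dyson level repulsion as in the delocalized phase, with uncorrelated Poisson levels, characteristic of localization:

```python
import numpy as np

def mean_spacing_ratio(levels):
    """Mean adjacent-gap ratio <r>, r = min(s_i, s_{i+1}) / max(s_i, s_{i+1});
    known reference values are ~0.386 for Poisson and ~0.531 for GOE spectra."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(3)
n = 1000

# GOE-like random real symmetric matrix: Wigner-Dyson statistics (level repulsion).
a = rng.standard_normal((n, n))
r_goe = mean_spacing_ratio(np.linalg.eigvalsh(a + a.T))

# Uncorrelated levels: Poisson statistics, as produced by dynamical localization.
r_poisson = mean_spacing_ratio(rng.random(n))
```

    The clear gap between the two mean ratios is what makes this statistic useful for locating a mobility edge by finite-size scaling.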

  17. Roots and Rogues in German Child Language

    ERIC Educational Resources Information Center

    Duffield, Nigel

    2008-01-01

    This article is concerned with the proper characterization of subject omission at a particular stage in German child language. It focuses on post-verbal null subjects in finite clauses, here termed Rogues. It is argued that the statistically significant presence of Rogues, in conjunction with their distinct developmental profile, speaks against a…

  18. (Finite) statistical size effects on compressive strength.

    PubMed

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-04-29

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules.
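
    The weakest-link baseline that the authors show is violated under compression can be sketched with a toy simulation: a structure of N elements with independent Weibull-distributed strengths fails at its weakest element, so its mean strength vanishes as N grows, following extreme-value scaling. This is an illustration of the classical tensile prediction, not the paper's depinning model; the Weibull shape and scale are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def weakest_link_strength(n_elements, n_samples=4000):
    """Mean strength of a weakest-link structure of n_elements, each element
    having an i.i.d. Weibull(shape m=5, scale 1) strength: the structure
    fails when its weakest element fails."""
    strengths = rng.weibull(5.0, size=(n_samples, n_elements))
    return strengths.min(axis=1).mean()

sizes = [1, 10, 100, 1000]
means = [weakest_link_strength(n) for n in sizes]
# Extreme-value theory: the minimum of n Weibull(m) variables is again
# Weibull with scale n**(-1/m), so the mean strength decays as n**(-1/5)
# here, with no nonzero large-scale limit.
```

    Compressive failure, by contrast, saturates at a nonzero strength at large scales in the paper's data, which is what motivates the depinning-transition analogy instead of this extreme-value picture.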

  19. A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods

    NASA Technical Reports Server (NTRS)

    Saether, E.; Yamakov, V.; Glaessgen, E.

    2007-01-01

    Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
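
    The statistical-averaging step that distinguishes ESCM from direct atom-to-node coupling can be illustrated with a toy 1-D interface: noisy atomistic displacements are bin-averaged over FEM node regions to produce smooth displacement boundary conditions. The geometry, field, and noise level below are invented for illustration and are not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical MD interface: atom positions along the interface, with
# displacements = smooth continuum field (0.01 * x) + thermal fluctuations.
x_atoms = rng.uniform(0.0, 10.0, 20_000)
u_atoms = 0.01 * x_atoms + 0.05 * rng.standard_normal(x_atoms.size)

# ESCM-style averaging: bin atoms into the regions associated with FEM
# interface nodes and use the bin-averaged displacement as the boundary
# condition passed to the continuum region.
n_nodes = 10
edges = np.linspace(0.0, 10.0, n_nodes + 1)
idx = np.digitize(x_atoms, edges) - 1
u_bc = np.array([u_atoms[idx == i].mean() for i in range(n_nodes)])

# The averaged boundary conditions should recover the smooth underlying
# field at the bin centers, with the thermal noise averaged out.
centers = 0.5 * (edges[:-1] + edges[1:])
err = np.max(np.abs(u_bc - 0.01 * centers))
```

    In the full method this exchange runs in both directions: the continuum region returns interface reaction forces that are applied to the MD domain as piecewise constant tractions.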

  20. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
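
    The segmenting procedure described above can be sketched numerically. This is a minimal illustration (not the authors' implementation), using unit-variance white noise as the random process so that the stationarity and ergodicity checks are expected to pass:

```python
import numpy as np

rng = np.random.default_rng(0)
# Sufficiently long time history of a (stationary) random process.
x = rng.standard_normal(200_000)

# Segment into equal, finite, approximately independent sample records,
# forming an "equivalent ensemble".
n_records = 50
records = x.reshape(n_records, -1)      # 50 records x 4000 samples each

# Equivalent-ensemble averages at each instant within a record.
ens_mean = records.mean(axis=0)
ens_var = records.var(axis=0)

# Weak stationarity: the ensemble averages should be time invariant, so the
# first and second halves of the record should agree on average.
half = ens_mean.size // 2
mean_drift = abs(ens_mean[:half].mean() - ens_mean[half:].mean())
var_drift = abs(ens_var[:half].mean() - ens_var[half:].mean())

# Heuristic ergodicity estimate: the time average over a single sample
# record should match the equivalent-ensemble average.
ergodicity_gap = abs(records[0].mean() - ens_mean.mean())
```

    For a nonstationary signal (e.g. one with a linear trend), the same drift statistics would grow with the record length instead of staying near zero.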

  1. Sets that Contain Their Circle Centers

    ERIC Educational Resources Information Center

    Martin, Greg

    2008-01-01

    Say that a subset S of the plane is a "circle-center set" if S is not a subset of a line, and whenever we choose three non-collinear points from S, the center of the circle through those three points is also an element of S. A problem appearing on the Macalester College Problem of the Week website stated that a finite set of points in the plane,…
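
    The basic operation in checking the circle-center property is computing the circumcenter of three non-collinear points. A minimal sketch (the example points are invented for illustration):

```python
import numpy as np

def circumcenter(a, b, c):
    """Center of the circle through three non-collinear points in the plane.
    The center p is equidistant from all three points, which gives the
    linear system 2(b-a)·p = |b|²-|a|², 2(c-a)·p = |c|²-|a|²."""
    a, b, c = map(np.asarray, (a, b, c))
    m = 2.0 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(m, rhs)   # singular exactly when points are collinear

# Three points of the unit circle: the center must be the origin, so a
# circle-center set containing these three points must contain (0, 0).
p = circumcenter((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
```

    Iterating this operation over all non-collinear triples of a candidate set and checking membership of each resulting center is the direct way to test the closure property for a finite set.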

  2. Three-dimensional eddy current solution of a polyphase machine test model (abstract)

    NASA Astrophysics Data System (ADS)

    Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado

    1994-05-01

    This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from those of the analytical solution. Also the local distribution of the losses is at any given point less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.

  3. Finite Element Methods and Multiphase Continuum Theory for Modeling 3D Air-Water-Sediment Interactions

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.

    2016-12-01

    The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. They are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly the fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media, as well as erosion and deposition of granular media and moving-bed dynamics. 
For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and important qualitative features of the numerical methods including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.

  4. Frequency analysis via the method of moment functionals

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.; Pan, J. Q.

    1990-01-01

    Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.

  5. Three-dimensional elastic stress and displacement analysis of finite circular geometry solids containing cracks

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.; Mendelson, A.; Kring, J.

    1973-01-01

    A seminumerical method is presented for solving a set of coupled partial differential equations subject to mixed and coupled boundary conditions. The use of this method is illustrated by obtaining solutions for two circular geometry and mixed boundary value problems in three-dimensional elasticity. Stress and displacement distributions are calculated in an axisymmetric, circular bar of finite dimensions containing a penny-shaped crack. Approximate results for an annular plate containing internal surface cracks are also presented.

  6. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
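The coupled Riccati-like backward recursion described in this record can be sketched directly. The following is an illustrative sketch, not the paper's implementation; the function name and the per-mode matrix layout are our assumptions.

```python
import numpy as np

def coupled_riccati(A, B, Q, R, Pmat, T):
    """Backward recursion of the coupled Riccati-like difference equations
    for discrete-time Markovian-jump LQ control (illustrative sketch).

    A, B, Q, R: lists of per-mode numpy matrices; Pmat: mode transition
    probability matrix (numpy array). Returns the cost-to-go matrices P[i]
    at time 0 and the corresponding feedback gains K[i]."""
    m = len(A)
    P = [Q[i].copy() for i in range(m)]  # terminal condition P_i(T) = Q_i
    K = [None] * m
    for _ in range(T):
        # E_i = sum_j p_ij P_j : expected cost-to-go over the next mode
        E = [sum(Pmat[i, j] * P[j] for j in range(m)) for i in range(m)]
        Pn = []
        for i in range(m):
            S = R[i] + B[i].T @ E[i] @ B[i]
            K[i] = np.linalg.solve(S, B[i].T @ E[i] @ A[i])
            Pn.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K[i]))
        P = Pn
    return P, K
```

As the abstract states, when constant stabilizing laws exist the recursion settles to fixed matrices as the horizon grows, so iterating until `P` stops changing recovers the infinite-horizon controller.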

  7. Reduced modeling of flexible structures for decentralized control

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Tan, T. M.; Bahar, L. Y.; Konstantinidis, M. F.

    1986-01-01

    Based upon the modified finite element-transfer matrix method, this paper presents a technique for reduced modeling of flexible structures for decentralized control. The modeling decisions are carried out at (finite-) element level, and are dictated by control objectives. A simply supported beam with two sets of actuators and sensors (linear force actuator and linear position and velocity sensors) is considered for illustration. In this case, it is conjectured that the decentrally controlled closed loop system is guaranteed to be at least marginally stable.

  8. Finite-time stability of neutral-type neural networks with random time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable. The results can be extended to the analysis and design of neutral-type neural networks with random time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional, we establish a set of sufficient conditions in terms of linear matrix inequalities that guarantee the finite-time stability of the system concerned. The proposed conditions are derived by employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, and two numerical examples are given to demonstrate the effectiveness of the developed techniques.

  9. A path-oriented matrix-based knowledge representation system

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan; Karamouzis, Stamos T.

    1993-01-01

    Experience has shown that designing a good representation is often the key to turning hard problems into simple ones. Most AI (Artificial Intelligence) search/representation techniques are oriented toward an infinite domain of objects and arbitrary relations among them. In reality much of what needs to be represented in AI can be expressed using a finite domain and unary or binary predicates. Well-known vector- and matrix-based representations can efficiently represent finite domains and unary/binary predicates, and allow effective extraction of path information by generalized transitive closure/path matrix computations. In order to avoid space limitations a set of abstract sparse matrix data types was developed along with a set of operations on them. This representation forms the basis of an intelligent information system for representing and manipulating relational data.
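As this record notes, unary/binary predicates over a finite domain reduce to boolean vectors and matrices, and path information follows from generalized transitive closure. A minimal sketch using Warshall's algorithm (the function name is ours; the paper's sparse abstract data types are not reproduced):

```python
def transitive_closure(adj):
    """Warshall's algorithm: adj is an n x n boolean adjacency matrix
    encoding a binary predicate over a finite domain. Returns the path
    matrix, whose entry [i][j] is True iff a directed path leads i -> j."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach
```

A sparse-matrix version of the same loop is what makes the representation practical for large, mostly empty relation matrices.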

  10. Mutually unbiased projectors and duality between lines and bases in finite quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, M.; Vourdas, A., E-mail: a.vourdas@bradford.ac.uk

    2013-10-15

Quantum systems with variables in the ring Z(d) are considered, and the concepts of weak mutually unbiased bases and mutually unbiased projectors are discussed. The lines through the origin in the Z(d)×Z(d) phase space are classified into maximal lines (sets of d points) and sublines (sets of d_i points where d_i|d). The sublines are intersections of maximal lines. It is shown that there exists a duality between the properties of lines (resp., sublines) and the properties of weak mutually unbiased bases (resp., mutually unbiased projectors). -- Highlights: •Lines in discrete phase space. •Bases in finite quantum systems. •Duality between bases and lines. •Weak mutually unbiased bases.

  11. Cotton-Mouton effect and shielding polarizabilities of ethylene: An MCSCF study

    NASA Astrophysics Data System (ADS)

    Coriani, Sonia; Rizzo, Antonio; Ruud, Kenneth; Helgaker, Trygve

    1997-03-01

The static hypermagnetizabilities and nuclear shielding polarizabilities of the carbon and hydrogen atoms of ethylene have been computed using multiconfigurational linear-response theory and a finite-field method, in a mixed analytical-numerical approach. Extended sets of magnetic-field-dependent basis functions have been employed in large MCSCF calculations, involving active spaces giving rise to a few million configurations in the finite-field perturbed symmetry. The convergence of the observables with respect to the extension of the basis set as well as the effect of electron correlation have been investigated. Whereas for the shielding polarizabilities we can compare with other published SCF results, the ab initio estimates for the static hypermagnetizabilities and the observable to which they are related, the Cotton-Mouton constant, are presented for the first time.

  12. Influence of polygonal wear of railway wheels on the wheel set axle stress

    NASA Astrophysics Data System (ADS)

    Wu, Xingwen; Chi, Maoru; Wu, Pingbo

    2015-11-01

A coupled vehicle/track dynamic model with a flexible wheel set was developed to investigate the effects of polygonal wear on the dynamic stresses of the wheel set axle. In the model, the railway vehicle was modelled by rigid multibody dynamics. The wheel set was established by the finite element method to analyse the high-frequency oscillation and dynamic stress of the wheel set axle induced by the polygonal wear, based on the modal stress recovery method. A slab track model was taken into account, in which the rail was described by a Timoshenko beam and three-dimensional solid finite elements were employed to establish the concrete slab. Furthermore, the modal superposition method was adopted to calculate the dynamic response of the track. The wheel/rail normal forces and the tangent forces were, respectively, determined by the Hertz nonlinear contact theory and the Shen-Hedrick-Elkins model. Using the coupled vehicle/track dynamic model, the dynamic stresses of the wheel set axle, considering both ideal and measured polygonal wear, were investigated. The results show that the amplitude of the wheel/rail normal forces and the dynamic stress of the wheel set axle increase as the vehicle speed rises. Moreover, the impact loads induced by the polygonal wear could excite the resonance of the wheel set axle. In the resonance region, the amplitude of the dynamic stress for the wheel set axle increases considerably compared with normal conditions.

  13. Finite State Models of Manned Systems: Validation, Simplification, and Extension.

    DTIC Science & Technology

    1979-11-01

    model a time set is needed. A time set is some set T together with a binary relation defined on T which linearly orders the set. If "model time" is...discrete, so is T; continuous time is represented by a set corresponding to a subset of the non-negative real numbers. In the following discussion time...defined as sequences, over time, of input and output values. The notion of sequences or trajectories is formalized as: A^T = {x | x: T → A}, B^T = {y | y: T → B}

  14. Mathematical analysis on the cosets of subgroup in the group of E-convex sets

    NASA Astrophysics Data System (ADS)

    Abbas, Nada Mohammed; Ajeena, Ruma Kareem K.

    2018-05-01

In this work, an analysis of the cosets of the subgroup in the group of L-convex sets is presented as a new and powerful tool in the topics of convex analysis and abstract algebra. The properties of these cosets on L-convex sets are proved mathematically. The most important theorem on finite groups in the theory of L-convex sets, Lagrange's theorem, has been proved. In addition, a mathematical proof of the quotient group of L-convex sets is presented.

  15. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    PubMed

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
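One of the subgraphs mentioned above, the minimum spanning tree, can be extracted from a matrix of minimum edge-to-edge distances with any standard MST algorithm. A small sketch using Prim's algorithm (this is generic illustration, not the paper's released Python code):

```python
def minimum_spanning_tree(D):
    """Prim's algorithm on a symmetric distance matrix D (e.g. minimum
    edge-to-edge distances between secondary-phase bodies). Returns the
    MST as a list of edges (i, j, d); the MST is one of the finite body
    triangulation subgraphs used to characterize spatial homogeneity."""
    n = len(D)
    # best[j] = (cheapest known distance from the tree to j, tree endpoint)
    best = {j: (D[0][j], 0) for j in range(1, n)}
    edges = []
    while best:
        j = min(best, key=lambda k: best[k][0])  # closest body not yet in tree
        d, i = best.pop(j)
        edges.append((i, j, d))
        for k in list(best):
            if D[j][k] < best[k][0]:
                best[k] = (D[j][k], j)
    return edges
```

Summary statistics of the edge lengths (mean, variance) then serve as the kind of parsimonious homogeneity estimate the abstract describes.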

  16. A combined experimental and finite element approach to analyse the fretting mechanism of the head-stem taper junction in total hip replacement.

    PubMed

    Bitter, Thom; Khan, Imran; Marriott, Tim; Lovelady, Elaine; Verdonschot, Nico; Janssen, Dennis

    2017-09-01

Fretting corrosion at the taper interface of modular hip implants has been implicated as a possible cause of implant failure. This study was set up to gain more insight into the taper mechanics that lead to fretting corrosion. The objectives of this study therefore were (1) to select experimental loading conditions to reproduce clinically relevant fretting corrosion features observed in retrieved components, (2) to develop a finite element model consistent with the fretting experiments and (3) to apply more complicated loading conditions of activities of daily living to the finite element model to study the taper mechanics. The experiments showed similar wear patterns on the taper surface as observed in retrievals. The finite element wear score based on Archard's law did not correlate well with the amount of material loss measured in the experiments. However, similar patterns were observed between the simulated micromotions and the experimental wear measurements. Although the finite element model could not be validated, the loading conditions based on activities of daily living demonstrate the importance of assembly load on the wear potential. These findings suggest that finite element models that do not incorporate geometry updates to account for wear loss may not be appropriate to predict wear volumes of taper connections.

  17. SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES

    PubMed Central

    Wan, Xiaohai; Li, Zhilin

    2012-01-01

    Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size. PMID:22701346

  18. SOME NEW FINITE DIFFERENCE METHODS FOR HELMHOLTZ EQUATIONS ON IRREGULAR DOMAINS OR WITH INTERFACES.

    PubMed

    Wan, Xiaohai; Li, Zhilin

    2012-06-01

    Solving a Helmholtz equation Δu + λu = f efficiently is a challenge for many applications. For example, the core part of many efficient solvers for the incompressible Navier-Stokes equations is to solve one or several Helmholtz equations. In this paper, two new finite difference methods are proposed for solving Helmholtz equations on irregular domains, or with interfaces. For Helmholtz equations on irregular domains, the accuracy of the numerical solution obtained using the existing augmented immersed interface method (AIIM) may deteriorate when the magnitude of λ is large. In our new method, we use a level set function to extend the source term and the PDE to a larger domain before we apply the AIIM. For Helmholtz equations with interfaces, a new maximum principle preserving finite difference method is developed. The new method still uses the standard five-point stencil with modifications of the finite difference scheme at irregular grid points. The resulting coefficient matrix of the linear system of finite difference equations satisfies the sign property of the discrete maximum principle and can be solved efficiently using a multigrid solver. The finite difference method is also extended to handle temporal discretized equations where the solution coefficient λ is inversely proportional to the mesh size.
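The standard five-point stencil that the paper modifies at irregular grid points looks as follows in the unmodified, rectangular-domain case. A minimal dense-solver sketch on the unit square, assuming homogeneous Dirichlet boundary conditions (the AIIM and interface modifications are not shown):

```python
import numpy as np

def solve_helmholtz(n, lam, f):
    """Solve du/dxx + du/dyy + lam*u = f on the unit square with zero
    Dirichlet boundary conditions, using the standard five-point stencil
    on an n x n grid of interior points (minimal dense-solver sketch)."""
    h = 1.0 / (n + 1)
    N = n * n
    A = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = -4.0 / h**2 + lam        # center of the stencil
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:  # boundary values are zero
                    A[k, idx(ii, jj)] = 1.0 / h**2
    return np.linalg.solve(A, f.ravel()).reshape(n, n)
```

A manufactured solution u = sin(πx)sin(πy) with f = (λ − 2π²)u confirms the expected second-order accuracy; in practice one would use a sparse or multigrid solver, as the paper does.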

  19. Physical modelling of the transport of biogenic emissions in and above a finite forest area

    NASA Astrophysics Data System (ADS)

    Aubrun, S.; Leitl, B.; Schatzmann, M.

    2003-04-01

    This study is part of the project "Emission and CHemical transformation of biogenic volatile Organic compounds: investigations in and above a mixed forest stand" (ECHO), funded by the German atmospheric research program AFO 2000. The contribution of Hamburg University is a better understanding of the transport of biogenic emissions in an atmospheric boundary layer influenced by a very rough environment such as a finite forest area. The finite forest area surrounding the Research Centre of Jülich (Germany) was modelled at a scale of 1:300 and studied in the large boundary layer wind tunnel of the Meteorological Institute of Hamburg University. The model of the forest must reproduce the resistance to the wind generated by this porous environment. Using rings of metallic mesh to represent groups of trees, preliminary tests were carried out to find the arrangement of these rings that would provide the appropriate aerodynamic characteristics for a forest. The terrain upwind of the finite forest area is characteristic of farmland; the approaching flow in the wind tunnel was therefore carefully designed to match the aerodynamic properties of a neutral atmospheric boundary layer developed on a moderately rough surface (cf. VDI guideline 3783). Subsequently, dispersion measurements were carried out to reproduce the field tracer-gas experiments performed by the Research Centre of Jülich. The comparison was satisfactory and guaranteed the quality of the physical model. The constant flow conditions provided by a wind tunnel make it possible to study the influence of the averaging time on the deduced statistical results. As a consequence, the project was able to contribute directly to the quality assurance of field data, since one can qualify the reliability and representativeness of such short-term mean values (averaging times between 10 and 80 minutes).
Combined field and laboratory data also provided a data set for model validation purposes. According to the needs of the other project partners, future investigations in wind tunnel will be performed to determine the origin of the emissions received at the specified locations during field campaigns, and to assess their transit time. Some systematic measurements will then be performed to determine the vertical fluxes above the forest area, responsible for the vertical transport of the biogenic emissions.

  20. Set-theoretic estimation of hybrid system configurations.

    PubMed

    Benazera, Emmanuel; Travé-Massuyès, Louise

    2009-10-01

Hybrid systems serve as a powerful modeling paradigm for representing complex continuous controlled systems that exhibit discrete switches in their dynamics. The system and the models of the system are nondeterministic due to operation in an uncertain environment. Bayesian belief update approaches to stochastic hybrid system state estimation face a blow-up in the number of state estimates. Therefore, most popular techniques try to maintain an approximation of the true belief state by either sampling or maintaining a limited number of trajectories. These limitations can be avoided by using bounded intervals to represent the state uncertainty. This alternative leads to splitting the continuous state space into a finite set of possibly overlapping geometrical regions that together with the system modes form configurations of the hybrid system. As a consequence, the true system state can be captured by a finite number of hybrid configurations. A set of dedicated algorithms that can efficiently compute these configurations is detailed. Results are presented on two systems from the hybrid system literature.

  1. Multiscale Modeling of Polycrystalline NiTi Shape Memory Alloy under Various Plastic Deformation Conditions by Coupling Microstructure Evolution and Macroscopic Mechanical Response

    PubMed Central

Hu, Li; Jiang, Shuyong; Zhou, Tao; Tu, Jian; Shi, Laixin; Chen, Qiang; Yang, Mingbo

    2017-01-01

Numerical modeling of microstructure evolution in various regions during uniaxial compression and canning compression of NiTi shape memory alloy (SMA) is studied through combined macroscopic and microscopic finite element simulation in order to investigate plastic deformation of NiTi SMA at 400 °C. In this approach, the macroscale material behavior is modeled with a relatively coarse finite element mesh, and then the corresponding deformation history in some selected regions in this mesh is extracted by the sub-model technique of finite element code ABAQUS and subsequently used as boundary conditions for the microscale simulation by means of crystal plasticity finite element method (CPFEM). Simulation results show that NiTi SMA exhibits an inhomogeneous plastic deformation at the microscale. Moreover, regions that suffered canning compression sustain more homogeneous plastic deformation by comparison with the corresponding regions subjected to uniaxial compression. The mitigation of inhomogeneous plastic deformation contributes to reducing the statistically stored dislocation (SSD) density in polycrystalline aggregation and also to reducing the difference of stress level in various regions of deformed NiTi SMA sample, and therefore sustaining large plastic deformation in the canning compression process. PMID:29027925

  2. Multiscale Modeling of Polycrystalline NiTi Shape Memory Alloy under Various Plastic Deformation Conditions by Coupling Microstructure Evolution and Macroscopic Mechanical Response.

    PubMed

    Hu, Li; Jiang, Shuyong; Zhou, Tao; Tu, Jian; Shi, Laixin; Chen, Qiang; Yang, Mingbo

    2017-10-13

Numerical modeling of microstructure evolution in various regions during uniaxial compression and canning compression of NiTi shape memory alloy (SMA) is studied through combined macroscopic and microscopic finite element simulation in order to investigate plastic deformation of NiTi SMA at 400 °C. In this approach, the macroscale material behavior is modeled with a relatively coarse finite element mesh, and then the corresponding deformation history in some selected regions in this mesh is extracted by the sub-model technique of finite element code ABAQUS and subsequently used as boundary conditions for the microscale simulation by means of crystal plasticity finite element method (CPFEM). Simulation results show that NiTi SMA exhibits an inhomogeneous plastic deformation at the microscale. Moreover, regions that suffered canning compression sustain more homogeneous plastic deformation by comparison with the corresponding regions subjected to uniaxial compression. The mitigation of inhomogeneous plastic deformation contributes to reducing the statistically stored dislocation (SSD) density in polycrystalline aggregation and also to reducing the difference of stress level in various regions of deformed NiTi SMA sample, and therefore sustaining large plastic deformation in the canning compression process.

  3. Universal scaling for the quantum Ising chain with a classical impurity

    NASA Astrophysics Data System (ADS)

    Apollaro, Tony J. G.; Francica, Gianluca; Giuliano, Domenico; Falcone, Giovanni; Palma, G. Massimo; Plastina, Francesco

    2017-10-01

We study finite-size scaling for the magnetic observables of an impurity residing at the end point of an open quantum Ising chain with transverse magnetic field, realized by locally rescaling the field by a factor μ ≠ 1. In the homogeneous chain limit at μ = 1, we find the expected finite-size scaling for the longitudinal impurity magnetization, with no specific scaling for the transverse magnetization. In contrast, in the classical impurity limit μ = 0, we recover finite-size scaling for the longitudinal magnetization, while the transverse one basically does not scale. We provide both analytic approximate expressions for the magnetization and the susceptibility as well as numerical evidence for the scaling behavior. At intermediate values of μ, finite-size scaling is violated, and we provide a possible explanation of this result in terms of the appearance of a second, impurity-related length scale. Finally, by going along the standard quantum-to-classical mapping between statistical models, we derive the classical counterpart of the quantum Ising chain with an end-point impurity as a classical Ising model on a square lattice wrapped on a half-infinite cylinder, with the links along the first circle modified as a function of μ.

  4. Quantitative evaluation of cross correlation between two finite-length time series with applications to single-molecule FRET.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-11-06

The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the cross correlation function's variance has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.
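The quantity under study, the normalized cross-correlation of two finite-length series, together with the ~1/√N significance band it suggests under the null hypothesis of independent white series, can be sketched as follows (the function name and the simple 3σ band are our illustrative choices, not the paper's derived variance expression):

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized sample cross-correlation r_xy(tau) = <x_t y_{t+tau}> for
    tau = -max_lag..max_lag. Under the null of two independent white series
    of length N, r(tau) has standard deviation ~ 1/sqrt(N); values outside
    the returned 3/sqrt(N) band suggest genuine cross correlation."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    N = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for tau in lags:
        if tau >= 0:
            r.append(np.dot(x[:N - tau], y[tau:]) / N)
        else:
            r.append(np.dot(x[-tau:], y[:N + tau]) / N)
    return lags, np.array(r), 3.0 / np.sqrt(N)
```

The sign of the lag at which |r| peaks indicates the direction of the correlation (which series leads), the second question the abstract's method addresses.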

  5. Modified two-sources quantum statistical model and multiplicity fluctuation in the finite rapidity region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, D.; Sarkar, S.; Sen, S.

    1995-06-01

In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency", has been interpreted by simple quantum statistical properties of the emitting system using the concept of the "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data of ¹⁶O-Ag/Br and ²⁴Mg-Ag/Br interactions in the few tens of GeV energy regime.
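The scaled factorial moments whose window-size dependence is analyzed here can be computed in a few lines. A sketch, assuming bin counts collected event by event; the normalization convention below is one common choice, not necessarily the paper's:

```python
import numpy as np

def factorial_moment(counts, q):
    """Scaled factorial moment F_q = <n(n-1)...(n-q+1)> / <n>^q for an
    array of bin counts with shape (events, bins), averaged over both
    events and bins. For purely Poissonian (uncorrelated) emission
    F_q = 1; intermittency appears as a power-law rise of F_q as the
    rapidity bin width shrinks."""
    n = counts.astype(float)
    prod = np.ones_like(n)
    for k in range(q):
        prod *= n - k          # n(n-1)...(n-q+1), the falling factorial
    return prod.mean() / n.mean() ** q
```

Repeating the computation while halving the bin width traces out the F_q-versus-window-size behavior the abstract discusses.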

  6. Effects of finite hot-wire spatial resolution on turbulence statistics and velocity spectra in a round turbulent free jet

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamed; Lavoie, Philippe; Pollard, Andrew

    2018-03-01

The effect of finite hot-wire spatial resolution on turbulence statistics and velocity spectra in a round turbulent free jet is investigated. To quantify spatial resolution effects, measurements were taken using a nano-scale thermal anemometry probe (NSTAP) and compared to results from conventional hot-wires with sensing lengths of l = 0.5 and 1 mm. The NSTAP has a sensing length significantly smaller than the Kolmogorov length scale η for the present experimental conditions, whereas the sensing lengths for the conventional probes are larger than η. The spatial resolution is found to have a significant impact on the dissipation both on and off the jet centreline, with the NSTAP results exceeding those obtained from the conventional probes. The resolution effects along the jet centreline are adequately predicted using a Wyngaard-type spectral technique (Wyngaard in J Sci Instr 1(2):1105-1108, 1968), but additional attenuation of the measured turbulence quantities is observed off the centreline. The magnitude of this attenuation is a function of both the ratio of wire length to Kolmogorov length scale and the magnitude of the shear. Spatial resolution is also found to affect the computed power-law decay parameters for the turbulent kinetic energy. The effect of spatial filtering on the streamwise dissipation energy spectra is also considered. Empirical functions are proposed to estimate the effect of finite resolution, which take into account the mean shear.
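A crude one-dimensional stand-in for the spectral attenuation discussed here (Wyngaard's actual correction integrates over the three-dimensional spectrum) treats the wire as a uniform spanwise average of length l, which filters wavenumber content by a sinc² factor:

```python
import numpy as np

def wire_attenuation(k, l):
    """One-dimensional illustration of hot-wire spatial filtering (not
    Wyngaard's full 3D correction): uniform averaging of the velocity over
    a wire of length l multiplies spectral content at wavenumber k by
    [sin(k*l/2) / (k*l/2)]**2, i.e. longer wires attenuate more."""
    # np.sinc(x) = sin(pi x)/(pi x), so sin(kl/2)/(kl/2) = np.sinc(k*l/(2*pi))
    return float(np.sinc(k * l / (2 * np.pi)) ** 2)
```

The factor equals 1 at k = 0 and falls off once k approaches 1/l, which is why probes longer than the Kolmogorov scale under-read the dissipation-range spectrum.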

  7. Statistics of excitations in the electron glass model

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo

    2011-03-01

    We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.

  8. Relations between heat exchange and Rényi divergences

    NASA Astrophysics Data System (ADS)

    Wei, Bo-Bo

    2018-04-01

In this work, we establish an exact relation which connects the heat exchange between two systems initialized in their thermodynamic equilibrium states at different temperatures and the Rényi divergences between the initial thermodynamic equilibrium state and the final nonequilibrium state of the total system. The relation tells us that the various moments of the heat statistics are determined by the Rényi divergences between the initial equilibrium state and the final nonequilibrium state of the global system. In particular, the average heat exchange is quantified by the relative entropy between the initial equilibrium state and the final nonequilibrium state of the global system. The relation is applicable to both finite classical systems and finite quantum systems.

  9. Relations between heat exchange and Rényi divergences.

    PubMed

    Wei, Bo-Bo

    2018-04-01

In this work, we establish an exact relation which connects the heat exchange between two systems initialized in their thermodynamic equilibrium states at different temperatures and the Rényi divergences between the initial thermodynamic equilibrium state and the final nonequilibrium state of the total system. The relation tells us that the various moments of the heat statistics are determined by the Rényi divergences between the initial equilibrium state and the final nonequilibrium state of the global system. In particular, the average heat exchange is quantified by the relative entropy between the initial equilibrium state and the final nonequilibrium state of the global system. The relation is applicable to both finite classical systems and finite quantum systems.

  10. Condensate fluctuations of interacting Bose gases within a microcanonical ensemble.

    PubMed

    Wang, Jianhui; He, Jizhou; Ma, Yongli

    2011-05-01

    Based on counting statistics and Bogoliubov theory, we present a recurrence relation for the microcanonical partition function for a weakly interacting Bose gas with a finite number of particles in a cubic box. According to this microcanonical partition function, we calculate numerically the distribution function, condensate fraction, and condensate fluctuations for a finite and isolated Bose-Einstein condensate. For ideal and weakly interacting Bose gases, we compare the condensate fluctuations with those in the canonical ensemble. The present approach yields an accurate account of the condensate fluctuations for temperatures close to the critical region. We emphasize that the interactions between excited atoms turn out to be important for moderate temperatures.
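The abstract's recurrence is for the microcanonical partition function of an interacting gas; the closely related and well-known canonical recursion for an ideal Bose gas on a finite set of single-particle levels, Z_N = (1/N) Σ_{k=1..N} Z_1(kβ) Z_{N−k}, conveys the flavor of such finite-N counting-statistics recursions:

```python
import numpy as np

def canonical_Z(beta, energies, N):
    """Standard recursion for the canonical partition function of N ideal
    bosons on a finite list of single-particle energy levels (a well-known
    counterpart of the paper's microcanonical recurrence for the
    interacting gas):  Z_N = (1/N) * sum_{k=1..N} Z_1(k*beta) * Z_{N-k}."""
    z1 = lambda b: float(np.sum(np.exp(-b * np.asarray(energies))))
    Z = [1.0]  # Z_0 = 1 by convention
    for n in range(1, N + 1):
        Z.append(sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
    return Z[N]
```

For two levels {0, 1} and N = 2 this reproduces the direct sum over symmetric occupations, Z_2 = 1 + e^{-β} + e^{-2β}, which is a convenient check of the recursion.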

  11. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues for the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping the impact test data. Isolated unconfined impact tests for average strain rates ranging from 136 s(-1) to 262 s(-1) are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear and viscoelastic material using the three-element Zener model available in the PAMCRASH(TM) explicit finite element software. In the GA based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short term shear modulus and long term shear modulus) are thus identified at strain rates of 136 s(-1), 183 s(-1) and 262 s(-1) for modelling muscles. The optimal parameters extracted from this study are comparable with parameters reported in the literature. Bulk modulus and short term shear modulus are found to be more influential than long term shear modulus in predicting the stress-strain response for the considered strain rates. Variations within the sets of parameters identified at different strain rates indicate the need for a new or improved material model, capable of capturing the strain rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.
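    The PAM-CRASH model and the test data are of course not available here, so the sketch below is only a toy version of the GA-based inverse identification loop: a hypothetical one-parameter forward model stands in for the finite element run, synthetic "experimental" forces are generated from a known parameter, and a minimal elitist GA recovers it by minimizing the force misfit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model standing in for a finite element impact run:
# predicted force history for a candidate shear modulus G (arbitrary units).
t = np.linspace(0.0, 1.0, 50)
def forward_force(G):
    return G * np.sin(np.pi * t) ** 2

G_true = 2.5                          # parameter behind the synthetic "experiment"
f_exp = forward_force(G_true)         # synthetic measured force history

def fitness(G):
    # GA fitness: negative squared misfit between model and "experiment"
    return -np.sum((forward_force(G) - f_exp) ** 2)

# Minimal elitist genetic algorithm over a single material parameter
pop = rng.uniform(0.1, 10.0, size=60)
for _ in range(30):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-15:]]               # keep the best quarter
    children = rng.choice(elite, size=45) + rng.normal(0.0, 0.1, size=45)
    pop = np.concatenate([elite, np.clip(children, 0.1, 10.0)])

G_best = max(pop, key=fitness)
```

    In the actual study each fitness evaluation is a full explicit FE simulation, and three parameters (bulk and two shear moduli) are identified simultaneously.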

  12. The detection and stabilisation of limit cycle for deterministic finite automata

    NASA Astrophysics Data System (ADS)

    Han, Xiaoguang; Chen, Zengqiang; Liu, Zhongxin; Zhang, Qing

    2018-04-01

    In this paper, the topological structure properties of deterministic finite automata (DFA), under the framework of the semi-tensor product of matrices, are investigated. First, the dynamics of DFA are converted into a new algebraic form as a discrete-time linear system by means of Boolean algebra. Using this algebraic description, an approach for calculating limit cycles of different lengths is given. Second, we present two fundamental concepts, namely, the domain of attraction of a limit cycle and the prereachability set. Based on the prereachability set, an explicit solution for calculating the domain of attraction of a limit cycle is completely characterised. Third, we define the globally attractive limit cycle, and then give the necessary and sufficient condition for verifying whether all state trajectories of a DFA enter a given limit cycle in a finite number of transitions. Fourth, the problem of whether a DFA can be stabilised to a limit cycle by a state feedback controller is discussed. Criteria for limit cycle stabilisation are established. All state feedback controllers which implement the minimal length trajectories from each state to the limit cycle are obtained by using the proposed algorithm. Finally, an illustrative example is presented to show the theoretical results.
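    The semi-tensor product machinery is not reproduced here. As a hedged illustration of the underlying objects only: once an input (or a feedback law) is fixed, a DFA reduces to a state-transition map, and its limit cycles and their domains of attraction can be found by simple iteration, as sketched below with hypothetical transition tables.

```python
def limit_cycles_and_basins(delta):
    """delta: dict mapping state -> next state (a DFA under a fixed input).
    Returns (cycles, basin), where cycles lists each limit cycle once and
    basin[s] is the (frozen) set of cycle states that s eventually enters."""
    cycles, basin = [], {}
    for start in delta:
        seen, s = {}, start
        while s not in seen:            # walk until a state repeats
            seen[s] = len(seen)
            s = delta[s]
        cycle_states = [q for q, i in seen.items() if i >= seen[s]]
        cyc = frozenset(cycle_states)
        if cyc not in [frozenset(c) for c in cycles]:
            cycles.append(cycle_states)
        basin[start] = cyc              # domain-of-attraction membership
    return cycles, basin

# Hypothetical 4-state example: every state flows into the cycle {1, 2}
cycles, basins = limit_cycles_and_basins({0: 1, 1: 2, 2: 1, 3: 0})
```

    In the paper the same information is extracted algebraically from powers of the transition structure matrix rather than by explicit iteration.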

  13. Magnitude of finite-nucleus-size effects in relativistic density functional computations of indirect NMR nuclear spin-spin coupling constants.

    PubMed

    Autschbach, Jochen

    2009-09-14

    A spherical Gaussian nuclear charge distribution model has been implemented for spin-free (scalar) and two-component (spin-orbit) relativistic density functional calculations of indirect NMR nuclear spin-spin coupling (J-coupling) constants. The finite nuclear volume effects on the hyperfine integrals are quite pronounced and as a consequence they noticeably alter coupling constants involving heavy NMR nuclei such as W, Pt, Hg, Tl, and Pb. Typically, the isotropic J-couplings are reduced in magnitude by about 10 to 15% for couplings between one of the heaviest NMR nuclei and a light atomic ligand, and even more so for couplings between two heavy atoms. For a subset of the systems studied, viz. the Hg atom, Hg₂²⁺, and Tl-X where X = Br, I, the basis set convergence of the hyperfine integrals and the coupling constants was monitored. For the Hg atom, numerical and basis set calculations of the electron density and the 1s and 6s orbital hyperfine integrals are directly compared. The coupling anisotropies of TlBr and TlI increase by about 2% due to finite-nucleus effects.
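    A minimal sketch of why the finite-nucleus model matters, assuming the standard textbook result: for a spherical Gaussian charge density ρ(r) ∝ exp(-r²/ξ²) the electron-nucleus potential is -(Z/r)·erf(r/ξ), which is finite at the origin (tending to -2Z/(√π ξ)) instead of diverging like the point-nucleus -Z/r. The width ξ below is an arbitrary illustrative value, not a fitted nuclear radius.

```python
import math

def v_point(Z, r):
    """Point-nucleus Coulomb potential felt by an electron (atomic units)."""
    return -Z / r

def v_gaussian(Z, r, xi):
    """Potential of a spherical Gaussian charge density rho ~ exp(-r^2/xi^2):
    finite at the origin, merging into -Z/r well outside the nucleus."""
    return -(Z / r) * math.erf(r / xi)

# Near r = 0 the Gaussian model tends to the finite value -2*Z/(sqrt(pi)*xi),
# which is what softens the hyperfine integrals relative to a point nucleus.
```

    Since the J-coupling hyperfine operators weight the wavefunction near the nucleus heavily, this smoothing is what reduces the couplings by the quoted 10-15%.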

  14. Integrable subsectors from holography

    NASA Astrophysics Data System (ADS)

    de Mello Koch, Robert; Kim, Minkyoo; Van Zyl, Hendrik J. R.

    2018-05-01

    We consider operators in N=4 super Yang-Mills theory dual to closed string states propagating on a class of LLM geometries. The LLM geometries we consider are specified by a boundary condition that is a set of black rings on the LLM plane. When projected to the LLM plane, the closed strings are polygons with all corners lying on the outer edge of a single ring. The large N limit of correlators of these operators receives contributions from non-planar diagrams even for the leading large N dynamics. Our interest in these contributions stems from a previous weak coupling analysis, which argues that the net effect of summing this huge set of non-planar diagrams is a simple rescaling of the 't Hooft coupling. We carry out some nontrivial checks of this proposal. Using the su(2|2)² symmetry we determine the two-magnon S-matrix and demonstrate that it agrees, up to two loops, with a weak coupling computation performed in the CFT. We also compute the first finite size corrections to both the magnon and the dyonic magnon by constructing solutions to the Nambu-Goto action that carry finite angular momentum. These finite size computations constitute a strong coupling confirmation of the proposal.

  15. Finite Element Solution of Unsteady Mixed Convection Flow of Micropolar Fluid over a Porous Shrinking Sheet

    PubMed Central

    Gupta, Diksha; Singh, Bani

    2014-01-01

    The objective of this investigation is to analyze the effect of unsteadiness on the mixed convection boundary layer flow of micropolar fluid over a permeable shrinking sheet in the presence of viscous dissipation. At the sheet a variable distribution of suction is assumed. The unsteadiness in the flow and temperature fields is caused by the time dependence of the shrinking velocity and surface temperature. With the aid of similarity transformations, the governing partial differential equations are transformed into a set of nonlinear ordinary differential equations, which are solved numerically, using variational finite element method. The influence of important physical parameters, namely, suction parameter, unsteadiness parameter, buoyancy parameter and Eckert number on the velocity, microrotation, and temperature functions is investigated and analyzed with the help of their graphical representations. Additionally skin friction and the rate of heat transfer have also been computed. Under special conditions, an exact solution for the flow velocity is compared with the numerical results obtained by finite element method. An excellent agreement is observed for the two sets of solutions. Furthermore, to verify the convergence of numerical results, calculations are conducted with increasing number of elements. PMID:24672310

  16. Finding Groups Using Model-Based Cluster Analysis: Heterogeneous Emotional Self-Regulatory Processes and Heavy Alcohol Use Risk

    ERIC Educational Resources Information Center

    Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.

    2008-01-01

    Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
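    The ERIC abstract is truncated, but the method it describes is standard. As a hedged, self-contained sketch of model-based clustering with BIC selection, the snippet below fits one- and two-component 1D Gaussian mixtures by EM to synthetic bimodal data and picks the model with the lower BIC; real analyses use multivariate mixtures with richer covariance structures.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

def gauss_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def loglik_1(x):
    """Log-likelihood of a single fitted Gaussian (2 free parameters)."""
    return np.sum(np.log(gauss_pdf(x, x.mean(), x.std())))

def loglik_2(x, iters=100):
    """Log-likelihood of a 2-component Gaussian mixture fitted by EM
    (5 free parameters: two means, two sds, one weight)."""
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        resp = w * np.stack([gauss_pdf(x, mu[k], sd[k]) for k in range(2)], 1)
        resp /= resp.sum(1, keepdims=True)                 # E-step
        nk = resp.sum(0)                                   # M-step
        mu = (resp * x[:, None]).sum(0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(0) / nk)
        w = nk / len(x)
    dens = (w * np.stack([gauss_pdf(x, mu[k], sd[k]) for k in range(2)], 1)).sum(1)
    return np.sum(np.log(dens))

def bic(loglik, n_params, n):
    return -2 * loglik + n_params * np.log(n)

n = len(data)
bic1 = bic(loglik_1(data), 2, n)
bic2 = bic(loglik_2(data), 5, n)   # lower BIC wins: here the 2-group model
```

    Comparing BIC across candidate numbers of components is the inferential step the abstract refers to; the component memberships (responsibilities) then define the groups.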

  17. Overview Of Recent Enhancements To The Bumper-II Meteoroid and Orbital Debris Risk Assessment Tool

    NASA Technical Reports Server (NTRS)

    Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.; Prior, Thomas G.

    2006-01-01

    Discussion includes recent enhancements to the BUMPER-II program and input files in support of Shuttle Return to Flight. Improvements to the mesh definitions of the finite element input model will be presented. A BUMPER-II analysis process that was used to estimate statistical uncertainty is introduced.

  18. Determination of the Rotational Barrier in Ethane by Vibrational Spectroscopy and Statistical Thermodynamics

    ERIC Educational Resources Information Center

    Ercolani, Gianfranco

    2005-01-01

    The finite-difference boundary-value method is a numerical method suited for the solution of the one-dimensional Schrodinger equation encountered in problems of hindered rotation. The application of the method, in combination with experimental results, to the evaluation of the rotational energy barrier in ethane is presented.
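    A hedged sketch of the finite-difference approach: assuming the usual threefold torsional potential V(φ) = (V0/2)(1 - cos 3φ) for ethane-like hindered rotation, the 1D Schrödinger equation -B ψ'' + V ψ = E ψ on a periodic grid becomes a symmetric matrix eigenvalue problem. Units and the specific potential form here are illustrative, not taken from the article.

```python
import numpy as np

def torsional_levels(V0, B=1.0, n=400, nlev=5):
    """Finite-difference eigenvalues of the hindered-rotor equation
    -B * psi'' + (V0/2)*(1 - cos(3*phi)) * psi = E * psi
    on [0, 2*pi) with periodic boundary conditions (B, V0 in the same units)."""
    h = 2 * np.pi / n
    phi = np.arange(n) * h
    H = np.zeros((n, n))
    np.fill_diagonal(H, 2 * B / h**2 + 0.5 * V0 * (1 - np.cos(3 * phi)))
    idx = np.arange(n)
    H[idx, (idx + 1) % n] = -B / h**2      # periodic off-diagonals
    H[idx, (idx - 1) % n] = -B / h**2
    return np.linalg.eigvalsh(H)[:nlev]
```

    A built-in check: with V0 = 0 the free-rotor spectrum B·m² (0, 1, 1, 4, 4, … for B = 1) should be recovered; fitting computed torsional transitions to observed vibrational frequencies is what pins down the barrier V0.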

  19. Statistical mechanics of a single particle in a multiscale random potential: Parisi landscapes in finite-dimensional Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Bouchaud, Jean-Philippe

    2008-08-01

    We construct an N-dimensional Gaussian landscape with multiscale, translation invariant, logarithmic correlations and investigate the statistical mechanics of a single particle in this environment. In the limit of high dimension N → ∞ the free energy of the system and overlap function are calculated exactly using the replica trick and Parisi's hierarchical ansatz. In the thermodynamic limit, we recover the most general version of Derrida's generalized random energy model (GREM). The low-temperature behaviour depends essentially on the spectrum of length scales involved in the construction of the landscape. If the latter consists of K discrete values, the system is characterized by a K-step replica symmetry breaking solution. We argue that our construction is in fact valid in any finite spatial dimension N ≥ 1. We discuss the implications of our results for the singularity spectrum describing multifractality of the associated Boltzmann-Gibbs measure. Finally we discuss several generalizations and open problems, such as the dynamics in such a landscape and the construction of a generalized multifractal random walk.

  20. Codifference as a practical tool to measure interdependence

    NASA Astrophysics Data System (ADS)

    Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.

    2015-03-01

    Correlation and spectral analysis are the standard tools for studying interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions, for which the variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss codifference as a convenient measure of statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes the codifference is equivalent to the covariance. For processes with finite variance these two measures behave similarly with time. For processes with infinite variance the covariance does not exist; the codifference, however, remains well defined. We demonstrate the practical importance of the codifference by extracting this function from simulated data as well as real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool for studying interdependence for stochastic processes with both finite and infinite variance.
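    Under one common sign convention, the codifference is τ(X, Y) = ln E[e^{i(X-Y)}] - ln E[e^{iX}] - ln E[e^{-iY}], built entirely from characteristic functions and therefore defined even when variances diverge. The sketch below estimates it from paired samples and checks the Gaussian property stated in the abstract, that τ reduces to the covariance.

```python
import numpy as np

def codifference(x, y):
    """Empirical codifference
    tau(X, Y) = ln E[e^{i(X-Y)}] - ln E[e^{iX}] - ln E[e^{-iY}],
    estimated with sample characteristic functions; for jointly Gaussian
    variables it reduces to Cov(X, Y)."""
    cf = lambda z: np.mean(np.exp(1j * z))
    return float(np.real(np.log(cf(x - y)) - np.log(cf(x)) - np.log(cf(-y))))

rng = np.random.default_rng(2)
n, rho = 400_000, 0.6
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
tau = codifference(x, y)   # for this Gaussian pair, close to Cov(x, y) = 0.6
```

    For a stationary process one evaluates τ between X(t) and X(0) as a function of the lag t, which is the time-dependent measure extracted from the plasma and market data in the paper.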

  1. Space-time-modulated stochastic processes

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.

  2. Mutual interference between statistical summary perception and statistical learning.

    PubMed

    Zhao, Jiaying; Ngo, Nhi; McKendrick, Ryan; Turk-Browne, Nicholas B

    2011-09-01

    The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.

  3. Counting statistics for genetic switches based on effective interaction approximation

    NASA Astrophysics Data System (ADS)

    Ohkubo, Jun

    2012-09-01

    The applicability of counting statistics to a system with an infinite number of states is investigated. Counting statistics has been studied extensively for systems with a finite number of states. While the scheme can in principle be used to count specific transitions in a system with an infinite number of states, it generally leads to non-closed equations. A simple genetic switch can be described by a master equation with an infinite number of states, and we use counting statistics to count the number of transitions from the inactive to the active state of the gene. To avoid the non-closed equations, an effective interaction approximation is employed. As a result, it is shown that the switching problem can be treated approximately as a simple two-state model, which immediately indicates that the switching obeys non-Poisson statistics.
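    A hedged numerical illustration of the final claim (not the paper's effective interaction approximation itself): for a two-state switch with exponential waiting times, the counted activation events form a renewal process whose Fano factor differs from the Poisson value of 1. The rates below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def count_activations(k_on, k_off, T):
    """Simulate a two-state (inactive/active) switch with exponential waiting
    times and count the inactive -> active transitions within time T."""
    t, state, count = 0.0, 0, 0         # state 0 = inactive, 1 = active
    while True:
        rate = k_on if state == 0 else k_off
        t += rng.exponential(1.0 / rate)
        if t > T:
            return count
        if state == 0:
            count += 1                   # one activation event counted
        state = 1 - state

counts = np.array([count_activations(1.0, 1.0, 100.0) for _ in range(1000)])
fano = counts.var() / counts.mean()      # a Poisson process would give 1
```

    For equal rates the asymptotic Fano factor is the squared coefficient of variation of one on/off cycle, here 0.5, i.e. visibly sub-Poissonian.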

  4. Three-dimensional finite element analysis and comparison of a new intramedullary fixation with interlocking intramedullary nail.

    PubMed

    Liu, Chang-cheng; Xing, Wen-zhao; Zhang, Ya-xing; Pan, Zheng-hua; Feng, Wen-ling

    2015-03-01

    This study set out to introduce a new intramedullary fixation, explore its biomechanical properties, and provide guidance for further biomechanical experiments. With the help of CT scans and finite element modeling software, finite element models were established for the new intramedullary fixation and for interlocking intramedullary nailing of femoral shaft fractures in an adult volunteer. Using the finite element analysis software ANSYS 10.0, we applied axial loads of 235-2,100 N, bending loads of 200-1,000 N, and torsional loads of 2-15 Nm, and analyzed the distribution and magnitude of the maximum stress and the displacement of the fracture fragments for the femur and the intramedullary nail. During the loading process, the maximum stress of our new intramedullary fixation remained within the normal range, and the displacement of the fracture fragments was less than 1 mm. Our new intramedullary fixation exhibited mechanical reliability and a unique anti-rotation advantage, which provides effective support during fracture recovery.

  5. Bolt clampup relaxation in a graphite/epoxy laminate

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Crews, J. H., Jr.

    1982-01-01

    A simple bolted joint was analyzed to calculate bolt clampup relaxation for a graphite/epoxy (T300/5208) laminate. A viscoelastic finite element analysis of a double-lap joint with a steel bolt was conducted. Clampup forces were calculated for various steady-state temperature-moisture conditions using a 20-year exposure duration. The finite element analysis predicted that clampup forces relax even for the room-temperature-dry condition. The relaxations were 8, 13, 20, and 30 percent for exposure durations of 1 day, 1 month, 1 year, and 20 years, respectively. As expected, higher temperatures and moisture levels each increased the relaxation rate. The combined viscoelastic effects of steady-state temperature and moisture appeared to be additive. From the finite element analysis, a simple equation was developed for clampup force relaxation. This generalized equation was used to calculate clampup forces for the same temperature-moisture conditions as in the finite element analysis. The two sets of calculated results agreed well.
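    The report's generalized relaxation equation is not reproduced in the abstract. As a hedged sketch of the mechanism only: under constant clampup strain, the bolt force relaxes in proportion to the laminate's relaxation modulus. Below, a one-term Prony series with wholly hypothetical constants (g_inf, tau) stands in for the viscoelastic model.

```python
import numpy as np

def clampup_force(t, F0, g_inf=0.6, tau=5.0):
    """Bolt clampup force under a one-term Prony-series relaxation modulus
    G(t)/G0 = g_inf + (1 - g_inf) * exp(-t / tau)  (g_inf, tau hypothetical).
    Constant clampup strain means the force relaxes in proportion to G(t)."""
    return F0 * (g_inf + (1.0 - g_inf) * np.exp(-np.asarray(t, float) / tau))

t = np.array([0.0, 1.0, 30.0, 365.0])   # exposure times, e.g. in days
F = clampup_force(t, F0=10.0)
relaxation_pct = 100 * (1 - F / 10.0)   # percent loss of clampup force
```

    A single exponential cannot reproduce the report's nearly log-linear 8/13/20/30 percent sequence over 1 day to 20 years; the actual model uses temperature- and moisture-dependent viscoelastic properties.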

  6. A mimetic finite difference method for the Stokes problem with elected edge bubbles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnikov, K; Berirao, L

    2009-01-01

    A new mimetic finite difference method for the Stokes problem is proposed and analyzed. The unstable P1-P0 discretization is stabilized by adding a small number of bubble functions to selected mesh edges. A simple strategy for selecting such edges is proposed and verified with numerical experiments. Discretization schemes for the Stokes and Navier-Stokes equations must satisfy the celebrated inf-sup (or LBB) stability condition, which implies a balance between the discrete spaces for velocity and pressure. In finite elements, this balance is frequently achieved by adding bubble functions to the velocity space. The goal of this article is to show that the stabilizing edge bubble functions can be added to only a small set of mesh edges. This results in a smaller algebraic system and potentially in faster calculations. We employ the mimetic finite difference (MFD) discretization technique, which works for general polyhedral meshes and can accommodate a non-uniform distribution of stabilizing bubbles.

  7. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    PubMed

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to be convergent in finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations are considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) are conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
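    A hedged scalar illustration of the finite-time mechanism (not the paper's matrix-valued networks): the error dynamics de/dt = -γ φ(e) decay only exponentially for the linear activation φ(e) = e, but reach zero exactly at t* = |e(0)|^{1-ρ}/(γ(1-ρ)) for a sign-power activation φ(e) = sign(e)|e|^ρ with 0 < ρ < 1. Here γ = 1, ρ = 0.5, so t* = 2.

```python
import numpy as np

def integrate(phi, e0=1.0, gamma=1.0, dt=1e-3, T=3.0):
    """Euler-integrate the error dynamics de/dt = -gamma * phi(e); the step
    is clamped so the error settles at exactly zero instead of chattering."""
    e, traj = e0, []
    for _ in range(int(T / dt)):
        step = gamma * phi(e) * dt
        e = 0.0 if abs(step) >= abs(e) and np.sign(step) == np.sign(e) else e - step
        traj.append(abs(e))
    return np.array(traj)

linear = integrate(lambda e: e)                              # exponential decay
signpow = integrate(lambda e: np.sign(e) * np.sqrt(abs(e)))  # finite-time decay
```

    By t = 2.2 the sign-power trajectory is identically zero while the linear one still carries roughly e^{-2.2} of the initial error, which is the qualitative advantage the paper's upper-bound analysis formalizes.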

  8. Unified control/structure design and modeling research

    NASA Technical Reports Server (NTRS)

    Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.

    1986-01-01

    To demonstrate the applicability of the control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach synthesized from elements of these two basic approaches is currently under development. The control driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite dimensional Hamiltonian system directly, instead of indirectly, is discussed.

  9. Finite plate thickness effects on the Rayleigh-Taylor instability in elastic-plastic materials

    NASA Astrophysics Data System (ADS)

    Polavarapu, Rinosh; Banerjee, Arindam

    2017-11-01

    The majority of theoretical studies have tackled the Rayleigh-Taylor instability (RTI) problem in solids using an infinitely thick plate. Recent theoretical studies by Piriz et al. (PRE 95, 053108, 2017) have explored finite thickness effects. We seek to validate this recent theoretical estimate experimentally using our rotating wheel RTI experiment in an accelerated elastic-plastic material. The test section consists of a container filled with air and mayonnaise (a non-Newtonian emulsion) with an initial perturbation between the two materials. The plate thickness effects are studied by varying the depth of the soft solid. A set of experiments is run employing different initial conditions and different container dimensions. Additionally, the effect of acceleration rate (driving pressure rise time) on the instability threshold with reference to the finite thickness is also examined. Furthermore, the experimental results are compared to the analytical strength models related to finite thickness effects on RTI. The authors acknowledge financial support from DOE-SSAA Grant # DE-NA0003195 and LANL subcontract #370333.

  10. Numerical analysis for finite-range multitype stochastic contact financial market dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ge; Wang, Jun; Fang, Wen, E-mail: fangwen@bjtu.edu.cn

    In an attempt to reproduce and study the dynamics of financial markets, a random agent-based financial price model is developed and investigated using a finite-range multitype contact dynamic system, in which the interaction and dispersal of different types of investment attitudes in a stock market are imitated by the spreading of viruses. With different parameters of birth rates and finite range, the normalized return series are simulated by the Monte Carlo method and studied numerically by power-law distribution analysis and autocorrelation analysis. To better understand the nonlinear dynamics of the return series, a q-order autocorrelation function and a multi-autocorrelation function are also defined in this work. Comparisons of the statistical behaviors of return series from the agent-based model with the daily historical market returns of the Shanghai Composite Index and the Shenzhen Component Index indicate that the proposed model gives a reasonable qualitative explanation of the price formation process in stock market systems.
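    The paper's specific multitype dynamics are not given in the abstract. As a loose, hedged sketch of the general idea only: a nearest-neighbour contact process on a ring, where active ("infected") investor sites recover and infect neighbours, and the change in active density is mapped to a toy return series. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def contact_process_returns(n=200, steps=300, lam=2.0, dt=0.1):
    """Toy finite-range (nearest-neighbour) contact process on a ring: an
    active site infects a random neighbour with probability lam*dt and
    recovers with probability dt per step; the change in the log of the
    active density serves as a stand-in for the normalized return series."""
    state = (rng.random(n) < 0.5).astype(int)
    density = []
    for _ in range(steps):
        for i in np.flatnonzero(state):
            if rng.random() < lam * dt:                     # infection
                state[(i + rng.choice([-1, 1])) % n] = 1
            if rng.random() < dt:                           # recovery
                state[i] = 0
        density.append(state.mean())
    return np.diff(np.log(np.array(density) + 1e-9))

returns = contact_process_returns()
```

    In the paper, multiple attitude types and tuned birth rates shape the return distribution, and power-law and q-order autocorrelation analyses are then applied to series like this one.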

  11. A global logrank test for adaptive treatment strategies based on observational studies.

    PubMed

    Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara

    2014-02-28

    In studying adaptive treatment strategies, a natural question of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not affect the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. The simulation also shows that the test is fairly robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.
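    The paper's inverse probability weighted version is not reproduced here. As a hedged sketch of the building block it extends, the snippet below implements the standard two-sample logrank statistic; the IPW variant would additionally multiply each subject's contribution by an estimated inverse selection-probability weight.

```python
import numpy as np

def logrank(time, event, group):
    """Two-sample logrank test (the unweighted building block; an IPW version
    would weight each subject's contribution by inverse selection
    probabilities). event: 1 = event, 0 = censored; group: 0 or 1.
    Returns the chi-square statistic with 1 degree of freedom."""
    time, event, group = map(np.asarray, (time, event, group))
    U, V = 0.0, 0.0
    for t in np.unique(time[event == 1]):        # distinct event times
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        U += d1 - d * n1 / n                     # observed minus expected
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return U**2 / V
```

    Under the null, the statistic is approximately chi-square with 1 degree of freedom, so values above 3.84 are significant at the 5% level.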

  12. The Statistical Mechanics of Ideal Homogeneous Turbulence

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.

    2002-01-01

    Plasmas, such as those found in the space environment or in plasma confinement devices, are often modeled as electrically conducting fluids. When fluids and plasmas are energetically stirred, regions of highly nonlinear, chaotic behavior known as turbulence arise. Understanding the fundamental nature of turbulence is a long-standing theoretical challenge. The present work describes a statistical theory concerning a certain class of nonlinear, finite dimensional, dynamical models of turbulence. These models arise when the partial differential equations describing incompressible, ideal (i.e., nondissipative) homogeneous fluid and magnetofluid (i.e., plasma) turbulence are Fourier transformed into a very large set of ordinary differential equations. These equations define a divergenceless flow in a high-dimensional phase space, which allows for the existence of a Liouville theorem, guaranteeing a distribution function based on constants of the motion (integral invariants). The novelty of these particular dynamical systems is that there are integral invariants other than the energy, and that some of these invariants behave like pseudoscalars under two of the discrete symmetry transformations of physics, parity, and charge conjugation. In this work the 'rugged invariants' of ideal homogeneous turbulence are shown to be the only significant scalar and pseudoscalar invariants. The discovery that pseudoscalar invariants cause symmetries of the original equations to be dynamically broken and induce a nonergodic structure on the associated phase space is the primary result presented here. Applicability of this result to dissipative turbulence is also discussed.

  13. Finite Element Analysis of Patella Alta: A Patellofemoral Instability Model.

    PubMed

    Watson, Nicole A; Duchman, Kyle R; Grosland, Nicole M; Bollier, Matthew J

    2017-01-01

    This study aims to provide biomechanical data on the effect of patella height in the setting of medial patellofemoral ligament (MPFL) reconstruction using finite element analysis. The study will also examine patellofemoral joint biomechanics using variable femoral insertion sites for MPFL reconstruction. A previously validated finite element knee model was modified to study patella alta and baja by translating the patella a given distance to achieve each patella height ratio. Additionally, the models were modified to study various femoral insertion sites of the MPFL (anatomic, anterior, proximal, and distal) for each patella height model, resulting in 32 unique scenarios available for investigation. In the setting of patella alta, the patellofemoral contact area decreased, resulting in a subsequent increase in maximum patellofemoral contact pressures as compared to the scenarios with normal patellar height. Additionally, patella alta resulted in decreased lateral restraining forces in the native knee scenario as well as following MPFL reconstruction. Changing femoral insertion sites had a variable effect on patellofemoral contact pressures; however, distal and anterior femoral tunnel malpositioning in the setting of patella alta resulted in grossly elevated maximum patellofemoral contact pressures as compared to other scenarios. Patella alta after MPFL reconstruction results in decreased lateral restraining forces and patellofemoral contact area and increased maximum patellofemoral contact pressures. When the femoral MPFL tunnel is malpositioned anteriorly or distally on the femur, the maximum patellofemoral contact pressures increase with severity of patella alta. When evaluating patients with patellofemoral instability, it is important to recognize patella alta as a potential aggravating factor. Failure to address patella alta in the setting of MPFL femoral tunnel malposition may result in even further increases in patellofemoral contact pressures, making it essential to optimize intraoperative techniques to confirm anatomic MPFL femoral tunnel positioning.

  14. Finite Element Analysis of Patella Alta: A Patellofemoral Instability Model

    PubMed Central

    Duchman, Kyle R.; Grosland, Nicole M.; Bollier, Matthew J.

    2017-01-01

    Abstract Background: This study aims to provide biomechanical data on the effect of patella height in the setting of medial patellofemoral ligament (MPFL) reconstruction using finite element analysis. The study will also examine patellofemoral joint biomechanics using variable femoral insertion sites for MPFL reconstruction. Methods: A previously validated finite element knee model was modified to study patella alta and baja by translating the patella a given distance to achieve each patella height ratio. Additionally, the models were modified to study various femoral insertion sites of the MPFL (anatomic, anterior, proximal, and distal) for each patella height model, resulting in 32 unique scenarios available for investigation. Results: In the setting of patella alta, the patellofemoral contact area decreased, resulting in a subsequent increase in maximum patellofemoral contact pressures as compared to the scenarios with normal patellar height. Additionally, patella alta resulted in decreased lateral restraining forces in the native knee scenario as well as following MPFL reconstruction. Changing femoral insertion sites had a variable effect on patellofemoral contact pressures; however, distal and anterior femoral tunnel malpositioning in the setting of patella alta resulted in grossly elevated maximum patellofemoral contact pressures as compared to other scenarios. Conclusions: Patella alta after MPFL reconstruction results in decreased lateral restraining forces and patellofemoral contact area and increased maximum patellofemoral contact pressures. When the femoral MPFL tunnel is malpositioned anteriorly or distally on the femur, the maximum patellofemoral contact pressures increase with severity of patella alta. Clinical Relevance: When evaluating patients with patellofemoral instability, it is important to recognize patella alta as a potential aggravating factor. Failure to address patella alta in the setting of MPFL femoral tunnel malposition may result in even further increases in patellofemoral contact pressures, making it essential to optimize intraoperative techniques to confirm anatomic MPFL femoral tunnel positioning. PMID:28852343

  15. Surface sampling techniques for 3D object inspection

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong S.; Gerhardt, Lester A.

    1995-03-01

While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
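The recursive subdivision idea behind the adaptive sampling strategy can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a surface given as a height function f(x, y), subdivides a triangle patch into four sub-triangles wherever the surface deviates from flat interpolation, and thereby concentrates sample points near edges and ridges.

```python
import numpy as np

def adaptive_triangle_samples(f, tri, tol=1e-3, max_depth=6):
    """Recursively subdivide a triangle patch, sampling more densely
    where the surface f(x, y) deviates from linear interpolation
    (i.e., near edges, corners, and high-curvature regions)."""
    samples = []

    def recurse(a, b, c, depth):
        # Midpoints of the three edges and the centroid
        mab, mbc, mca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        cen = (a + b + c) / 3
        # Deviation of the true surface from flat (linear) interpolation
        dev = max(
            abs(f(*mab) - (f(*a) + f(*b)) / 2),
            abs(f(*mbc) - (f(*b) + f(*c)) / 2),
            abs(f(*mca) - (f(*c) + f(*a)) / 2),
            abs(f(*cen) - (f(*a) + f(*b) + f(*c)) / 3),
        )
        if dev < tol or depth >= max_depth:
            samples.extend([a, b, c])  # patch is flat enough: keep its corners
            return
        # Split into four sub-triangles and refine each
        recurse(a, mab, mca, depth + 1)
        recurse(mab, b, mbc, depth + 1)
        recurse(mca, mbc, c, depth + 1)
        recurse(mab, mbc, mca, depth + 1)

    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    recurse(a, b, c, 0)
    return np.unique(np.array(samples), axis=0)

# A surface with a sharp ridge along x == y: samples should cluster there
ridge = lambda x, y: abs(x - y)
pts = adaptive_triangle_samples(ridge, [(0, 0), (1, 0), (0, 1)], tol=0.05)
```

Running this on the ridge surface produces a point set that is sparse on the two flat halves and dense along the crease, mirroring the paper's observation that adaptive sampling concentrates points around edges and vertices.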

  16. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Maolin; Du, R.

    2005-08-01

Sheet metal stamping is one of the most commonly used manufacturing processes, and hence, much research has been carried out for economic gain. Searching through the literature, however, one finds that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. Presently, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.

  17. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE, and this error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
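The classical (non-Bayesian) baseline that this paper builds on can be sketched directly: an empirical KLE of Brownian-motion sample paths via SVD of the centered data matrix, truncated to a few modes to form a low-dimensional surrogate. This is a minimal illustration of the standard procedure, not the authors' Bayesian method; the sample and grid sizes are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_grid = 50, 200          # deliberately few samples vs. field dimension
t = np.linspace(0, 1, n_grid)

# Sample paths of (discretized) Brownian motion
paths = np.cumsum(rng.standard_normal((n_samples, n_grid)), axis=1) / np.sqrt(n_grid)

# Empirical KLE: SVD of the centered data matrix gives the principal
# components (discretized KL basis functions) and their modal energies
mean = paths.mean(axis=0)
U, s, Vt = np.linalg.svd(paths - mean, full_matrices=False)
energy = s**2 / np.sum(s**2)         # fraction of variance captured per mode

# Low-dimensional surrogate: truncate to k modes, project, reconstruct
k = 5
coeffs = (paths - mean) @ Vt[:k].T   # KL coefficients of each sample path
surrogate = mean + coeffs @ Vt[:k]   # rank-k approximation of the field
```

With only 50 sample paths of a 200-dimensional field, the estimated basis `Vt` carries substantial sampling error; quantifying that error, rather than treating `Vt` as exact, is precisely the gap the paper's Bayesian KLE addresses.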

  18. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE, and this error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model of the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.

  19. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello by establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of the input states. However, any optimal estimator with a finite POVM for M (> N) copies is universal if it is used for N copies as input.
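For context, the optimal mean fidelity discussed in this record has a well-known closed form (this is the standard result for estimating a d-dimensional pure state from N identical copies, stated here for orientation rather than taken from the abstract):

```latex
\bar{F}_{\mathrm{opt}} = \frac{N+1}{N+d}
```

For example, a single copy (N = 1) of a qubit (d = 2) gives a mean fidelity of 2/3, and the fidelity approaches 1 as the number of copies N grows.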

  20. Computing dispersion curves of elastic/viscoelastic transversely-isotropic bone plates coupled with soft tissue and marrow using semi-analytical finite element (SAFE) method.

    PubMed

    Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H

    2017-08-01

    We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom. Copyright © 2017 Elsevier Ltd. All rights reserved.
