Sample records for multivariate Bernoulli distribution

  1. A randomised approach for NARX model identification based on a multivariate Bernoulli distribution

    NASA Astrophysics Data System (ADS)

    Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.

    2017-04-01

    The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated with the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned by the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated for by the achievable improvements in terms of accuracy of the model selection process.
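
    The conditional sampling step described above can be illustrated with a small sketch (not the authors' implementation): candidate regressors are drawn one at a time, each inclusion probability shifted by the terms already selected through a coupling matrix. The logistic form of the conditioning, the coupling values, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_structure(p_marginal, coupling, rng):
    """Draw one candidate model structure (binary regressor-inclusion vector).

    Regressors are sampled in sequence; each inclusion probability is shifted
    by the terms already selected -- a simple conditional scheme standing in
    for the paper's multivariate Bernoulli parameterization.
    """
    n = len(p_marginal)
    x = np.zeros(n, dtype=int)
    for j in range(n):
        logit = np.log(p_marginal[j] / (1.0 - p_marginal[j]))
        logit += coupling[j, :j] @ x[:j]          # condition on earlier draws
        p_j = 1.0 / (1.0 + np.exp(-logit))
        x[j] = int(rng.random() < p_j)
    return x

# Illustrative usage: 5 candidate regressors with mild positive coupling
p = np.full(5, 0.5)
C = 0.3 * np.tril(np.ones((5, 5)), -1)
samples = np.array([sample_structure(p, C, rng) for _ in range(1000)])
print("empirical inclusion frequencies:", samples.mean(axis=0))
```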

  2. Augmented l1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm. Revision 1

    DTIC Science & Technology

    2012-10-17

    nonzero and sampled from the standard Gaussian distribution (for Figure 2) or the Bernoulli distribution (for Figure 3). Both tests had the same sensing...dual variable y(k) Figure 3: Convergence of primal and dual variables of three algorithms on Bernoulli sparse x0 was the slowest. Besides the obvious...slower convergence than the final stage. Comparing the results of two tests, the convergence was faster on the Bernoulli sparse signal than the

  3. Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.

    PubMed

    Imamverdiyev, Yadigar; Abdullayeva, Fargana

    2018-06-01

    In this article, the application of the deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of the RBM that allows application to continuous data is used. In this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method with Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, and deep belief network type deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy is obtained from the proposed multilayer deep Gaussian-Bernoulli type RBM.

  4. Dynamic probability control limits for risk-adjusted Bernoulli CUSUM charts.

    PubMed

    Zhang, Xiang; Woodall, William H

    2015-11-10

    The risk-adjusted Bernoulli cumulative sum (CUSUM) chart developed by Steiner et al. (2000) is an increasingly popular tool for monitoring clinical and surgical performance. In practice, however, the use of a fixed control limit for the chart leads to a quite variable in-control average run length performance for patient populations with different risk score distributions. To overcome this problem, we determine simulation-based dynamic probability control limits (DPCLs) patient-by-patient for the risk-adjusted Bernoulli CUSUM charts. By maintaining the probability of a false alarm at a constant level conditional on no false alarm for previous observations, our risk-adjusted CUSUM charts with DPCLs have consistent in-control performance at the desired level with approximately geometrically distributed run lengths. Our simulation results demonstrate that our method does not rely on any information or assumptions about the patients' risk distributions. The use of DPCLs for risk-adjusted Bernoulli CUSUM charts allows each chart to be designed for the corresponding particular sequence of patients for a surgeon or hospital. Copyright © 2015 John Wiley & Sons, Ltd.
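
    A minimal sketch of the simulation-based DPCL idea (not the authors' code): many in-control CUSUM paths are advanced patient by patient, and each patient's control limit is set at the (1 - alpha) quantile of the statistics among the paths that have not yet alarmed, which keeps the conditional false-alarm probability near alpha. The risk-adjusted weights follow the usual Steiner-type log-likelihood-ratio form; the odds ratio, alpha, and risk scores below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamic_limits(risk_probs, alpha=0.005, n_sim=20000, odds_ratio=2.0):
    """Simulation-based dynamic probability control limits (DPCLs), sketched.

    For each patient t, the limit is the (1 - alpha) quantile of the in-control
    CUSUM statistics among simulated paths with no alarm so far, keeping the
    conditional false-alarm rate near alpha.
    """
    S = np.zeros(n_sim)                       # CUSUM statistics of simulated paths
    alive = np.ones(n_sim, dtype=bool)        # paths that have not alarmed yet
    limits = np.empty(len(risk_probs))
    for t, p in enumerate(risk_probs):
        y = rng.random(n_sim) < p             # in-control Bernoulli outcomes
        # risk-adjusted log-likelihood-ratio scores (Steiner-type weights)
        w1 = np.log(odds_ratio / (1 - p + odds_ratio * p))
        w0 = np.log(1.0 / (1 - p + odds_ratio * p))
        S = np.maximum(0.0, S + np.where(y, w1, w0))
        limits[t] = np.quantile(S[alive], 1 - alpha)
        alive &= S <= limits[t]
    return limits

# Illustrative usage: 50 patients with heterogeneous risk scores
risks = rng.uniform(0.05, 0.4, size=50)
print(np.round(dynamic_limits(risks)[:5], 3))
```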

  5. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    PubMed

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.

  6. Effect of elastic boundaries in hydrostatic problems

    NASA Astrophysics Data System (ADS)

    Volobuev, A. N.; Tolstonogov, A. P.

    2010-03-01

    The possibility and conditions of use of the Bernoulli equation for description of an elastic pipeline were considered. It is shown that this equation is identical in form to the Bernoulli equation used for description of a rigid pipeline. It has been established that the static pressure entering into the Bernoulli equation is not identical to the pressure entering into the impulse-momentum equation. The hydrostatic problem on the pressure distribution over the height of a beaker with a rigid bottom and elastic walls, filled with a liquid, was solved.

  7. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  8. Local Stretching Theories

    DTIC Science & Technology

    2010-06-24

    diffusivity of the scalar. (If the scalar is heat, then the Schmidt number becomes the Prandtl number.) Momentum diffuses significantly faster than the ... derive the Cramér function explicitly in the simple case where the xi have a Bernoulli distribution, though the general formula for S may be derived by ... an analogous procedure. 5 Large deviation CLT for the Bernoulli distribution. Let xi have the PDF of a fair coin, p(xi) = (1/2)δ(xi + 1) + (1/2)δ(xi − 1).

  9. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
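
    The dead-time effect described above is easy to reproduce in a small hedged simulation (illustrative parameters only, not the letter's analytical treatment): a Bernoulli(p) pulse stream is counted by a detector that is blind for a fixed number of steps after each recorded pulse, and both the mean and the variance of the counts fall as the dead time grows.

```python
import numpy as np

rng = np.random.default_rng(6)

def dead_time_counts(p, dead_time, n_steps, n_runs=5000):
    """Count pulses from a Bernoulli(p) stream with a refractory counter that
    ignores arrivals for `dead_time` steps after each recorded pulse.
    A sketch of the effect discussed in the record above; values illustrative."""
    counts = np.zeros(n_runs, dtype=int)
    for r in range(n_runs):
        pulses = np.flatnonzero(rng.random(n_steps) < p)
        blocked_until, c = -1, 0
        for t in pulses:
            if t > blocked_until:
                c += 1
                blocked_until = t + dead_time
        counts[r] = c
    return counts

for d in (0, 2, 8):
    c = dead_time_counts(p=0.2, dead_time=d, n_steps=200)
    print(f"dead time {d}: mean {c.mean():.1f}, variance {c.var():.1f}")
```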

  10. Open quantum random walk in terms of quantum Bernoulli noise

    NASA Astrophysics Data System (ADS)

    Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling

    2018-03-01

    In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.

  11. Beyond Bernoulli

    PubMed Central

    Donati, Fabrizio; Myerson, Saul; Bissell, Malenka M.; Smith, Nicolas P.; Neubauer, Stefan; Monaghan, Mark J.; Nordsletten, David A.

    2017-01-01

    Background— Transvalvular peak pressure drops are routinely assessed noninvasively by echocardiography using the Bernoulli principle. However, the Bernoulli principle relies on several approximations that may not be appropriate, including that the majority of the pressure drop is because of the spatial acceleration of the blood flow, and the ejection jet is a single streamline (single peak velocity value). Methods and Results— We assessed the accuracy of the Bernoulli principle to estimate the peak pressure drop at the aortic valve using 3-dimensional cardiovascular magnetic resonance flow data in 32 subjects. Reference pressure drops were computed from the flow field, accounting for the principles of physics (ie, the Navier–Stokes equations). Analysis of the pressure components confirmed that the spatial acceleration of the blood jet through the valve is most significant (accounting for 99% of the total drop in stenotic subjects). However, the Bernoulli formulation demonstrated a consistent overestimation of the transvalvular pressure (average of 54%, range 5%–136%) resulting from the use of a single peak velocity value, which neglects the velocity distribution across the aortic valve plane. This assumption was a source of uncontrolled variability. Conclusions— The application of the Bernoulli formulation results in a clinically significant overestimation of peak pressure drops because of approximation of blood flow as a single streamline. A corrected formulation that accounts for the cross-sectional profile of the blood flow is proposed and adapted to both cardiovascular magnetic resonance and echocardiographic data. PMID:28093412
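
    For reference, the single-streamline estimate discussed above is the clinical simplified Bernoulli relation, delta-P ~ 4*v^2, with the pressure drop in mmHg and the peak velocity v in m/s. The sketch below contrasts it with a velocity-profile-weighted variant; the averaging used here is purely illustrative and is not the paper's corrected formulation.

```python
import numpy as np

def simplified_bernoulli(v_peak):
    """Clinical simplified Bernoulli estimate: dP [mmHg] ~= 4 * v^2 [m/s]."""
    return 4.0 * v_peak ** 2

def profile_weighted_drop(velocities):
    """Illustrative cross-section-weighted estimate (not the paper's exact
    corrected formulation): average 4*v^2 over the valve plane instead of
    using the single peak velocity."""
    v = np.asarray(velocities, dtype=float)
    return float(np.mean(4.0 * v ** 2))

# Made-up jet profile: 4 m/s at the core, slower toward the edges
profile = np.array([4.0, 3.6, 3.1, 2.4, 1.5])
print("single-streamline estimate [mmHg]:", simplified_bernoulli(profile.max()))
print("profile-weighted estimate  [mmHg]:", round(profile_weighted_drop(profile), 1))
```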

  12. A Revelation: Quantum-Statistics and Classical-Statistics are Analytic-Geometry Conic-Sections and Numbers/Functions: Euler, Riemann, Bernoulli Generating-Functions: Conics to Numbers/Functions Deep Subtle Connections

    NASA Astrophysics Data System (ADS)

    Descartes, R.; Rota, G.-C.; Euler, L.; Bernoulli, J. D.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Quantum-statistics dichotomy: Fermi-Dirac (FDQS) versus Bose-Einstein (BEQS), respectively with contact-repulsion/non-condensation (FDCR) versus attraction/condensation (BEC), are manifestly demonstrated by Taylor-expansion ONLY of their denominator exponential, identified BOTH as Descartes analytic-geometry conic-sections: FDQS as Ellipse (homotopy to rectangle FDQS distribution-function), VIA Maxwell-Boltzmann classical-statistics (MBCS) to Parabola MORPHISM, VS. BEQS to Hyperbola, Archimedes' HYPERBOLICITY INEVITABILITY, and as well generating-functions [Abramowitz-Stegun, Handbook of Mathematical Functions, p. 804], respectively of Euler-numbers/functions (via Riemann zeta-function; domination of quantum-statistics: [Pathria, Statistical Mechanics; Huang, Statistical Mechanics]) VS. Bernoulli-numbers/functions. Much can be learned about statistical physics from Euler-numbers/functions via Riemann zeta-function(s) VS. Bernoulli-numbers/functions [Conway-Guy, Book of Numbers], and about Euler-numbers/functions via Riemann zeta-function(s) MORPHISM VS. Bernoulli-numbers/functions, and vice versa. Ex.: Riemann-hypothesis PHYSICS proof PARTLY as BEQS BEC/BEA.

  13. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki, E-mail: saito@fun.ac.jp; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
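
    A hedged sketch of why exact (true-orbit) computation matters for this generator: naive floating-point iteration of the Bernoulli (doubling) map collapses after roughly 53 steps, whereas an exact orbit does not. The exact branch below uses seeds of the form p + q*sqrt(2) with rational p, q as a simple stand-in for the paper's quadratic algebraic integers; it is not the authors' implementation.

```python
import math
from fractions import Fraction

def float_orbit_bits(x0, n):
    """Naive floating-point iteration of the Bernoulli map x -> 2x mod 1.
    Roundoff collapses the orbit after roughly 53 steps, the failure that
    exact true-orbit computation is designed to avoid."""
    x, bits = x0, []
    for _ in range(n):
        bits.append(1 if x >= 0.5 else 0)
        x = (2 * x) % 1.0
    return bits

def true_orbit_bits(p, q, n):
    """Exact Bernoulli-map orbit for a seed x = p + q*sqrt(2) with rational p,
    positive rational q and 0 <= x < 1 -- a simple stand-in for the paper's
    quadratic algebraic integers.  Comparisons use squared rationals only,
    so no floating point ever touches the orbit."""
    def at_least(t):                    # does p + q*sqrt(2) >= t hold?
        rhs = t - p
        return rhs <= 0 or 2 * q * q >= rhs * rhs
    bits = []
    for _ in range(n):
        bits.append(1 if at_least(Fraction(1, 2)) else 0)
        p, q = 2 * p, 2 * q             # x -> 2x
        if at_least(1):                 # ... mod 1
            p -= 1
    return bits

print(float_orbit_bits(math.sqrt(2) - 1, 60))            # tail degenerates to zeros
print(true_orbit_bits(Fraction(-1), Fraction(1), 60))    # stays non-degenerate
```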

  14. The stochastic model for ternary and quaternary alloys: Application of the Bernoulli relation to the phonon spectra of mixed crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchewka, M., E-mail: marmi@ur.edu.pl; Woźny, M.; Polit, J.

    2014-03-21

    To understand and interpret the experimental data on the phonon spectra of solid solutions, it is necessary to describe mathematically the non-regular distribution of atoms in their lattices. It appears that such a description is possible in the case of a strongly stochastically homogeneous distribution, which requires a great number of atoms and very carefully mixed alloys. These conditions are generally fulfilled in the case of high-quality homogeneous semiconductor solid solutions of the III–V and II–VI semiconductor compounds. In this case, we can use the Bernoulli relation describing the probability of the occurrence of one of n equivalent events, which can be applied to the probability of finding one of n configurations in the solid solution lattice. The results described in this paper for ternary HgCdTe and GaAsP as well as quaternary ZnCdHgTe can provide an affirmative answer to the question of whether stochastic geometry, e.g., the Bernoulli relation, is enough to describe the observed phonon spectra.

  15. The stochastic model for ternary and quaternary alloys: Application of the Bernoulli relation to the phonon spectra of mixed crystals

    NASA Astrophysics Data System (ADS)

    Marchewka, M.; Woźny, M.; Polit, J.; Kisiel, A.; Robouch, B. V.; Marcelli, A.; Sheregii, E. M.

    2014-03-01

    To understand and interpret the experimental data on the phonon spectra of solid solutions, it is necessary to describe mathematically the non-regular distribution of atoms in their lattices. It appears that such a description is possible in the case of a strongly stochastically homogeneous distribution, which requires a great number of atoms and very carefully mixed alloys. These conditions are generally fulfilled in the case of high-quality homogeneous semiconductor solid solutions of the III-V and II-VI semiconductor compounds. In this case, we can use the Bernoulli relation describing the probability of the occurrence of one of n equivalent events, which can be applied to the probability of finding one of n configurations in the solid solution lattice. The results described in this paper for ternary HgCdTe and GaAsP as well as quaternary ZnCdHgTe can provide an affirmative answer to the question of whether stochastic geometry, e.g., the Bernoulli relation, is enough to describe the observed phonon spectra.

  16. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  17. A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.

    PubMed

    Read, S; Bath, P A; Willett, P; Maheswaran, R

    2013-08-30

    The Bernoulli version of the spatial scan statistic is a well established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
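
    The Gumbel device described above is, in essence, a tail fit to Monte Carlo replications of the maximum log-likelihood ratio. The sketch below shows only that primitive; the null replications are placeholders rather than a real Bernoulli scan, and the observed value is made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def gumbel_p_value(observed_stat, null_stats):
    """Gumbel-based p-value for a scan statistic, as sketched from the abstract:
    fit a Gumbel distribution to Monte Carlo replications of the maximum
    log-likelihood ratio under the null, then read the p-value from its upper
    tail instead of the raw Monte Carlo rank."""
    loc, scale = stats.gumbel_r.fit(null_stats)
    return stats.gumbel_r.sf(observed_stat, loc=loc, scale=scale)

# Placeholder null distribution: maxima of many local likelihood-ratio scores
null_stats = np.max(rng.chisquare(df=1, size=(999, 200)) / 2.0, axis=1)
observed = 8.5
rank_p = (1 + np.sum(null_stats >= observed)) / (1 + len(null_stats))
print("Monte Carlo rank p-value:   ", rank_p)
print("Gumbel-approximated p-value:", gumbel_p_value(observed, null_stats))
```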

  18. Self-affirmation model for football goal distributions

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2007-06-01

    Analyzing football score data with statistical techniques, we investigate how the highly co-operative nature of the game is reflected in averaged properties such as the distributions of goals scored by the home and away teams. It turns out that, in particular, the tails of the distributions are not well described by independent Bernoulli trials, but are rather well modeled by negative binomial or generalized extreme value distributions. To understand this behavior from first principles, we suggest modifying the Bernoulli random process to include a simple component of self-affirmation, which seems to describe the data surprisingly well and allows us to interpret the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments and found the proposed models to be applicable rather universally. In particular, here we compare men's and women's leagues and the separate German leagues during the cold war times and find some remarkable differences.
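
    A toy version of the self-affirmation idea (the multiplicative form and all parameter values are assumptions, not the paper's fitted model): each match is a sequence of Bernoulli trials whose scoring probability grows with the number of goals already scored, which fattens the tail of the goal distribution relative to a plain binomial.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def match_goals(p0=0.025, kappa=0.35, steps=90, n_matches=100000):
    """Self-affirmation toy model: in each of `steps` Bernoulli trials per
    match the scoring probability is p0 * (1 + kappa)**goals_so_far, so every
    goal makes the next one more likely.  Parameter values are illustrative."""
    goals = np.zeros(n_matches, dtype=int)
    for _ in range(steps):
        p = np.minimum(1.0, p0 * (1.0 + kappa) ** goals)
        goals += (rng.random(n_matches) < p).astype(int)
    return goals

g = match_goals()
print("mean goals per team:", round(float(g.mean()), 2))
print("model    P(goals >= 6):", round(float(np.mean(g >= 6)), 4))
print("binomial P(goals >= 6):", round(1 - sum(
    math.comb(90, k) * 0.025**k * 0.975**(90 - k) for k in range(6)), 4))
```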

  19. Methods for the identification of material parameters in distributed models for flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.; Rosen, I. G.

    1986-01-01

    Theoretical and numerical results are presented for inverse problems involving estimation of spatially varying parameters such as stiffness and damping in distributed models for elastic structures such as Euler-Bernoulli beams. An outline of algorithms used and a summary of computational experiences are presented.

  20. Normal-Gamma-Bernoulli Peak Detection for Analysis of Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry Data.

    PubMed

    Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang

    2017-01-01

    Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much increased separation power for the analysis of complex samples and is thus increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide application of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized using the gamma distribution, and a new peak detection algorithm using the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches, namely the fast Fourier transform (FFT) and the first-order and second-order delta methods (D1 and D2), are introduced. Applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs the best in terms of both computational expense and peak detection performance.

  1. Bernoulli's Principle

    ERIC Educational Resources Information Center

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  2. H∞ control for uncertain linear system over networks with Bernoulli data dropout and actuator saturation.

    PubMed

    Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping

    2018-03-01

    This paper investigates the H∞ control problem for uncertain linear systems over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli distributed white sequence with a known conditional probability distribution, and the actuator saturation is confined in a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state-feedback-based H∞ controller and the observer-based H∞ controller are proposed in the form of non-convex matrix inequalities to take the random data dropout and actuator saturation into consideration simultaneously, and the problem of non-convex feasibility is solved by applying the cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed new design techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  3. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  4. Exploring the Sums of Powers of Consecutive q-Integers

    ERIC Educational Resources Information Center

    Kim, T.; Ryoo, C. S.; Jang, L. C.; Rim, S. H.

    2005-01-01

    The Bernoulli numbers are among the most interesting and important number sequences in mathematics. They first appeared in the posthumous work "Ars Conjectandi" (1713) by Jacob Bernoulli (1654-1705) in connection with sums of powers of consecutive integers (Bernoulli, 1713; or Smith, 1959). Bernoulli numbers are particularly important in number…

  5. Bridging the Gap Between Stationary Homogeneous Isotropic Turbulence and Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Sohrab, Siavash

    A statistical theory of stationary isotropic turbulence is presented with eddies possessing a Gaussian velocity distribution, a Maxwell-Boltzmann speed distribution in harmony with perceptions of Heisenberg, and a Planck energy distribution in harmony with perceptions of Chandrasekhar and in agreement with experimental observations of Van Atta and Chen. Defining the action S = −mΦ in terms of the velocity potential of atomic motion, a scale-invariant Schrödinger equation is derived from the invariant Bernoulli equation. Thus, the gap between the problems of turbulence and quantum mechanics is closed through connections between the Cauchy-Euler-Bernoulli equations of hydrodynamics, the Hamilton-Jacobi equation of classical mechanics, and finally the Schrödinger equation of quantum mechanics. Transitions of a particle (molecular cluster c_ji) from a small rapidly oscillating eddy e_j (high energy level j) to a large slowly oscillating eddy e_i (low energy level i) lead to emission of a sub-particle (molecule m_ji) that carries away the excess energy ε_ji = h(ν_j − ν_i), in harmony with the Bohr theory of atomic spectra. NASA Grant No. NAG3-1863.

  6. Discriminative Bayesian Dictionary Learning for Classification.

    PubMed

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and object and scene-category classification using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.

  7. Interpretation of Bernoulli's Equation.

    ERIC Educational Resources Information Center

    Bauman, Robert P.; Schwaneberg, Rolf

    1994-01-01

    Discusses Bernoulli's equation with regards to: horizontal flow of incompressible fluids, change of height of incompressible fluids, gases, liquids and gases, and viscous fluids. Provides an interpretation, properties, terminology, and applications of Bernoulli's equation. (MVL)

  8. Distributed Market-Based Algorithms for Multi-Agent Planning with Shared Resources

    DTIC Science & Technology

    2013-02-01

    [Table of contents fragment: 1. Introduction; 2. Distributed Market-Based Multi-Agent Planning; 2.1 Problem Formulation] ... over the deterministic planner, on the "test set" of scenarios with changing economies ... Chapter 1, Introduction: Multi-agent planning is ... representation of the objective (4.2.1). For example, for the supply chain management problem, we assumed a sequence of Bernoulli coin flips, which seems

  9. Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.

    PubMed

    Yamazaki, Keisuke; Kaji, Daisuke

    2013-08-01

    Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method; as a result, prediction, including construction of the posterior, is not tractable, although the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it yields a tractable posterior on the basis of the variational free energy function. The asymptotic behavior has been studied in many hierarchical models, and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models, and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not been clarified yet. The present paper precisely analyzes the Bayes free energy function of Bernoulli mixtures. Comparing the free energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results show that the Bayes free energy has the same learning types while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
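
    The estimation primitive behind Bernoulli sampling is a binomial tail bound. The sketch below shows only that elementary ingredient, a Clopper-Pearson upper confidence bound on the error rate from Bernoulli-sampled rounds; it is not the paper's finite-key security bound, and the numbers are illustrative.

```python
from scipy import stats

def binomial_upper_bound(k, n, eps=1e-10):
    """Clopper-Pearson upper confidence bound on a Bernoulli parameter after
    observing k errors in n Bernoulli-sampled rounds, valid except with
    probability eps.  Only the elementary binomial-tail primitive that a
    Bernoulli-sampling-based finite-key analysis builds on, not the paper's
    actual security bound."""
    if k >= n:
        return 1.0
    return float(stats.beta.ppf(1.0 - eps, k + 1, n - k))

# Illustrative usage: 120 errors observed among 10^5 sampled rounds
print(binomial_upper_bound(120, 100000))
```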

  11. Numerical solutions for Helmholtz equations using Bernoulli polynomials

    NASA Astrophysics Data System (ADS)

    Bicer, Kubra Erdem; Yalcinbas, Salih

    2017-07-01

    This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.

  12. Bernoulli's Principle: Science as a Human Endeavor

    ERIC Educational Resources Information Center

    McCarthy, Deborah

    2008-01-01

    What do the ideas of Daniel Bernoulli--an 18th-century Swiss mathematician, physicist, natural scientist, and professor--and your students' next landing of the space shuttle via computer simulation have in common? Because of his contribution, referred to in physical science as Bernoulli's principle, modern flight is possible. The mini learning-cycle…

  13. The Counter-Intuitive Non-Informative Prior for the Bernoulli Family

    ERIC Educational Resources Information Center

    Zhu, Mu; Lu, Arthur Y.

    2004-01-01

    In Bayesian statistics, the choice of the prior distribution is often controversial. Different rules for selecting priors have been suggested in the literature, which, sometimes, produce priors that are difficult for the students to understand intuitively. In this article, we use a simple heuristic to illustrate to the students the rather…

  14. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy- and computational-efficiency engine: a mixed-mode processor for which functions can be compiled at significantly lower energy costs using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach extends to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
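
    The step from Bernoulli to exponential variates mentioned above can be sketched in software (the FPAA itself is analog, so this is only a conceptual stand-in with illustrative parameters): the waiting time to the first success in a Bernoulli(p) stream is geometric, and rescaled by the trial period it approaches an exponential distribution as p becomes small; arbitrary distributions then follow by inverse-CDF transformation.

```python
import numpy as np

rng = np.random.default_rng(4)

def exponential_from_bernoulli(p, dt, n_trials):
    """Derive (approximately) exponential variates from a Bernoulli stream:
    gaps between successes are geometric(p), and geometric * dt approaches
    Exponential(rate = p/dt) for small p.  A conceptual sketch of the claim
    in the record above, not the FPAA circuit."""
    bits = rng.random(n_trials) < p                  # the Bernoulli(p) stream
    hits = np.flatnonzero(bits)
    gaps = np.diff(np.concatenate(([-1], hits)))     # trials between successes
    return gaps * dt

samples = exponential_from_bernoulli(p=0.01, dt=1.0, n_trials=2_000_000)
print("sample mean:", round(float(samples.mean()), 1), "(expected about", 1 / 0.01, ")")
```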

  15. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  16. A Short History of Probability Theory and Its Applications

    ERIC Educational Resources Information Center

    Debnath, Lokenath; Basu, Kanadpriya

    2015-01-01

    This paper deals with a brief history of probability theory and its applications to Jacob Bernoulli's famous law of large numbers and theory of errors in observations or measurements. Included are the major contributions of Jacob Bernoulli and Laplace. It is written to pay the tricentennial tribute to Jacob Bernoulli, since the year 2013…

  17. Linear stochastic Schrödinger equations in terms of quantum Bernoulli noises

    NASA Astrophysics Data System (ADS)

    Chen, Jinshu; Wang, Caishi

    2017-05-01

    Quantum Bernoulli noises (QBN) are the family of annihilation and creation operators acting on Bernoulli functionals, which satisfy a canonical anti-commutation relation. In this paper, we study linear stochastic Schrödinger equations (LSSEs) associated with QBN in the space of square integrable complex-valued Bernoulli functionals. We first rigorously prove a formula concerning the number operator N on Bernoulli functionals. Then, by using this formula as well as Mora and Rebolledo's results on a general LSSE [C. M. Mora and R. Rebolledo, Infinite Dimens. Anal. Quantum Probab. Relat. Top. 10, 237-259 (2007)], we obtain an easily checked condition for a LSSE associated with QBN to have a unique Nr-strong solution of mean square norm conservation for given r ≥ 0. Finally, as an application of this condition, we examine a special class of LSSEs associated with QBN, and some further results are proven.

  18. Who Solved the Bernoulli Differential Equation and How Did They Do It?

    ERIC Educational Resources Information Center

    Parker, Adam E.

    2013-01-01

    The Bernoulli brothers, Jacob and Johann, and Leibniz: Any of these might have been first to solve what is called the Bernoulli differential equation. We explore their ideas and the chronology of their work, finding out, among other things, that variation of parameters was used in 1697, 78 years before 1775, when Lagrange introduced it in general.

  19. Bernoulli in the operating room: from the perspective of a cardiac surgeon.

    PubMed

    Matt, Peter

    2014-12-01

    The Bernoullis were one of the most distinguished families in the history of science. It was Daniel Bernoulli who applied mathematical physics to medicine to further his understanding of physiological mechanisms that have an impact even in today's high-end medicine. His masterwork was the analysis of fluid dynamics, which resulted in Bernoulli's law. Most important for cardiac surgery, it describes how a centrifugal pump works within an extracorporeal circulation, lays the basis for measuring a gradient over a stenotic heart valve, and explains how to measure the transit time flow within a bypass graft. Georg Thieme Verlag KG Stuttgart · New York.

  20. Evaluation of aerodynamic characteristics of a coupled fluid-structure system using generalized Bernoulli's principle: An application to vocal folds vibration.

    PubMed

    Zhang, Lucy T; Yang, Jubiao

    2016-12-01

    In this work we explore the aerodynamic flow characteristics of a coupled fluid-structure interaction system using a generalized Bernoulli equation derived directly from the Cauchy momentum equations. Unlike the conventional Bernoulli equation, where incompressible, inviscid, and steady flow conditions are assumed, this generalized Bernoulli equation includes the contributions from compressibility, viscosity, and unsteadiness, which could be essential in defining aerodynamic characteristics. The derived Bernoulli's principle is applied to a fully-coupled fluid-structure interaction simulation of vocal fold vibration. The coupled system is simulated using the immersed finite element method, where the compressible Navier-Stokes equations are used to describe the air and an elastic pliable structure describes the vocal fold. The vibration of the vocal fold works to open and close the glottal flow. The aerodynamic flow characteristics are evaluated using the derived Bernoulli's principle over a vibration cycle in a carefully partitioned control volume based on the moving structure. The results agree very well with experimental observations, which validates the strategy and its use in other types of flow characteristics that involve coupled fluid-structure interactions.

  1. Nonlinear earthquake analysis of reinforced concrete frames with fiber and Bernoulli-Euler beam-column element.

    PubMed

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is researched for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained by using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predictor-corrector form of the Bossak-α method is applied for the dynamic integration scheme. A comparison of experimental data of an RC column element with numerical results obtained from the proposed solution technique is studied for verification of the numerical solutions. Furthermore, nonlinear cyclic analysis results of a portal reinforced concrete frame are used to compare the proposed solution technique with the fibre element based on the flexibility method. Finally, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are researched.
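
    For context on the Euler-Bernoulli element used above, the textbook linear-elastic element stiffness matrix (Hermitian cubic shape functions, two nodes each with a transverse displacement and a rotation) is sketched below. This is the standard closed form, not the paper's fiber or rigidity-method formulation; the values of E, I, and L are illustrative.

```python
import numpy as np

def euler_bernoulli_stiffness(E, I, L):
    """Textbook 4x4 stiffness matrix of a 2-node Euler-Bernoulli beam element
    (DOFs: w1, theta1, w2, theta2).  Standard linear-elastic element, not the
    fiber/rigidity-method formulation of the record above."""
    k = E * I / L**3
    return k * np.array([
        [ 12.0,   6*L,  -12.0,   6*L  ],
        [  6*L,   4*L*L, -6*L,   2*L*L],
        [-12.0,  -6*L,   12.0,  -6*L  ],
        [  6*L,   2*L*L, -6*L,   4*L*L],
    ])

# Illustrative usage: concrete-like E, a rectangular-section I, 3 m element
print(euler_bernoulli_stiffness(E=30e9, I=5e-4, L=3.0))
```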

  2. A new equilibrium torus solution and GRMHD initial conditions

    NASA Astrophysics Data System (ADS)

    Penna, Robert F.; Kulkarni, Akshay; Narayan, Ramesh

    2013-11-01

    Context. General relativistic magnetohydrodynamic (GRMHD) simulations are providing influential models for black hole spin measurements, gamma ray bursts, and supermassive black hole feedback. Many of these simulations use the same initial condition: a rotating torus of fluid in hydrostatic equilibrium. A persistent concern is that simulation results sometimes depend on arbitrary features of the initial torus. For example, the Bernoulli parameter (which is related to outflows), appears to be controlled by the Bernoulli parameter of the initial torus. Aims: In this paper, we give a new equilibrium torus solution and describe two applications for the future. First, it can be used as a more physical initial condition for GRMHD simulations than earlier torus solutions. Second, it can be used in conjunction with earlier torus solutions to isolate the simulation results that depend on initial conditions. Methods: We assume axisymmetry, an ideal gas equation of state, constant entropy, and ignore self-gravity. We fix an angular momentum distribution and solve the relativistic Euler equations in the Kerr metric. Results: The Bernoulli parameter, rotation rate, and geometrical thickness of the torus can be adjusted independently. Our torus tends to be more bound and have a larger radial extent than earlier torus solutions. Conclusions: While this paper was in preparation, several GRMHD simulations appeared based on our equilibrium torus. We believe it will continue to provide a more realistic starting point for future simulations.

  3. Theoretical study on a Miniature Joule-Thomson & Bernoulli Cryocooler

    NASA Astrophysics Data System (ADS)

    Xiong, L. Y.; Kaiser, G.; Binneberg, A.

    2004-11-01

    In this paper, a microchannel-based cryocooler consisting of a compressor, a recuperator, and a cold heat exchanger has been developed to study the feasibility of cryogenic cooling by the use of the Joule-Thomson effect and the Bernoulli effect. A set of governing equations including Bernoulli equations and energy equations is introduced, and the performance of the cooler is calculated. The influences of some working conditions and structural parameters on the performance of the cooler are discussed in detail.

  4. Beltrami–Bernoulli equilibria in plasmas with degenerate electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berezhiani, V. I., E-mail: vazhab@yahoo.com; Shatashvili, N. L., E-mail: shatash@ictp.it; Mahajan, S. M., E-mail: mahajan@mail.utexas.edu

    2015-02-15

    A new class of Double Beltrami–Bernoulli equilibria, sustained by electron degeneracy pressure, is investigated. It is shown that due to electron degeneracy, a nontrivial Beltrami–Bernoulli equilibrium state is possible even for a zero temperature plasma. These states are, conceptually, studied to show the existence of new energy transformation pathways converting, for instance, the degeneracy energy into fluid kinetic energy. Such states may be of relevance to compact astrophysical objects like white dwarfs, neutron stars, etc.

  5. A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weerakkody, Sean; Ozel, Omur; Sinopoli, Bruno

    We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.

  6. A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, a succession of collision schemes has been put forward, mathematically based on the Kac stochastic model. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of considered pairs for a possible collision in the above-mentioned schemes varies between N(l)(N(l) - 1)/2 in BT, 1 in BB, and (N(l) - 1) in SBT or ISBT, where N(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N(l) - 1), i.e., Nsel < (N(l) - 1), keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, where both formulas recover the BB and SBT limits if Nsel is set to 1 and N(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no-time-counter (NTC) and nearest-neighbor (NN) collision models.
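
    A schematic of the Bernoulli-Trial pair-selection idea (not the paper's GBT derivation): within a cell the particles are shuffled, a limited number of pairs is examined, and each examined pair collides with a probability proportional to its relative speed. Examining N - 1 pairs mimics SBT, fewer pairs mimics the generalized scheme; the acceptance coefficient below is a placeholder lumping together sigma*dt/Vcell and the scheme's normalization.

```python
import numpy as np

rng = np.random.default_rng(5)

def bernoulli_trial_collisions(velocities, coeff, n_sel=None):
    """Schematic Bernoulli-Trial pair selection in one DSMC cell.  Particles
    are shuffled; each of the first n_sel particles is tested against one
    random later partner, and the pair collides with probability
    coeff * |relative velocity|, capped at 1.  n_sel = N-1 mimics SBT; a
    smaller n_sel mimics the generalized (GBT) idea of testing fewer pairs.
    `coeff` is a placeholder, not the scheme's actual normalization."""
    v = velocities[rng.permutation(len(velocities))]
    n = len(v)
    n_sel = n - 1 if n_sel is None else n_sel
    collisions = []
    for i in range(n_sel):
        j = rng.integers(i + 1, n)                 # one random later partner
        g = np.linalg.norm(v[i] - v[j])            # relative speed
        if rng.random() < min(1.0, coeff * g):
            collisions.append((i, j))
    return collisions

# Illustrative usage: 30 particles with Maxwellian-like velocities
v = rng.normal(size=(30, 3))
print(bernoulli_trial_collisions(v, coeff=0.05))
```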

  7. Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.

    PubMed

    Campos, Daniel G

    2018-02-01

    This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing-the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective into the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique to make experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Quantum Markov semigroups constructed from quantum Bernoulli noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Caishi; Chen, Jinshu

    2016-02-15

    Quantum Bernoulli noises (QBNs) are the family of annihilation and creation operators acting on Bernoulli functionals, which can describe a two-level quantum system with infinitely many sites. In this paper, we consider the problem of constructing quantum Markov semigroups (QMSs) directly from QBNs. We first establish several new theorems concerning QBNs. In particular, we define the number operator acting on Bernoulli functionals by using the canonical orthonormal basis, prove its self-adjointness, and describe precisely its connections with QBNs in a mathematically rigorous way. We then show the possibility of constructing a QMS directly from QBNs. This is done by combining the general results on QMSs with our new results on QBNs obtained here. Finally, we examine some properties of the QMS constructed from QBNs.

  9. Fragility Analysis of a Concrete Gravity Dam Embedded in Rock and Its System Response Curve Computed by the Analytical Program GDLAD_Foundation

    DTIC Science & Technology

    2012-06-01

    According to the Bernoulli equation for ideal flows, i.e. steady, frictionless, incompressible flows, the total head, H, at any point can be determined ... centerline and using the Bernoulli equation for ideal flow with an assumption that the velocity is small, the total head equals the pressure head ...

  10. Chaotic dynamics of flexible Euler-Bernoulli beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awrejcewicz, J., E-mail: awrejcew@p.lodz.pl; Krysko, A. V., E-mail: anton.krysko@gmail.com; Kutepov, I. E., E-mail: iekutepov@gmail.com

    2013-12-15

    Mathematical modeling and analysis of the spatio-temporal chaotic dynamics of flexible simple and curved Euler-Bernoulli beams are carried out. The Kármán-type geometric non-linearity is considered. The partial differential equations which govern the dynamics of the studied objects, together with the associated boundary value problems, are reduced to the Cauchy problem through both the Finite Difference Method with approximation of O(c^2) and the Finite Element Method. The obtained Cauchy problem is solved via the fourth- and sixth-order Runge-Kutta methods. Validity and reliability of the results are rigorously discussed. Analysis of the chaotic dynamics of flexible Euler-Bernoulli beams for a series of boundary conditions is carried out with the help of the qualitative theory of differential equations. We analyze time histories, phase and modal portraits, autocorrelation functions, the Poincaré and pseudo-Poincaré maps, signs of the first four Lyapunov exponents, as well as the compression factor of the phase volume of an attractor. A novel scenario of transition from periodicity to chaos is obtained, and a transition from chaos to hyper-chaos is illustrated. In particular, we study and explain the phenomenon of transition from symmetric to asymmetric vibrations. Vibration-type charts are given with respect to two control parameters: the amplitude q_0 and the frequency ω_p of the uniformly distributed periodic excitation. Furthermore, we detected and illustrated how the so-called temporal-space chaos develops following the transition from regular to chaotic system dynamics.

  11. Nonlinear Earthquake Analysis of Reinforced Concrete Frames with Fiber and Bernoulli-Euler Beam-Column Element

    PubMed Central

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is researched for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained by using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predictor-corrector form of the Bossak-α method is applied for the dynamic integration scheme. A comparison of experimental data of an RC column element with numerical results obtained from the proposed solution technique is studied for verification of the numerical solutions. Furthermore, nonlinear cyclic analysis results of a portal reinforced concrete frame are used to compare the proposed solution technique with the fibre element based on the flexibility method. Finally, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are researched. PMID:24578667

  12. Bernoulli, Darwin, and Sagan: the probability of life on other planets

    NASA Astrophysics Data System (ADS)

    Rossmo, D. Kim

    2017-04-01

    The recent discovery that billions of planets in the Milky Way Galaxy may lie in circumstellar habitable zones has renewed speculation over the possibility of extraterrestrial life. The Drake equation is a probabilistic framework for estimating the number of technologically advanced civilizations in our Galaxy; however, many of the equation's component probabilities are either unknown or have large error intervals. In this paper, a different method of examining this question is explored, one that replaces the various Drake factors with a single estimate for the probability of life existing on Earth. This relationship can be described by the binomial distribution if the presence of life on a given number of planets is equated to successes in a Bernoulli trial. The question of exoplanet life may then be reformulated as follows: given the probability of one or more independent successes for a given number of trials, what is the probability of two or more successes? Some of the implications of this approach for finding life on exoplanets are discussed.
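
    The reformulation described above is elementary binomial algebra, sketched below under an illustrative assumption for the number of habitable-zone planets: from P(at least one success) over n Bernoulli trials, back out the per-trial probability p = 1 - (1 - P1)^(1/n), then evaluate P(at least two successes).

```python
def prob_at_least_two(p_at_least_one, n_trials):
    """Sketch of the abstract's reformulation: infer the per-planet probability
    p from P(>=1 success) over n Bernoulli trials, then return P(>=2 successes).
    The value of n_trials (habitable-zone planets) is illustrative only."""
    p = 1.0 - (1.0 - p_at_least_one) ** (1.0 / n_trials)
    p_zero = (1.0 - p) ** n_trials
    p_one = n_trials * p * (1.0 - p) ** (n_trials - 1)
    return 1.0 - p_zero - p_one

# The answer is sensitive to how close P(>=1) is taken to 1; illustrative numbers
for p1 in (0.5, 0.9, 0.99):
    print(p1, "->", round(prob_at_least_two(p1, n_trials=10_000_000_000), 4))
```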

  13. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.

    PubMed

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J

    2017-03-03

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  14. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods

    PubMed Central

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J.

    2017-01-01

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter. PMID:28273796

  15. Approximation techniques for parameter estimation and feedback control for distributed models of large flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1984-01-01

    Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.

  16. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livadiotis, G., E-mail: glivadiotis@swri.edu

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  17. Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.

    PubMed

    Decker, Gifford Z; Thomson, Scott L

    2007-05-01

    The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.

  18. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  19. Proceedings of the Annual Symposium on Frequency Control (33rd) Held in Atlantic City, New Jersey on 30 May-1 June 1979

    DTIC Science & Technology

    1979-01-01

    Among the contributions from the Bernoullis was Daniel Bernoulli's addition of the acceleration term to the beam equation. The theory of isotropic membranes and plates (low frequencies), due to Euler, Jacques Bernoulli, Germain, and Lagrange, was improved during 1811-1816 by Germain and Lagrange, and the correct derivation was finally produced in 1852 by G. Lamé ("Leçons sur la ...").

  20. An Illustration of the Bernoulli Effect With a Rubber Tube

    ERIC Educational Resources Information Center

    Hanson, M. J.

    1973-01-01

    Describes a simple method of demonstrating the Bernoulli effect, by spinning a length of rubber tubing around one's head. A manometer attached to the stationary end of the tube indicates a reduction in pressure. (JR)

  1. THE BERNOULLI EQUATION AND COMPRESSIBLE FLOW THEORIES

    EPA Science Inventory

    The incompressible Bernoulli equation is an analytical relationship between pressure, kinetic energy, and potential energy. As perhaps the simplest and most useful statement for describing laminar flow, it buttresses numerous incompressible flow models that have been developed ...

  2. Geometrical study of phyllotactic patterns by Bernoulli spiral lattices.

    PubMed

    Sushida, Takamichi; Yamagishi, Yoshikazu

    2017-06-01

    Geometrical studies of phyllotactic patterns deal with the centric or cylindrical models produced by ideal lattices. van Iterson (Mathematische und mikroskopisch-anatomische Studien über Blattstellungen nebst Betrachtungen über den Schalenbau der Miliolinen, Verlag von Gustav Fischer, Jena, 1907) suggested a centric model representing ideal phyllotactic patterns as disk packings of Bernoulli spiral lattices and presented a phase diagram, now called Van Iterson's diagram, explaining the bifurcation processes of their combinatorial structures. Geometrical properties of disk packings were shown by Rothen & Koch (J. Phys France, 50(13), 1603-1621, 1989). In contrast, as another centric model, we organized a mathematical framework of Voronoi tilings of Bernoulli spiral lattices and showed mathematically that the phase diagram of a Voronoi tiling is graph-theoretically dual to Van Iterson's diagram. This paper gives a review of the two centric models: disk packings and Voronoi tilings of Bernoulli spiral lattices.

  3. Hydraulic jump and Bernoulli equation in nonlinear shallow water model

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Yih

    2018-06-01

    A shallow water model was applied to study the hydraulic jump and the Bernoulli equation across the jump. On flat terrain, when a supercritical flow plunges into a subcritical flow, a discontinuity develops in velocity and in the Bernoulli function across the jump. The shock generated by the obstacle may propagate downstream and upstream. The latter, reflected from the inflow boundary, moves downstream and leaves the domain. Before the reflected wave reaches the obstacle, the short-term integration (i.e., quasi-steady) simulations agree with Houghton and Kasahara's results, which may have unphysical complex solutions. The quasi-steady flow is quickly disturbed by the reflected wave; the flow finally reaches a steady state and becomes critical, without complex solutions. The results also indicate that the Bernoulli function is discontinuous across the jump, but the potential of the mass flux remains constant. The latter can be used to predict velocity/height in a steady flow.
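
    As a point of reference, the classical shallow-water jump relations already exhibit the behaviour summarized above: the mass flux q = u*h is continuous across the jump while the Bernoulli function B = g*h + u^2/2 drops. A minimal sketch using these textbook relations (not the paper's numerical model):

```python
from math import sqrt

g = 9.81

def hydraulic_jump(h1, u1):
    """Classical shallow-water jump relations: mass flux q = u*h is conserved,
    the momentum balance gives the conjugate depth, and the Bernoulli function
    B = g*h + u**2/2 decreases across the jump."""
    fr1 = u1 / sqrt(g * h1)                            # upstream Froude number
    h2 = 0.5 * h1 * (sqrt(1.0 + 8.0 * fr1**2) - 1.0)   # conjugate depth
    u2 = u1 * h1 / h2                                  # continuity (mass flux)
    B1 = g * h1 + 0.5 * u1**2
    B2 = g * h2 + 0.5 * u2**2
    return h2, u2, B1, B2

print(hydraulic_jump(h1=1.0, u1=5.0))   # supercritical inflow: B2 < B1
```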

  4. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by... distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic... Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the

  5. Bernoulli's Principle Applied to Brain Fluids: Intracranial Pressure Does Not Drive Cerebral Perfusion or CSF Flow.

    PubMed

    Schmidt, Eric; Ros, Maxime; Moyse, Emmanuel; Lorthois, Sylvie; Swider, Pascal

    2016-01-01

    In line with the first law of thermodynamics, Bernoulli's principle states that the total energy in a fluid is the same at all points. We applied Bernoulli's principle to understand the relationship between intracranial pressure (ICP) and intracranial fluids. We analyzed simple fluid physics along a tube to describe the interplay between pressure and velocity. Bernoulli's equation demonstrates that a fluid does not flow along a gradient of pressure or velocity; a fluid flows along a gradient of energy from a high-energy region to a low-energy region. A fluid can even flow against a pressure gradient or a velocity gradient. Pressure and velocity represent part of the total energy. Cerebral blood perfusion is not driven by pressure but by energy: the blood flows from high-energy to lower-energy regions. Hydrocephalus is related to increased cerebrospinal fluid (CSF) resistance (i.e., energy transfer) at various points. Identification of the energy transfer within the CSF circuit is important in understanding and treating CSF-related disorders. Bernoulli's principle is not an abstract concept far from clinical practice. We should be aware that pressure is easy to measure, but it does not induce resumption of fluid flow. Even at the bedside, energy is the key to understanding ICP and fluid dynamics.

  6. Elementary Hemodynamic Principles Based on Modified Bernoulli's Equation.

    ERIC Educational Resources Information Center

    Badeer, Henry S.

    1985-01-01

    Develops and expands basic concepts of Bernoulli's equation as it applies to vascular hemodynamics. Simple models are used to illustrate gravitational potential energy, steady nonturbulent flow, pump-driven streamline flow, and other areas. Relationships to the circulatory system are also discussed. (DH)

  7. Bernoulli? Perhaps, but What about Viscosity?

    ERIC Educational Resources Information Center

    Eastwell, Peter

    2007-01-01

    Bernoulli's principle is being misunderstood and consequently misused. This paper clarifies the issues involved, hypothesises as to how this unfortunate situation has arisen, provides sound explanations for many everyday phenomena involving moving air, and makes associated recommendations for teaching the effects of moving fluids.

  8. Improved implementation of the risk-adjusted Bernoulli CUSUM chart to monitor surgical outcome quality.

    PubMed

    Keefe, Matthew J; Loda, Justin B; Elhabashy, Ahmad E; Woodall, William H

    2017-06-01

    The traditional implementation of the risk-adjusted Bernoulli cumulative sum (CUSUM) chart for monitoring surgical outcome quality requires waiting a pre-specified period of time after surgery before incorporating patient outcome information. We propose a simple but powerful implementation of the risk-adjusted Bernoulli CUSUM chart that incorporates outcome information as soon as it is available, rather than waiting a pre-specified period of time after surgery. A simulation study is presented that compares the performance of the traditional implementation of the risk-adjusted Bernoulli CUSUM chart to our improved implementation. We show that incorporating patient outcome information as soon as it is available leads to quicker detection of process deterioration. Deterioration of surgical performance could be detected much sooner using our proposed implementation, which could lead to the earlier identification of problems.

  9. Blade Pressure Distribution for a Moderately Loaded Propeller.

    DTIC Science & Technology

    1980-09-01

    lifting surface, ft²; s, chordwise location as fraction of chord length; t, time, sec; t0, maximum thickness of blade, ft; U, free stream velocity, ft/sec (design...). Developed in Reference 1, the method takes into account the quadratic form of the Bernoulli equation, since the perturbation velocities are sometimes of the... normal derivatives at the loading and control point, respectively. It should be noted that the time factor has been eliminated from both sides of Eq. (3).

  10. Simulating aggregates of bivalents in 2n = 40 mouse meiotic spermatocytes through inhomogeneous site percolation processes.

    PubMed

    Berríos, Soledad; López Fenner, Julio; Maignan, Aude

    2018-06-19

    We show that an inhomogeneous Bernoulli site percolation process running upon a fullerene's dual [Formula: see text] can be used for representing bivalents attached to the nuclear envelope in mouse Mus M. Domesticus 2n = 40 meiotic spermatocytes during pachytene. It is shown that the induced clustering generated by overlapping percolation domains correctly reproduces the probability distribution observed in the experiments (data) after fine tuning the parameters.
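
    As a rough illustration of an inhomogeneous Bernoulli site percolation process and the clusters it induces, here is a minimal sketch on a toy square grid (the study itself works on the fullerene's dual graph, and the opening-probability profile below is made up):

```python
import random

def site_percolation_clusters(width, height, p_open, seed=0):
    """Inhomogeneous Bernoulli site percolation on a width x height grid:
    each site (x, y) is opened independently with probability p_open(x, y).
    Returns the sizes of the connected clusters of open sites (4-neighbour
    adjacency), found by a simple depth-first search."""
    rng = random.Random(seed)
    open_sites = {(x, y) for x in range(width) for y in range(height)
                  if rng.random() < p_open(x, y)}
    clusters, seen = [], set()
    for site in open_sites:
        if site in seen:
            continue
        stack, size = [site], 0
        seen.add(site)
        while stack:
            x, y = stack.pop()
            size += 1
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in open_sites and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        clusters.append(size)
    return sorted(clusters, reverse=True)

# Made-up inhomogeneity: opening probability increases toward the right edge
print(site_percolation_clusters(30, 30, lambda x, y: 0.3 + 0.4 * x / 29)[:5])
```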

  11. Curve Balls, Airplane Wings, and Prairie Dog Holes.

    ERIC Educational Resources Information Center

    Barnes, George B.

    1984-01-01

    Describes activities involving Bernoulli's principle which allow students to experience the difference between knowledge and scientific understanding. Explanations for each of the activities (using such materials as wooden spools, straws, soda bottles and table tennis balls) and explanations of the phenomena in terms of Bernoulli's principle are provided. (BC)

  12. Evaluating the Risks: A Bernoulli Process Model of HIV Infection and Risk Reduction.

    ERIC Educational Resources Information Center

    Pinkerton, Steven D.; Abramson, Paul R.

    1993-01-01

    A Bernoulli process model of human immunodeficiency virus (HIV) is used to evaluate infection risks associated with various sexual behaviors (condom use, abstinence, or monogamy). Results suggest that infection is best mitigated through measures that decrease infectivity, such as condom use. (SLD)
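
    The core of such a Bernoulli process model is the cumulative-risk formula 1 - (1 - a)^n for n independent exposures with per-act infectivity a. A minimal sketch with made-up per-act values (not the paper's estimates), illustrating why lowering infectivity outweighs reducing the number of acts:

```python
def cumulative_risk(per_act_infectivity, n_acts):
    """Bernoulli-process risk model: probability of at least one transmission
    over n_acts independent exposures with the given per-act infectivity."""
    return 1.0 - (1.0 - per_act_infectivity) ** n_acts

# Made-up numbers: a tenfold drop in per-act infectivity reduces the
# cumulative risk far more than halving the number of acts does.
print(cumulative_risk(0.001, 100))    # ~0.095
print(cumulative_risk(0.0001, 100))   # ~0.010
print(cumulative_risk(0.001, 50))     # ~0.049
```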

  13. Colonic transit time and pressure based on Bernoulli's principle.

    PubMed

    Uno, Yoshiharu

    2018-01-01

    Variations in the caliber of the human large intestinal tract cause changes in the pressure and velocity of its contents, depending on flow volume, gravity, and density, which are all variables in Bernoulli's principle. Therefore, it was hypothesized that constipation and diarrhea can occur due to changes in the colonic transit time (CTT), according to Bernoulli's principle. In addition, it was hypothesized that high amplitude peristaltic contractions (HAPC), which are considered to be involved in defecation in healthy subjects, occur because of cecal pressure based on Bernoulli's principle. A virtual healthy model (VHM), a virtual constipation model and a virtual diarrhea model were set up. For each model, the CTT was determined according to the length of each part of the colon and by calculating the velocity due to the cecal inflow volume. In the VHM, the pressure change was calculated and its consistency with HAPC was verified. The CTT changed according to the difference between the cecal inflow volume and the caliber of the intestinal tract, and was inversely proportional to the cecal inflow volume. Compared with the VHM, the CTT was prolonged in the virtual constipation model and shortened in the virtual diarrhea model. The calculated pressure of the VHM and the gradient of the interlocked graph were similar to those of HAPC. The CTT and HAPC can be explained by Bernoulli's principle, and constipation and diarrhea may be fundamentally influenced by flow dynamics.

  14. Modelling of Safety Instrumented Systems by using Bernoulli trials: towards the notion of odds on for SIS failures analysis

    NASA Astrophysics Data System (ADS)

    Cauffriez, Laurent

    2017-01-01

    This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Because a SIS is a system that is tested periodically, Bernoulli trials are a natural way to characterize its random failure process and thus to verify whether the experimentally obtained PFD (Probability of Failing Dangerously) agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to easily determine the ratio between "outcomes with success: failure of SIS" and "outcomes without success: no failure of SIS" and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared to Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by complementing the notion of PFD.
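
    A minimal sketch of the Bernoulli-trial view described here, treating each periodic test of the SIS as a trial; the PFD value and number of tests below are hypothetical, and this is not the paper's Petri-net reference model:

```python
def sis_failure_stats(pfd, n_tests):
    """Treat each periodic test of a SIS as a Bernoulli trial in which
    'success' (in the paper's sense) means the SIS fails to perform its
    function, occurring with probability pfd. Returns the expected number
    of such failures over the lifecycle and the 'odds on' ratio
    (failure : no failure)."""
    expected_failures = n_tests * pfd
    odds_on = pfd / (1.0 - pfd)
    return expected_failures, odds_on

# Hypothetical PFD of 5e-3 and 200 periodic tests over the lifecycle
print(sis_failure_stats(5e-3, 200))   # (1.0, ~0.005): failures are sporadic
```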

  15. On the continuity of the stationary state distribution of DPCM

    NASA Astrophysics Data System (ADS)

    Naraghi-Pour, Morteza; Neuhoff, David L.

    1990-03-01

    Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.

  16. Testing Bernoulli's Law

    ERIC Educational Resources Information Center

    Ivanov, Dragia; Nikolov, Stefan; Petrova, Hristina

    2014-01-01

    In this paper we present three different methods for testing Bernoulli's law that are different from the standard "tube with varying cross-section." They are all applicable to high-school level physics education, with varying levels of theoretical and experimental complexity, depending on students' skills, and may even be…

  17. Generalization of the Bernoulli ODE

    ERIC Educational Resources Information Center

    Azevedo, Douglas; Valentino, Michele C.

    2017-01-01

    In this note, we propose a generalization of the famous Bernoulli differential equation by introducing a class of nonlinear first-order ordinary differential equations (ODEs). We provide a family of solutions for this introduced class of ODEs and also we present some examples in order to illustrate the applications of our result.

  18. Energy efficiency analysis of the manipulation process by the industrial objects with the use of Bernoulli gripping devices

    NASA Astrophysics Data System (ADS)

    Savkiv, Volodymyr; Mykhailyshyn, Roman; Duchon, Frantisek; Mikhalishin, Mykhailo

    2017-11-01

    The article deals with the topical issue of reducing energy consumption in the transportation of industrial objects. The energy efficiency of the object manipulation process is studied using an orientation optimization method in combination with different gripping methods. An analysis of the influence of the components of the inertial forces acting on the manipulated object on the required force characteristics and energy consumption of a Bernoulli gripping device is proposed. The economic efficiency of using the optimal orientation of the Bernoulli gripping device while transporting the manipulated object, in comparison with transportation without re-orientation, is demonstrated.

  19. Thinking About Bernoulli

    NASA Astrophysics Data System (ADS)

    Kamela, Martin

    2007-09-01

    One of the most fun demonstrations in a freshman mechanics class is the levitation of a ball in a steady air stream even when the jet is directed at an angle. This and other demonstrations are often used to argue for the validity of Bernoulli's principle. As cautioned by some authors,2-4 however, it is important to avoid making sweeping statements such as "high speed implies lower pressure" with respect to interpreting the popular demonstrations. In this paper I present a demonstration that can be used in conjunction with the discussion of Bernoulli's principle to encourage students to consider assumptions carefully. Specifically, it shows that a correlation of high speed with lower fluid pressure is not true in general.

  20. Gap Flows through Idealized Topography. Part I: Forcing by Large-Scale Winds in the Nonrotating Limit.

    NASA Astrophysics Data System (ADS)

    Gabersek, Sasa.; Durran, Dale R.

    2004-12-01

    Gap winds produced by a uniform airstream flowing over an isolated flat-top ridge cut by a straight narrow gap are investigated by numerical simulation. On the scale of the entire barrier, the proportion of the oncoming flow that passes through the gap is relatively independent of the nondimensional mountain height, even over the range for which there is the previously documented transition from a “flow over the ridge” regime to a “flow around” regime. The kinematics and dynamics of the gap flow itself were investigated by examining mass and momentum budgets for control volumes at the entrance, central, and exit regions of the gap. These analyses suggest three basic behaviors: the linear regime (small nondimensional mountain height), in which there is essentially no enhancement of the gap flow; the mountain wave regime (values near 1.5), in which vertical mass and momentum fluxes play a crucial role in creating very strong winds near the exit of the gap; and the upstream-blocking regime (values near 5), in which lateral convergence generates the strongest winds near the entrance of the gap. Trajectory analysis of the flow in the strongest events, the mountain wave events, confirms the importance of net subsidence in creating high wind speeds. Neglect of vertical motion in applications of Bernoulli's equation to gap flows is shown to lead to unreasonable wind speed predictions whenever the temperature at the gap exit exceeds that at the gap entrance. The distribution of the Bernoulli function on an isentropic surface shows a correspondence between regions of high Bernoulli function and high wind speeds in the gap-exit jet, similar to that previously documented for shallow-water flow.


  1. Classic Bernoulli's Principle Derivation and Its Working Hypotheses

    ERIC Educational Resources Information Center

    Marciotto, Edson R.

    2016-01-01

    Bernoulli's principle states that the quantity p + ρgz + ρv²/2 must be conserved in a streamtube if certain conditions are met, namely: steady and irrotational flow of an inviscid and incompressible fluid. In most physics textbooks this result is demonstrated by invoking the energy conservation of a fluid material volume at two…

  2. Alternate Solution to Generalized Bernoulli Equations via an Integrating Factor: An Exact Differential Equation Approach

    ERIC Educational Resources Information Center

    Tisdell, C. C.

    2017-01-01

    Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem…

  3. Two Identities for the Bernoulli-Euler Numbers

    ERIC Educational Resources Information Center

    Gauthier, N.

    2008-01-01

    Two identities for the Bernoulli and for the Euler numbers are derived. These identities involve two special cases of central combinatorial numbers. The approach is based on a set of differential identities for the powers of the secant. Generalizations of the Mittag-Leffler series for the secant are introduced and used to obtain closed-form…

  4. Thinking about Bernoulli

    ERIC Educational Resources Information Center

    Kamela, Martin

    2007-01-01

    One of the most fun demonstrations in a freshman mechanics class is the levitation of a ball in a steady air stream even when the jet is directed at an angle. This and other demonstrations are often used to argue for the validity of Bernoulli's principle. As cautioned by some authors, however, it is important to avoid making sweeping statements…

  5. The Bernoulli Equation in a Moving Reference Frame

    ERIC Educational Resources Information Center

    Mungan, Carl E.

    2011-01-01

    Unlike other standard equations in introductory classical mechanics, the Bernoulli equation is not Galilean invariant. The explanation is that, in a reference frame moving with respect to constrictions or obstacles, those surfaces do work on the fluid, constituting an extra term that needs to be included in the work-energy calculation. A…

  6. [Work, momentum and fatigue in the work of Daniel Bernoulli: toward the optimization of biological fact].

    PubMed

    Fonteneau, Yannick; Viard, Jérôme

    The concept of mechanical work is inherited from the concepts of potentia absoluta and men's work, both implemented in the section IX of Daniel Bernoulli's Hydrodynamica in 1738. Nonetheless, Bernoulli did not confuse these two entities: he defined a link from gender to species between the former, which is general, and the latter, which is organic. In addition, Bernoulli clearly distinguished between vis viva and potentia absoluta (or work). Their reciprocal conversions are rarely mentioned explicitly in this book, except once, in the section X of his work, from vis viva to work, and subordinated to the mediation of a machine, in a driving forces substitution problem. His attitude evolved significantly in a text in 1753, in which work and vis viva were unambiguously connected, while the concept of potentia absoluta was reduced to that of human work, and the expression itself was abandoned. It was then accepted that work can be converted into vis viva, but the opposite is true in only one case, the intra-organic one. It is the concept of fatigue, seen as an expenditure of animal spirits themselves conceived of as little tensed springs releasing vis viva, that allowed the conversion, never quantified and listed simply as a model, from vis viva to work. Thus, work may have ultimately appeared as a transitional state between two kinds of vis viva, of which the first is non-quantifiable. At the same time, the natural elements were discredited from any hint of profitable production. Only men and animals were able to work in the strict sense of the word. Nature, left to itself, does not work, according to Bernoulli. In spite of his wish to bring together rational mechanics and practical mechanics, one perceived in the work of Bernoulli the subsistence of a rarely crossed disjunction between practical and theoretical fields.

  7. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
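
    A Monte Carlo counterpart of the quantity described here, estimating the distribution of distances between successes placed completely at random among a fixed number of discrete trials (this is a simulation sketch, not the package's exact closed-form distribution):

```python
import random
from collections import Counter

def simulated_gap_distribution(n_trials, n_successes, reps=20000, seed=1):
    """Estimate the distribution of distances between consecutive successes
    when n_successes are placed completely at random among n_trials discrete
    positions. A simulation counterpart of the exact distribution discussed
    in the abstract."""
    rng = random.Random(seed)
    gaps = Counter()
    for _ in range(reps):
        positions = sorted(rng.sample(range(n_trials), n_successes))
        for a, b in zip(positions, positions[1:]):
            gaps[b - a] += 1
    total = sum(gaps.values())
    return {d: count / total for d, count in sorted(gaps.items())}

# Short distances being over-represented relative to this null distribution
# would suggest clustering (e.g. grouping of genes within a genome).
print(simulated_gap_distribution(50, 5))
```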

  8. Euler and His Contribution to Number Theory

    ERIC Educational Resources Information Center

    Len, Amy; Scott, Paul

    2004-01-01

    Born in 1707, Leonhard Euler was the son of a Protestant minister from the vicinity of Basel, Switzerland. With the aim of pursuing a career in theology, Euler entered the University of Basel at the age of thirteen, where he was tutored in mathematics by Johann Bernoulli (of the famous Bernoulli family of mathematicians). He developed an interest…

  9. Flawed Applications of Bernoulli's Principle

    ERIC Educational Resources Information Center

    Koumaras, Panagiotis; Primerakis, Georgios

    2018-01-01

    One of the most popular demonstration experiments pertaining to Bernoulli's principle is the production of a water spray by using a vertical plastic straw immersed in a glass of water and a horizontal straw to blow air towards the top edge of the vertical one. A more general version of this phenomenon, appearing also in school physics problems, is…

  10. The solution of transcendental equations

    NASA Technical Reports Server (NTRS)

    Agrawal, K. M.; Outlaw, R.

    1973-01-01

    Some of the existing methods for globally approximating the roots of transcendental equations, namely Graeffe's method, are studied. Summation of the reciprocated roots, the Whittaker-Bernoulli method, and the extension of Bernoulli's method via Koenig's theorem are presented. The Aitken delta-squared process is used to accelerate the convergence. Finally, the suitability of these methods is discussed for various cases.

  11. The effect of model uncertainty on some optimal routing problems

    NASA Technical Reports Server (NTRS)

    Mohanty, Bibhu; Cassandras, Christos G.

    1991-01-01

    The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.

  12. Distributed flexibility in inertial swimmers

    NASA Astrophysics Data System (ADS)

    Floryan, Daniel; Rowley, Clarence W.; Smits, Alexander J.

    2017-11-01

    To achieve fast and efficient swimming, the flexibility of the propulsive surfaces is an important feature. To better understand the effects of distributed flexibility (either through inhomogeneous material properties, varying geometry, or both) we consider the coupled solid and fluid mechanics of the problem. Here, we develop a simplified model of a flexible swimmer, using Euler-Bernoulli theory to describe the solid, Theodorsen's theory to describe the fluid, and a Blasius boundary layer to incorporate viscous effects. Our primary aims are to understand how distributed flexibility affects the thrust production and efficiency of a swimmer with imposed motion at its leading edge. In particular, we examine the modal shapes of the swimmer to gain physical insight into the observed trends. Supported under ONR MURI Grant N00014-14-1-0533, Program Manager Robert Brizzolara.

  13. Monitoring surgical and medical outcomes: the Bernoulli cumulative SUM chart. A novel application to assess clinical interventions

    PubMed Central

    Leandro, G; Rolando, N; Gallus, G; Rolles, K; Burroughs, A

    2005-01-01

    Background: Monitoring clinical interventions is an increasing requirement in current clinical practice. The standard CUSUM (cumulative sum) charts are used for this purpose. However, they are difficult to use in terms of identifying the point at which outcomes begin to be outside recommended limits. Objective: To assess the Bernoulli CUSUM chart that permits not only a 100% inspection rate, but also the setting of average expected outcomes, maximum deviations from these, and false positive rates for the alarm signal to trigger. Methods: As a working example this study used 674 consecutive first liver transplant recipients. The expected one year mortality set at 24% from the European Liver Transplant Registry average. A standard CUSUM was compared with Bernoulli CUSUM: the control value mortality was therefore 24%, maximum accepted mortality 30%, and average number of observations to signal was 500—that is, likelihood of false positive alarm was 1:500. Results: The standard CUSUM showed an initial descending curve (nadir at patient 215) then progressively ascended indicating better performance. The Bernoulli CUSUM gave three alarm signals initially, with easily recognised breaks in the curve. There were no alarms signals after patient 143 indicating satisfactory performance within the criteria set. Conclusions: The Bernoulli CUSUM is more easily interpretable graphically and is more suitable for monitoring outcomes than the standard CUSUM chart. It only requires three parameters to be set to monitor any clinical intervention: the average expected outcome, the maximum deviation from this, and the rate of false positive alarm triggers. PMID:16210461
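
    A generic (non-risk-adjusted) Bernoulli CUSUM in the spirit of the chart described here; p0 = 0.24 and p1 = 0.30 follow the worked example above, while the alarm threshold h below is an illustrative placeholder rather than the value calibrated to an average of 500 observations to signal:

```python
import random
from math import log

def bernoulli_cusum(outcomes, p0=0.24, p1=0.30, h=3.5):
    """Generic Bernoulli CUSUM. outcomes is a sequence of 0/1 results
    (1 = adverse outcome, e.g. death within one year). p0 is the accepted
    in-control rate, p1 the maximum tolerated rate, and h an illustrative
    alarm threshold. Returns the 1-based indices at which alarms trigger."""
    s, alarms = 0.0, []
    for i, y in enumerate(outcomes, start=1):
        # log-likelihood ratio increment for a single Bernoulli observation
        w = y * log(p1 / p0) + (1 - y) * log((1 - p1) / (1 - p0))
        s = max(0.0, s + w)
        if s >= h:
            alarms.append(i)
            s = 0.0          # restart the chart after signalling
    return alarms

# Example with simulated outcomes at an elevated 40% adverse-outcome rate
random.seed(0)
print(bernoulli_cusum([1 if random.random() < 0.40 else 0 for _ in range(200)]))
```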

  14. Complementary Curves of Descent

    DTIC Science & Technology

    2012-11-16

    A common mechanics demonstration consists of racing cars or balls down tracks of various shapes and qualitatively or quantitatively measuring the... problem), which is self-complementary. A striking example is a straight wire whose complement is a lemniscate of Bernoulli. Alternatively, the wires can be tracks down which round objects undergo a rolling race. The level of presentation is...

  15. A Ritz approach for the static analysis of planar pantographic structures modeled with nonlinear Euler-Bernoulli beams

    NASA Astrophysics Data System (ADS)

    Andreaus, Ugo; Spagnuolo, Mario; Lekszycki, Tomasz; Eugster, Simon R.

    2018-04-01

    We present a finite element discrete model for pantographic lattices, based on a continuous Euler-Bernoulli beam for modeling the fibers composing the pantographic sheet. This model takes into account large displacements, rotations and deformations; the Euler-Bernoulli beam is described by using nonlinear interpolation functions, a Green-Lagrange strain for elongation and a curvature depending on elongation. On the basis of the introduced discrete model of a pantographic lattice, we perform some numerical simulations. We then compare the obtained results to an experimental BIAS extension test on a pantograph printed with polyamide PA2200. The pantographic structures involved in the numerical as well as in the experimental investigations are not proper fabrics: they are composed of just a few fibers, so as to theoretically allow the use of the Euler-Bernoulli beam theory in the description of the fibers. We compare the experiments to numerical simulations in which we allow the fibers to slide elastically with respect to one another at the interconnecting pivots. As a result, we obtain very good agreement between the numerical simulations based on the introduced model and the experimental measurements.

  16. Bernoulli Suction Effect on Soap Bubble Blowing?

    NASA Astrophysics Data System (ADS)

    Davidson, John; Ryu, Sangjin

    2015-11-01

    As a model system for thin-film bubble with two gas-liquid interfaces, we experimentally investigated the pinch-off of soap bubble blowing. Using the lab-built bubble blower and high-speed videography, we have found that the scaling law exponent of soap bubble pinch-off is 2/3, which is similar to that of soap film bridge. Because air flowed through the decreasing neck of soap film tube, we studied possible Bernoulli suction effect on soap bubble pinch-off by evaluating the Reynolds number of airflow. Image processing was utilized to calculate approximate volume of growing soap film tube and the volume flow rate of the airflow, and the Reynolds number was estimated to be 800-3200. This result suggests that soap bubbling may involve the Bernoulli suction effect.

  17. Ergodic properties of the multidimensional rayleigh gas with a semipermeable barrier

    NASA Astrophysics Data System (ADS)

    Erdős, L.; Tuyen, D. Q.

    1990-06-01

    We consider a multidimensional system consisting of a particle of mass M and radius r (molecule), surrounded by an infinite ideal gas of point particles of mass m (atoms). The molecule is confined to the unit ball and interacts with its boundary ( barrier) via elastic collision, while the atoms are not affected by the boundary. We obtain convergence to equilibrium for the molecule from almost every initial distribution on its position and velocity. Furthermore, we prove that the infinite composite system of the molecule and the atoms is Bernoulli.

  18. Nonlinear oscillations of inviscid free drops

    NASA Technical Reports Server (NTRS)

    Patzek, T. W.; Benner, R. E., Jr.; Basaran, O. A.; Scriven, L. E.

    1991-01-01

    The present analysis of free liquid drops' inviscid oscillations proceeds through solution of Bernoulli's equation to obtain the free surface shape and of Laplace's equation for the velocity potential field. Results thus obtained encompass drop-shape sequences, pressure distributions, particle paths, and the temporal evolution of kinetic and surface energies; accuracy is verified by the near-constant drop volume and total energy, as well as the diminutiveness of mass and momentum fluxes across drop surfaces. Further insight into the nature of oscillations is provided by Fourier power spectrum analyses of mode interactions and frequency shifts.

  19. Vehicle - Bridge interaction, comparison of two computing models

    NASA Astrophysics Data System (ADS)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants. One computing model represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped-mass model with one degree of freedom. The mid-span dynamic deflections of the bridge are calculated for both computing models. The results are mutually compared and quantitatively evaluated.

  20. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

    The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
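
    The exploration-exploitation trade-off described here can be illustrated with a Beta-Bernoulli Thompson-sampling sketch, a closely related Bayesian scheme (not necessarily the paper's Bayesian Learning Automaton):

```python
import random

def thompson_two_armed(true_probs, n_pulls=1000, seed=0):
    """Beta-Bernoulli Thompson sampling on a two-armed Bernoulli bandit:
    keep a Beta posterior per arm, sample from each posterior, pull the arm
    with the larger sample, then update that arm's posterior."""
    rng = random.Random(seed)
    alpha, beta = [1, 1], [1, 1]            # Beta(1, 1) priors for both arms
    total_reward = 0
    for _ in range(n_pulls):
        samples = [rng.betavariate(alpha[a], beta[a]) for a in (0, 1)]
        a = 0 if samples[0] >= samples[1] else 1
        reward = 1 if rng.random() < true_probs[a] else 0
        alpha[a] += reward
        beta[a] += 1 - reward
        total_reward += reward
    return total_reward

# Arm reward probabilities 0.4 and 0.6 (unknown to the agent)
print(thompson_two_armed([0.4, 0.6]))
```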

  1. Shape optimisation of an underwater Bernoulli gripper

    NASA Astrophysics Data System (ADS)

    Flint, Tim; Sellier, Mathieu

    2015-11-01

    In this work, we are interested in maximising the suction produced by an underwater Bernoulli gripper. Bernoulli grippers work by exploiting low pressure regions caused by the acceleration of a working fluid through a narrow channel, between the gripper and a surface, to provide a suction force. This mechanism allows for non-contact adhesion to various surfaces and may be used to hold a robot to the hull of a ship while it inspects welds, for example. A Bernoulli type pressure analysis was used to model the system with a Darcy friction factor approximation to include the effects of frictional losses. The analysis involved a constrained optimisation in order to avoid cavitation within the mechanism, which would result in decreased performance and damage to surfaces. A sensitivity based method and gradient descent approach was used to find the optimum shape of a discretised surface. The model's accuracy has been quantified against finite volume computational fluid dynamics simulation (ANSYS CFX) using the k-ω SST turbulence model. Preliminary results indicate significant improvement in suction force when compared to a simple geometry by retaining a pressure just above that at which cavitation would occur over as much surface area as possible.
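
    A frictionless, axisymmetric version of the Bernoulli-type pressure analysis described here (it omits the Darcy friction losses, the cavitation constraint, and the shape optimisation treated in the work; all geometric and flow values are made up):

```python
from math import pi
import numpy as np

rho = 1000.0                 # water density, kg/m^3
Q = 2.0e-4                   # volumetric flow rate through the gripper, m^3/s
h = 0.5e-3                   # gap between gripper face and surface, m
r_in, r_out = 5e-3, 30e-3    # inner and outer radii of the gripper face, m

r = np.linspace(r_in, r_out, 1000)
u = Q / (2 * pi * r * h)             # continuity in the radial channel
u_exit = Q / (2 * pi * r_out * h)    # velocity at the rim, where p = ambient
dp = 0.5 * rho * (u**2 - u_exit**2)  # Bernoulli: suction relative to ambient

# Integrate the suction pressure over the annular face (trapezoidal rule)
integrand = dp * 2 * pi * r
force = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
print(f"suction force ~ {force:.1f} N")
```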

  2. Bending, longitudinal and torsional wave transmission on Euler-Bernoulli and Timoshenko beams with high propagation losses.

    PubMed

    Wang, X; Hopkins, C

    2016-10-01

    Advanced Statistical Energy Analysis (ASEA) is used to predict vibration transmission across coupled beams which support multiple wave types up to high frequencies where Timoshenko theory is valid. Bending-longitudinal and bending-torsional models are considered for an L-junction and rectangular beam frame. Comparisons are made with measurements, Finite Element Methods (FEM) and Statistical Energy Analysis (SEA). When beams support at least two local modes for each wave type in a frequency band and the modal overlap factor is at least 0.1, measurements and FEM have relatively smooth curves. Agreement between measurements, FEM, and ASEA demonstrates that ASEA is able to predict high propagation losses which are not accounted for with SEA. These propagation losses tend to become more important at high frequencies with relatively high internal loss factors and can occur when there is more than one wave type. At such high frequencies, Timoshenko theory, rather than Euler-Bernoulli theory, is often required. Timoshenko theory is incorporated in ASEA and SEA using wave theory transmission coefficients derived assuming Euler-Bernoulli theory, but using Timoshenko group velocity when calculating coupling loss factors. The changeover between theories is appropriate above the frequency where there is a 26% difference between Euler-Bernoulli and Timoshenko group velocities.

  3. Modeling and Control of Intelligent Flexible Structures

    DTIC Science & Technology

    1994-03-26

    can be approximated as a simply supported beam in transverse vibration. Assuming that the Euler-Bernoulli beam assumptions hold, linear equations of... The assumptions made during the derivation are that the element can be modeled as an Euler-Bernoulli beam, that the cross-section is symmetric, and... parameters and input matrices... The closed-loop system, equation (7), is stable when the... and output gain matrices H... for

  4. Flawed Applications of Bernoulli's Principle

    NASA Astrophysics Data System (ADS)

    Koumaras, Panagiotis; Primerakis, Georgios

    2018-04-01

    One of the most popular demonstration experiments pertaining to Bernoulli's principle is the production of a water spray by using a vertical plastic straw immersed in a glass of water and a horizontal straw to blow air towards the top edge of the vertical one. A more general version of this phenomenon, appearing also in school physics problems, is the determination of the rise of the water level h in the straw (see Fig. 1).

  5. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method.

  6. Vibrations of an Euler-Bernoulli beam with hysteretic damping arising from dispersed frictional microcracks

    NASA Astrophysics Data System (ADS)

    Maiti, Soumyabrata; Bandyopadhyay, Ritwik; Chatterjee, Anindya

    2018-01-01

    We study free and harmonically forced vibrations of an Euler-Bernoulli beam with rate-independent hysteretic dissipation. The dissipation follows a model proposed elsewhere for materials with randomly dispersed frictional microcracks. The virtual work of distributed dissipative moments is approximated using Gaussian quadrature, yielding a few discrete internal hysteretic states. Lagrange's equations are obtained for the modal coordinates. Differential equations for the modal coordinates and internal states are integrated together. Free vibrations decay exponentially when a single mode dominates. With multiple modes active, higher modes initially decay rapidly while lower modes decay relatively slowly. Subsequently, lower modes show their own characteristic modal damping, while small amplitude higher modes show more erratic decay. Large dissipation, for the adopted model, leads mathematically to fast and damped oscillations in the limit, unlike viscously overdamped systems. Next, harmonically forced, lightly damped responses of the beam are studied using both a slow frequency sweep and a shooting-method based search for periodic solutions along with numerical continuation. Shooting method and frequency sweep results match for large ranges of frequency. The shooting method struggles near resonances, where internal states collapse into lower dimensional behavior and Newton-Raphson iterations fail. Near the primary resonances, simple numerically-aided harmonic balance gives excellent results. Insights are also obtained into the harmonic content of secondary resonances.

  7. The history and physics of heliox.

    PubMed

    Hess, Dean R; Fink, James B; Venkataraman, Shekhar T; Kim, In K; Myers, Timothy R; Tano, Benoit D

    2006-06-01

    Since the discovery of helium in 1868, it has found numerous applications in industry and medicine. Its low density makes helium potentially valuable in respiratory care applications, to reduce work of breathing, improve distribution of ventilation, reduce minute volume requirement, and improve aerosol delivery. This review includes a brief history of the use of heliox (a mixture of helium and oxygen) and addresses issues related to the physics of gas flow when heliox is used. Specifically covered are the Hagen-Poiseuille equation, laminar versus turbulent flow, the Reynolds number, orifice flow, Bernoulli's principle, Graham's law, wave speed, and thermal conductivity.

  8. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.

  9. Transient queue-size distribution in a finite-capacity queueing system with server breakdowns and Bernoulli feedback

    NASA Astrophysics Data System (ADS)

    Kempa, Wojciech M.

    2017-12-01

    A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer may either rejoin the queue (feedback) with probability q, or definitively leave the system with probability 1 - q. A system of integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system, written for Laplace transforms, is found using a linear algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be considered as a typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
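
    One simple consequence of Bernoulli feedback that such models capture: each customer makes a geometrically distributed number of service passes with mean 1/(1 - q), so the effective offered load rises accordingly. A minimal sketch that ignores the finite buffer and the server breakdowns treated in the paper:

```python
def effective_load(lam, mu, q):
    """With Bernoulli feedback probability q, each customer requires a
    geometrically distributed number of service passes with mean 1/(1 - q),
    so the effective offered load becomes lam / (mu * (1 - q))."""
    return lam / (mu * (1.0 - q))

# Feedback of 20% of the customers (e.g. items demanding corrections, or
# packets requiring retransmission) raises the load from 0.5 to 0.625.
print(effective_load(lam=0.5, mu=1.0, q=0.2))
```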

  10. Track-before-detect labeled multi-bernoulli particle filter with label switching

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, Angel F.

    2016-10-01

    This paper presents a multitarget tracking particle filter (PF) for general track-before-detect measurement models. The PF is presented in the random finite set framework and uses a labelled multi-Bernoulli approximation. We also present a label switching improvement algorithm based on Markov chain Monte Carlo that is expected to increase filter performance if targets get in close proximity for a sufficiently long time. The PF is tested in two challenging numerical examples.

  11. The general solution to the classical problem of finite Euler Bernoulli beam

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Amba-Rao, C. L.

    1977-01-01

    An analytical solution is obtained for the problem of free and forced vibrations of a finite Euler Bernoulli beam with arbitrary (partially fixed) boundary conditions. The effects of linear viscous damping, Winkler foundation, constant axial tension, a concentrated mass, and an arbitrary forcing function are included in the analysis. No restriction is placed on the values of the parameters involved, and the solution presented here contains all cited previous solutions as special cases.

  12. A method for solution of the Euler-Bernoulli beam equation in flexible-link robotic systems

    NASA Technical Reports Server (NTRS)

    Tzes, Anthony P.; Yurkovich, Stephen; Langer, F. Dieter

    1989-01-01

    An efficient numerical method for solving the partial differential equation (PDE) governing the flexible manipulator control dynamics is presented. A finite-dimensional model of the equation is obtained through discretization in both time and space coordinates by using finite-difference approximations to the PDE. An expert program written in the Macsyma symbolic language is utilized in order to embed the boundary conditions into the program, accounting for a mass carried at the tip of the manipulator. The advantages of the proposed algorithm are many, including the ability to (1) include any distributed actuation term in the partial differential equation, (2) provide distributed sensing of the beam displacement, (3) easily modify the boundary conditions through an expert program, and (4) modify the structure for running under a multiprocessor environment.
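
    A minimal explicit finite-difference sketch of the Euler-Bernoulli beam equation, in the spirit of the space-time discretization discussed above but for a simply supported beam with no tip mass, distributed actuation, or symbolic boundary-condition handling; all parameter values are illustrative assumptions:

```python
import numpy as np

# Euler-Bernoulli beam:  rho*A * w_tt = -E*I * w_xxxx,  pinned at both ends.
E, I, rho, A, L = 2.1e11, 8.3e-10, 7800.0, 1.0e-4, 1.0   # steel-like values
n = 101
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
c2 = E * I / (rho * A)
dt = 0.25 * dx**2 / np.sqrt(c2)        # well inside the explicit stability limit

def w_xxxx(w):
    """Fourth spatial derivative; ghost points w[-1] = -w[1], w[n] = -w[n-2]
    enforce the simply supported conditions w = 0 and w_xx = 0 at the ends."""
    wp = np.concatenate(([-w[1]], w, [-w[-2]]))
    d4 = np.zeros_like(w)
    d4[1:-1] = (wp[:-4] - 4*wp[1:-3] + 6*wp[2:-2] - 4*wp[3:-1] + wp[4:]) / dx**4
    return d4

w = 1e-3 * np.sin(np.pi * x / L)       # initial shape: first bending mode
w_prev = w.copy()                      # zero initial velocity (simplified start)
for _ in range(2000):                  # central differences (leapfrog) in time
    w_next = 2.0 * w - w_prev - dt**2 * c2 * w_xxxx(w)
    w_next[0] = w_next[-1] = 0.0       # pinned ends
    w_prev, w = w, w_next

print("midpoint deflection after 2000 steps:", w[n // 2])
```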

  13. Spatiotemporal clusters of malaria cases at village level, northwest Ethiopia.

    PubMed

    Alemu, Kassahun; Worku, Alemayehu; Berhane, Yemane; Kumie, Abera

    2014-06-06

    Malaria attacks are not evenly distributed in space and time. In highland areas with low endemicity, malaria transmission is highly variable and the risk of acquiring malaria is unevenly distributed even within a neighbourhood. Characterizing the spatiotemporal distribution of malaria cases in high-altitude villages is necessary to prioritize the risk areas and facilitate interventions. Spatial scan statistics using the Bernoulli method were employed to identify spatial and temporal clusters of malaria in high-altitude villages. Daily malaria data were collected, using a passive surveillance system, from patients visiting local health facilities. Georeference data were collected at villages using hand-held global positioning system devices and linked to patient data. A Bernoulli model using Bayesian approaches and Markov chain Monte Carlo (MCMC) methods was used to identify the effects of factors on spatial clusters of malaria cases. The deviance information criterion (DIC) was used to assess the goodness-of-fit of the different models; the smaller the DIC, the better the model fit. Malaria cases were clustered in both space and time in high-altitude villages. Spatial scan statistics identified a total of 56 spatial clusters of malaria in high-altitude villages. Of these, 39 were the most likely clusters (LLR = 15.62, p < 0.00001) and 17 were secondary clusters (LLR = 7.05, p < 0.03). The significant most likely temporal malaria clusters were detected between August and December (LLR = 17.87, p < 0.001). Travel away from home, male sex, and age above 15 years had statistically significant effects on malaria clustering at high-altitude villages. The study identified spatial clusters of malaria cases occurring at high-elevation villages within the district. A patient who travelled away from home to a malaria-endemic area might be the most probable source of malaria infection in a high-altitude village. Malaria interventions in high-altitude villages should address the factors associated with malaria clustering.
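
    For readers unfamiliar with the Bernoulli model underlying the spatial scan statistic, the following sketch (Python) evaluates the Kulldorff-style log-likelihood ratio for a single candidate cluster; it does not reproduce the study's scan over all candidate windows or its Bayesian MCMC regression, and the case counts used are invented for illustration.

        import numpy as np

        def bernoulli_llr(c, n, C, N):
            """Bernoulli-model log-likelihood ratio for one candidate cluster.
            c: cases inside, n: individuals inside, C: total cases, N: total individuals."""
            def xlogx(a, b):
                return a * np.log(a / b) if a > 0 else 0.0
            if c / n <= (C - c) / (N - n):       # score only elevated-risk clusters
                return 0.0
            return (xlogx(c, n) + xlogx(n - c, n)
                    + xlogx(C - c, N - n) + xlogx(N - n - (C - c), N - n)
                    - xlogx(C, N) - xlogx(N - C, N))

        # Illustrative numbers: 500 georeferenced individuals, 60 malaria cases,
        # a candidate village cluster containing 40 individuals and 15 cases.
        print(round(bernoulli_llr(c=15, n=40, C=60, N=500), 3))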

  14. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  15. Collision Based Blood Cell Distribution of the Blood Flow

    NASA Astrophysics Data System (ADS)

    Cinar, Yildirim

    2003-11-01

    Introduction: The goal of the study is to determine the energy-transfer process between colliding masses and to apply the results to the distribution of cells, velocity, and kinetic energy in arterial blood flow. Methods: Mathematical methods and models were used to explain the collision between two moving systems and the distribution of linear momentum, rectilinear velocity, and kinetic energy in a collision. Results: As the mass of the second system decreases, the velocity and momentum of the constant-mass first system decrease, and the linearly decreasing mass of the second system captures a larger share of the kinetic energy and rectilinear velocity of the collision system on a logarithmic scale. Discussion: The concentration of blood cells at the center of blood flow in an artery is not explained by the Bernoulli principle alone; the kinetic energy and velocity distribution due to collisions between the large mass of the arterial wall and the small mass of blood cells must be considered as well.

  16. Recent Selected Papers of Northwestern Polytechnical University in Two Parts. Part I. 1979.

    DTIC Science & Technology

    1981-08-20

    The pressure coefficient is calculated by the exact Bernoulli equation. Two numerical examples are included, and the results agree fairly well with known results. In the excerpt, the exact Bernoulli equation is applied to calculate the pressure coefficient; the real part on the unit circle in equation (2) is used, and, making use of equations (5) and (6), both sides of equation (2) are expanded.

  17. Spatio-temporal scan statistics for the detection of outbreaks involving common molecular subtypes: using human cases of Escherichia coli O157:H7 provincial PFGE pattern 8 (National Designation ECXAI.0001) in Alberta as an example.

    PubMed

    So, H C; Pearl, D L; von Königslöw, T; Louie, M; Chui, L; Svenson, L W

    2013-08-01

    Molecular typing methods have become a common part of the surveillance of foodborne pathogens. In particular, pulsed-field gel electrophoresis (PFGE) has been used successfully to identify outbreaks of Escherichia coli O157:H7 in humans from a variety of food and environmental sources. However, some PFGE patterns appear commonly in surveillance systems, making it more difficult to distinguish between outbreak and sporadic cases based on molecular data alone. In addition, it is unknown whether these common patterns might have unique epidemiological characteristics reflected in their spatial and temporal distributions. Using E. coli O157:H7 surveillance data from Alberta, collected from 2000 to 2002, we investigated whether E. coli O157:H7 with provincial PFGE pattern 8 (national designation ECXAI.0001) clustered in space, time and space-time relative to other PFGE patterns using the spatial scan statistic. Based on our purely spatial and temporal scans using a Bernoulli model, there did not appear to be strong evidence that isolates of E. coli O157:H7 with provincial PFGE pattern 8 are distributed differently from other PFGE patterns. However, we did identify space-time clusters of isolates with PFGE pattern 8, using a Bernoulli model and a space-time permutation model, which included known outbreaks and potentially unrecognized outbreaks or additional outbreak cases. There were differences between the two models in the space-time clusters identified, which suggests that the use of both models could increase the sensitivity of a quantitative surveillance system for identifying outbreaks involving isolates sharing a common PFGE pattern. © 2012 Blackwell Verlag GmbH.

  18. Some Recent Developments on Complex Multivariate Distributions

    ERIC Educational Resources Information Center

    Krishnaiah, P. R.

    1976-01-01

    In this paper, the author gives a review of the literature on complex multivariate distributions. Some new results on these distributions are also given. Finally, the author discusses the applications of the complex multivariate distributions in the area of the inference on multiple time series. (Author)

  19. A new fractional nonlocal model and its application in free vibration of Timoshenko and Euler-Bernoulli beams

    NASA Astrophysics Data System (ADS)

    Rahimi, Zaher; Sumelka, Wojciech; Yang, Xiao-Jun

    2017-11-01

    The use of fractional calculus makes fractional models (FMs) more flexible than integer-order models, inasmuch as they encompass both integer and non-integer operators. In other words, FMs allow more of the potential of mathematics to be exploited in modelling physical phenomena, since both integer and fractional operators are available for describing a problem, which makes the models more flexible and powerful. In the present work, a new fractional nonlocal model has been proposed, which has a simple form and can be used in different problems owing to the simple form of its numerical solutions. The model is then used to derive the governing equations of motion for the Timoshenko beam theory (TBT) and the Euler-Bernoulli beam theory (EBT). Next, free vibration of simply supported (S-S) Timoshenko and Euler-Bernoulli beams is investigated. The Galerkin weighted residual method is used to solve the non-linear governing equations.

  20. Forecasting of foreign exchange rates of Taiwan’s major trading partners by novel nonlinear Grey Bernoulli model NGBM(1, 1)

    NASA Astrophysics Data System (ADS)

    Chen, Chun-I.; Chen, Hong Long; Chen, Shuo-Pei

    2008-08-01

    The traditional Grey Model is easy to understand and simple to calculate, with satisfactory accuracy, but it lacks the flexibility to adjust the model for higher forecasting precision. This research studies the feasibility and effectiveness of a novel Grey model that incorporates the Bernoulli differential equation from ordinary differential equations. The authors name this newly proposed model the Nonlinear Grey Bernoulli Model (NGBM). The NGBM is a nonlinear differential equation with power index n. By controlling n, the curvature of the solution curve can be adjusted to fit the result of the one-time accumulated generating operation (1-AGO) of the raw data. One extreme case from a Grey system textbook is studied with the NGBM, and two published articles are chosen for practical tests of the NGBM. The results show that the novel NGBM is feasible and efficient. Finally, the NGBM is used to forecast the 2005 foreign exchange rates of twelve of Taiwan's major trading partners, including Taiwan.
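
    A minimal sketch (Python) of the NGBM(1,1) recipe summarized above, using the usual grey-model steps (1-AGO, background values, least-squares estimation of the parameters a and b, and the Bernoulli-type time response); the raw series and the power index n are illustrative assumptions, and this is not the authors' implementation.

        import numpy as np

        def ngbm_fit_predict(x0, n=0.3, horizon=2):
            """Nonlinear Grey Bernoulli Model NGBM(1,1): dx1/dt + a*x1 = b*x1**n, n != 1."""
            x0 = np.asarray(x0, dtype=float)
            m = len(x0)
            x1 = np.cumsum(x0)                              # 1-AGO series
            z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
            B = np.column_stack([-z1, z1**n])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(m + horizon)                      # k = 0 corresponds to x0[0]
            x1_hat = ((x0[0]**(1 - n) - b / a) * np.exp(-a * (1 - n) * k)
                      + b / a) ** (1.0 / (1 - n))
            x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))   # inverse AGO
            return x0_hat[:m], x0_hat[m:]                   # fitted values, forecasts

        fitted, forecast = ngbm_fit_predict([2.87, 2.92, 3.01, 3.08, 3.12])
        print(np.round(fitted, 3), np.round(forecast, 3))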

  1. "Astronomica" in the Correspondence between Leonhard Euler and Daniel Bernoull (German Title: "Astronomica" im Briefwechsel zwischen Leonhard Euler und Daniel Bernoulli)

    NASA Astrophysics Data System (ADS)

    Verdun, Andreas

    2010-12-01

    The Euler Commission of the Swiss Academy of Sciences intends to complete the edition of Leonhard Euler's works in the coming year, 2011, nearly one hundred years after the beginning of the editorial work. These works include, e.g., Volume 3 of Series quarta A, which will contain the correspondence between Leonhard Euler (1707-1783) and Daniel Bernoulli (1700-1783) and which is currently being edited by Dr. Emil A. Fellmann (Basel) and Prof. Dr. Gleb K. Mikhailov (Moscow). This correspondence contains more than a hundred letters, principally from Daniel Bernoulli to Euler. Parts of this correspondence were already published, without commentary, in 1843. It is astonishing that, apart from mathematics and physics (mainly mechanics and hydrodynamics), many of the topics addressed concern astronomy. The major part of the preserved correspondence between Euler and Daniel Bernoulli in which astronomical themes are discussed concerns celestial mechanics, the dominant discipline of theoretical astronomy in the eighteenth century. It was triggered and shaped mainly by the prize questions of the Paris Academy of Sciences. In more than two thirds of the letters, current problems and questions concerning the celestial mechanics of that time are treated, focusing on lunar theory and the great inequality in the motions of Jupiter and Saturn as special applications of the three-body problem. In the remaining letters, problems concerning spherical astronomy are solved and attempts are made to explain certain phenomena in the field of "cosmic physics" concerning astronomical observations.

  2. Numerical simulations of katabatic jumps in Coats Land, Antarctica

    NASA Astrophysics Data System (ADS)

    Yu, Ye; Cai, Xiaoming; King, John C.; Renfrew, Ian A.

    A non-hydrostatic numerical model, the Regional Atmospheric Modeling System (RAMS), has been used to investigate the development of katabatic jumps in Coats Land, Antarctica. In the control run with a 5 m s-1 downslope-directed initial wind, a katabatic jump develops near the foot of the idealized slope. The jump is manifested as a rapid deceleration of the downslope flow and a change from supercritical to subcritical flow in a hydraulic sense, i.e., the Froude number (Fr) of the flow changes from Fr > 1 to Fr < 1. Results from sensitivity experiments show that an increase in the upstream flow rate strengthens the jump, while an increase in the downstream inversion-layer depth results in a retreat of the jump. Hydraulic theory and Bernoulli's theorem have been used to explain the surface pressure change across the jump. It is found that hydraulic theory always underestimates the surface pressure change, while Bernoulli's theorem provides a satisfactory estimation. An analysis of the momentum balance for the katabatic jump indicates that the important forces are those related to the pressure gradient, advection and, to a lesser extent, the turbulent momentum divergence. The development of katabatic jumps can be divided into two phases. In phase I, the pressure gradient force is nearly balanced by advection, while in phase II, the pressure gradient force is counterbalanced by turbulent momentum divergence. The upslope pressure gradient force associated with a pool of cold air over the ice shelf facilitates the formation of the katabatic jump.

  3. Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes

    NASA Astrophysics Data System (ADS)

    Farzan Sabahi, Mohammad; Dehghanfard, Ali

    2014-12-01

    The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of information by unintended listeners. In fact, low probability of detection and low probability of intercept are two important parameters for increasing the performance of the system. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the data by spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. The use of a one-dimensional Bernoulli chaotic sequence as a spreading code has been proposed in the literature previously. The main feature of this sequence is its negative auto-correlation at lag 1, which, with proper design, leads to an increase in the efficiency of communication systems based on these codes. On the other hand, employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences as spreading codes is proposed. The performance of multi-user synchronous and asynchronous DS-CDMA systems is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate an improvement in performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Similar to the one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. Besides, construction of complex sequences with lower average cross-correlation is possible with the proposed method.
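
    As a one-dimensional, real-valued toy analogue of the chaotic spreading codes discussed above (not the paper's two-dimensional complex construction), the sketch below (Python) iterates a Bernoulli shift map, thresholds it to a ±1 chip sequence, and inspects the normalized autocorrelation; the map parameter, seed, and code length are illustrative assumptions.

        import numpy as np

        def bernoulli_spreading_code(length=127, x0=0.3917, beta=1.99):
            """Binary spreading code from the Bernoulli shift map x <- beta*x mod 1.
            beta slightly below 2 avoids the exact-doubling degeneracy in floating point."""
            x, chips = x0, []
            for _ in range(length):
                x = (beta * x) % 1.0
                chips.append(1.0 if x >= 0.5 else -1.0)
            return np.array(chips)

        def autocorr(c, lag):
            """Normalized aperiodic autocorrelation of the chip sequence."""
            if lag == 0:
                return float(np.dot(c, c)) / len(c)
            return float(np.dot(c[:-lag], c[lag:])) / len(c)

        code = bernoulli_spreading_code()
        print("lag 0:", autocorr(code, 0), " lag 1:", round(autocorr(code, 1), 3))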

  4. A new multivariate zero-adjusted Poisson model with applications to biomedicine.

    PubMed

    Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen

    2018-05-25

    Recently, although advances have been made in modeling multivariate count data, existing models have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the correlations of the random components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows correlations between components with a more flexible dependency structure; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.

  6. A Robust Bayesian Approach for Structural Equation Models with Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Xia, Ye-Mao

    2008-01-01

    In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear…

  7. Noninvasive assessment of mitral inertness [correction of inertance]: clinical results with numerical model validation.

    PubMed

    Firstenberg, M S; Greenberg, N L; Smedira, N G; McCarthy, P M; Garcia, M J; Thomas, J D

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y = 0.98x - 0.045, r = 0.90) and the results of numerical modeling (y = 0.96x - 0.16, r = 0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.
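
    For context, the unsteady Bernoulli relation splits the transmitral pressure drop into a convective term (the familiar clinical simplification) and an inertial term of the form M dv/dt; the decomposition below is a standard textbook form, not the regression model fitted in the study.

        \Delta P(t) \;=\; \tfrac{1}{2}\,\rho\left(v_2^2 - v_1^2\right)
                     \;+\; \rho \int_{1}^{2} \frac{\partial v}{\partial t}\, \mathrm{d}s
                     \;\approx\; 4v^2 \;+\; M\,\frac{\mathrm{d}v}{\mathrm{d}t},

    where 4v^2 (in mmHg, with v in m/s) is the simplified convective term used clinically and M is the effective inertance of the accelerating blood column.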

  8. Noninvasive assessment of mitral inertness: clinical results with numerical model validation

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Inertial forces (Mdv/dt) are a significant component of transmitral flow, but cannot be measured with Doppler echo. We validated a method of estimating Mdv/dt. Ten patients had a dual-sensor transmitral (TM) catheter placed during cardiac surgery. Doppler and 2D echo were performed while acquiring LA and LV pressures. Mdv/dt was determined from the Bernoulli equation using Doppler velocities and TM gradients. Results were compared with numerical modeling. TM gradients (range: 1.04-14.24 mmHg) consisted of 74.0 +/- 11.0% inertial forces (range: 0.6-12.9 mmHg). Multivariate analysis predicted Mdv/dt = -4.171(S/D ratio) + 0.063(LAvolume-max) + 5. Using this equation, a strong relationship was obtained for the clinical dataset (y = 0.98x - 0.045, r = 0.90) and the results of numerical modeling (y = 0.96x - 0.16, r = 0.84). TM gradients are mainly inertial and, as validated by modeling, can be estimated with echocardiography.

  9. Mixed H2/H∞ distributed robust model predictive control for polytopic uncertain systems subject to actuator saturation and missing measurements

    NASA Astrophysics Data System (ADS)

    Song, Yan; Fang, Xiaosheng; Diao, Qingda

    2016-03-01

    In this paper, we discuss the mixed H2/H∞ distributed robust model predictive control problem for polytopic uncertain systems subject to randomly occurring actuator saturation and packet loss. The global system is decomposed into several subsystems, and all the subsystems are connected by a fixed-topology network, over which the packet losses among the subsystems occur. To make better use of the information successfully transmitted via the Internet, both actuator saturation and packet loss, which result from the limitation of the communication bandwidth, are taken into consideration. A novel distributed controller model is established that accounts for the actuator saturation and packet loss in a unified representation by using two sets of Bernoulli-distributed white sequences with known conditional probabilities. With the nonlinear feedback control law represented by the convex hull of a group of linear feedback laws, the distributed controllers for the subsystems are obtained by solving a linear matrix inequality (LMI) optimisation problem. Finally, numerical studies demonstrate the effectiveness of the proposed techniques.

  10. Derivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models

    DTIC Science & Technology

    2015-07-14

    The indexed excerpt cites: Sequential Monte Carlo smoothing with application to parameter estimation in non-linear state space models. Bernoulli, 14, 155-179 (2008). The remainder of the excerpt consists of fragments of a Taylor expansion of the integrand about θ* over a ball, using the symmetry of the normal distribution N(θ*, τ²Σ).

  11. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  12. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by a sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g., in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely Theorem 2.1, which gives a large deviation upper bound for a specific sequence of random variables. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable random variables.
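
    For reference, the classical Hoeffding bound for sums of independent bounded random variables, which results such as the paper's exchangeable-sequence inequality generalize, reads as follows (standard statement, not the paper's Theorem 2.1):

        P\!\left(\sum_{i=1}^{n}\bigl(X_i - \mathbb{E}[X_i]\bigr) \ge t\right)
            \;\le\; \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),
            \qquad a_i \le X_i \le b_i,

    which for independent and identically distributed Bernoulli(p) variables reduces to P(\bar{X}_n - p \ge \varepsilon) \le e^{-2n\varepsilon^2}.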

  13. Solution of the nonlinear mixed Volterra-Fredholm integral equations by hybrid of block-pulse functions and Bernoulli polynomials.

    PubMed

    Mashayekhi, S; Razzaghi, M; Tripak, O

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique.
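
    A small sketch (Python/SymPy) showing how the Bernoulli polynomials used as basis functions in such hybrid schemes can be generated and integrated symbolically; it relies on SymPy's built-in bernoulli function and does not reproduce the paper's operational matrices of integration and product.

        import sympy as sp

        x = sp.symbols('x')

        # First few Bernoulli polynomials B_n(x), e.g. B_2(x) = x**2 - x + 1/6
        for n in range(4):
            print(f"B_{n}(x) =", sp.expand(sp.bernoulli(n, x)))

        # A property often exploited when building operational matrices:
        # the integral of B_n over [0, 1] vanishes for every n >= 1.
        print([sp.integrate(sp.bernoulli(n, x), (x, 0, 1)) for n in range(1, 5)])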

  14. Solution of the Nonlinear Mixed Volterra-Fredholm Integral Equations by Hybrid of Block-Pulse Functions and Bernoulli Polynomials

    PubMed Central

    Mashayekhi, S.; Razzaghi, M.; Tripak, O.

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638

  15. Regarding on the prototype solutions for the nonlinear fractional-order biological population model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan

    2016-06-08

    In this study, we introduce to the literature a newly extended method, called the Improved Bernoulli sub-equation function method, based on the Bernoulli sub-ODE method. The proposed analytical scheme is presented step by step. We have obtained some new analytical solutions to the nonlinear fractional-order biological population model by using this technique. Two- and three-dimensional surfaces of the analytical solutions have been drawn with Wolfram Mathematica 9. Finally, a conclusion summarizes the important findings of this study.

  16. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398

  17. Comparison of Poisson and Bernoulli spatial cluster analyses of pediatric injuries in a fire district

    PubMed Central

    Warden, Craig R

    2008-01-01

    Background With limited resources available, injury prevention efforts need to be targeted both geographically and to specific populations. As part of a pediatric injury prevention project, data was obtained on all pediatric medical and injury incidents in a fire district to evaluate geographical clustering of pediatric injuries. This will be the first step in attempting to prevent these injuries with specific interventions depending on locations and mechanisms. Results There were a total of 4803 incidents involving patients less than 15 years of age that the fire district responded to during 2001–2005 of which 1997 were categorized as injuries and 2806 as medical calls. The two cohorts (injured versus medical) differed in age distribution (7.7 ± 4.4 years versus 5.4 ± 4.8 years, p < 0.001) and location type of incident (school or church 12% versus 15%, multifamily residence 22% versus 13%, single family residence 51% versus 28%, sport, park or recreational facility 3% versus 8%, public building 8% versus 7%, and street or road 3% versus 30%, respectively, p < 0.001). Using the medical incident locations as controls, there was no significant clustering for environmental or assault injuries using the Bernoulli method while there were four significant clusters for all injury mechanisms combined, 13 clusters for motor vehicle collisions, one for falls, and two for pedestrian or bicycle injuries. Using the Poisson cluster method on incidence rates by census tract identified four clusters for all injuries, three for motor vehicle collisions, four for fall injuries, and one each for environmental and assault injuries. The two detection methods shared a minority of overlapping geographical clusters. Conclusion Significant clustering occurs overall for all injury mechanisms combined and for each mechanism depending on the cluster detection method used. There was some overlap in geographic clusters identified by both methods. The Bernoulli method allows more focused cluster mapping and evaluation since it directly uses location data. Once clusters are found, interventions can be targeted to specific geographic locations, location types, ages of victims, and mechanisms of injury. PMID:18808720

  18. Comparison of Poisson and Bernoulli spatial cluster analyses of pediatric injuries in a fire district.

    PubMed

    Warden, Craig R

    2008-09-22

    With limited resources available, injury prevention efforts need to be targeted both geographically and to specific populations. As part of a pediatric injury prevention project, data was obtained on all pediatric medical and injury incidents in a fire district to evaluate geographical clustering of pediatric injuries. This will be the first step in attempting to prevent these injuries with specific interventions depending on locations and mechanisms. There were a total of 4803 incidents involving patients less than 15 years of age that the fire district responded to during 2001-2005 of which 1997 were categorized as injuries and 2806 as medical calls. The two cohorts (injured versus medical) differed in age distribution (7.7 +/- 4.4 years versus 5.4 +/- 4.8 years, p < 0.001) and location type of incident (school or church 12% versus 15%, multifamily residence 22% versus 13%, single family residence 51% versus 28%, sport, park or recreational facility 3% versus 8%, public building 8% versus 7%, and street or road 3% versus 30%, respectively, p < 0.001). Using the medical incident locations as controls, there was no significant clustering for environmental or assault injuries using the Bernoulli method while there were four significant clusters for all injury mechanisms combined, 13 clusters for motor vehicle collisions, one for falls, and two for pedestrian or bicycle injuries. Using the Poisson cluster method on incidence rates by census tract identified four clusters for all injuries, three for motor vehicle collisions, four for fall injuries, and one each for environmental and assault injuries. The two detection methods shared a minority of overlapping geographical clusters. Significant clustering occurs overall for all injury mechanisms combined and for each mechanism depending on the cluster detection method used. There was some overlap in geographic clusters identified by both methods. The Bernoulli method allows more focused cluster mapping and evaluation since it directly uses location data. Once clusters are found, interventions can be targeted to specific geographic locations, location types, ages of victims, and mechanisms of injury.

  19. Bernoulli-Langevin Wind Speed Model for Simulation of Storm Events

    NASA Astrophysics Data System (ADS)

    Fürstenau, Norbert; Mittendorf, Monika

    2016-12-01

    We present a simple nonlinear dynamics Langevin model for predicting the instationary wind speed profile during storm events typically accompanying extreme low-pressure situations. It is based on a second-degree Bernoulli equation with δ-correlated Gaussian noise and may complement stationary stochastic wind models. Transition between increasing and decreasing wind speed and (quasi) stationary normal wind and storm states are induced by the sign change of the controlling time-dependent rate parameter k(t). This approach corresponds to the simplified nonlinear laser dynamics for the incoherent to coherent transition of light emission that can be understood by a phase transition analogy within equilibrium thermodynamics [H. Haken, Synergetics, 3rd ed., Springer, Berlin, Heidelberg, New York 1983/2004.]. Evidence for the nonlinear dynamics two-state approach is generated by fitting of two historical wind speed profiles (low-pressure situations "Xaver" and "Christian", 2013) taken from Meteorological Terminal Air Report weather data, with a logistic approximation (i.e. constant rate coefficients k) to the solution of our dynamical model using a sum of sigmoid functions. The analytical solution of our dynamical two-state Bernoulli equation as obtained with a sinusoidal rate ansatz k(t) of period T (=storm duration) exhibits reasonable agreement with the logistic fit to the empirical data. Noise parameter estimates of speed fluctuations are derived from empirical fit residuals and by means of a stationary solution of the corresponding Fokker-Planck equation. Numerical simulations with the Bernoulli-Langevin equation demonstrate the potential for stochastic wind speed profile modeling and predictive filtering under extreme storm events that is suggested for applications in anticipative air traffic management.
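
    A minimal Euler-Maruyama sketch (Python) of a second-degree Bernoulli-Langevin equation of the kind described above, with a sinusoidal rate coefficient k(t) of period T switching the flow between calm and storm states; the coefficients, noise strength, and time step are illustrative assumptions and are not taken from the paper's fitted storm profiles.

        import numpy as np

        def simulate_storm(v0=5.0, k0=0.6, c=0.02, T=24.0, sigma=0.5,
                           dt=0.01, hours=48.0, seed=0):
            """Euler-Maruyama integration of dv = (k(t)*v - c*v**2) dt + sigma dW."""
            rng = np.random.default_rng(seed)
            steps = int(hours / dt)
            t = np.linspace(0.0, hours, steps + 1)
            v = np.empty(steps + 1)
            v[0] = v0
            for i in range(steps):
                k = k0 * np.sin(2.0 * np.pi * t[i] / T)   # sign change of k(t) drives growth/decay
                dW = rng.normal(0.0, np.sqrt(dt))
                v[i + 1] = max(v[i] + (k * v[i] - c * v[i]**2) * dt + sigma * dW, 0.0)
            return t, v

        t, v = simulate_storm()
        print("peak wind speed:", round(v.max(), 2), "m/s at t =", round(t[v.argmax()], 1), "h")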

  20. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  1. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
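
    To make the Bernoulli (Erdős-Rényi) reference-distribution idea concrete, the sketch below (Python/NetworkX) normalizes a graph-level statistic against simple G(n, p) random graphs with matched size and density; the observed network, the chosen statistic, and the replicate count are illustrative assumptions, and the paper's mixture-model adjustment is not implemented here.

        import numpy as np
        import networkx as nx

        def bernoulli_reference_zscore(G, statistic=nx.transitivity, reps=500, seed=0):
            """z-score of a network statistic relative to G(n, p) graphs with matched density."""
            rng = np.random.default_rng(seed)
            n, p = G.number_of_nodes(), nx.density(G)
            observed = statistic(G)
            null = np.array([statistic(nx.gnp_random_graph(n, p, seed=int(s)))
                             for s in rng.integers(1 << 31, size=reps)])
            return (observed - null.mean()) / null.std()

        # Illustrative "observed" network with more clustering than a Bernoulli graph
        G_obs = nx.watts_strogatz_graph(100, 6, 0.1, seed=42)
        print(round(bernoulli_reference_zscore(G_obs), 2))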

  2. On measures of association among genetic variables

    PubMed Central

    Gianola, Daniel; Manfredi, Eduardo; Simianer, Henner

    2012-01-01

    Summary Systems involving many variables are important in population and quantitative genetics, for example, in multi-trait prediction of breeding values and in exploration of multi-locus associations. We studied departures of the joint distribution of sets of genetic variables from independence. New measures of association based on notions of statistical distance between distributions are presented. These are more general than correlations, which are pairwise measures, and lack a clear interpretation beyond the bivariate normal distribution. Our measures are based on logarithmic (Kullback-Leibler) and on relative ‘distances’ between distributions. Indexes of association are developed and illustrated for quantitative genetics settings in which the joint distribution of the variables is either multivariate normal or multivariate-t, and we show how the indexes can be used to study linkage disequilibrium in a two-locus system with multiple alleles and present applications to systems of correlated beta distributions. Two multivariate beta and multivariate beta-binomial processes are examined, and new distributions are introduced: the GMS-Sarmanov multivariate beta and its beta-binomial counterpart. PMID:22742500

  3. Vehicle lateral motion regulation under unreliable communication links based on robust H∞ output-feedback control schema

    NASA Astrophysics Data System (ADS)

    Li, Cong; Jing, Hui; Wang, Rongrong; Chen, Nan

    2018-05-01

    This paper presents a robust control schema for vehicle lateral motion regulation under unreliable communication links via a controller area network (CAN). The communication links between the system plant and the controller are assumed to be imperfect, and therefore data packet dropouts occur frequently. The paper takes the form of parallel distributed compensation and treats the dropouts as random binary numbers that follow a Bernoulli distribution. Both tire cornering stiffness uncertainty and external disturbances are considered to enhance the robustness of the controller. In addition, a robust H∞ static output-feedback control approach is proposed to realize the lateral motion control with relatively low-cost sensors. The stochastic stability of the closed-loop system and preservation of the guaranteed H∞ performance are investigated. Simulation results based on the CarSim platform using a high-fidelity full-car model verify the effectiveness of the proposed control approach.

  4. Inverse problems in the modeling of vibrations of flexible beams

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Powers, R. K.; Rosen, I. G.

    1987-01-01

    The formulation and solution of inverse problems for the estimation of parameters which describe damping and other dynamic properties in distributed models for the vibration of flexible structures is considered. Motivated by a slewing beam experiment, the identification of a nonlinear velocity dependent term which models air drag damping in the Euler-Bernoulli equation is investigated. Galerkin techniques are used to generate finite dimensional approximations. Convergence estimates and numerical results are given. The modeling of, and related inverse problems for the dynamics of a high pressure hose line feeding a gas thruster actuator at the tip of a cantilevered beam are then considered. Approximation and convergence are discussed and numerical results involving experimental data are presented.

  5. Bayesian inference on risk differences: an application to multivariate meta-analysis of adverse events in clinical trials.

    PubMed

    Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng

    2013-05-01

    Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that the risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression of the distribution functions for study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.

  6. Sonic-boom minimization.

    NASA Technical Reports Server (NTRS)

    Seebass, R.; George, A. R.

    1972-01-01

    There have been many attempts to reduce or eliminate the sonic boom. Such attempts fall into two categories: (1) aerodynamic minimization and (2) exotic configurations. In the first category, changes in the entropy and the Bernoulli constant are neglected, and the equivalent body shapes required to minimize the overpressure, the shock pressure rise, and the impulse are deduced. These results include the beneficial effects of atmospheric stratification. In the second category, the effective length of the aircraft is increased or its base area decreased by modifying the Bernoulli constant over a significant fraction of the flow past the aircraft. A figure of merit is introduced which makes it possible to judge the effectiveness of the latter schemes.

  7. Evaluation of aerodynamic characteristics of a coupled fluid-structure system using generalized Bernoulli’s principle: An application to vocal folds vibration

    PubMed Central

    Zhang, Lucy T.; Yang, Jubiao

    2017-01-01

    In this work we explore the aerodynamic flow characteristics of a coupled fluid-structure interaction system using a generalized Bernoulli equation derived directly from the Cauchy momentum equations. Unlike the conventional Bernoulli equation, where incompressible, inviscid, and steady flow conditions are assumed, this generalized Bernoulli equation includes the contributions from compressibility, viscosity, and unsteadiness, which can be essential in defining aerodynamic characteristics. The derived Bernoulli's principle is applied to a fully-coupled fluid-structure interaction simulation of vocal fold vibration. The coupled system is simulated using the immersed finite element method, where the compressible Navier-Stokes equations are used to describe the air and an elastic pliable structure describes the vocal fold. The vibration of the vocal fold works to open and close the glottal flow. The aerodynamic flow characteristics are evaluated using the derived Bernoulli's principle over a vibration cycle in a carefully partitioned control volume based on the moving structure. The results agree very well with experimental observations, which validates the strategy and its use in other types of flow characterization that involve coupled fluid-structure interactions. PMID:29527541

  8. Neuromorphic log-domain silicon synapse circuits obey bernoulli dynamics: a unifying tutorial analysis

    PubMed Central

    Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579

  9. DichotomY IdentitY: Euler-Bernoulli Numbers, Sets-Multisets, FD-BE Quantum-Statistics, 1/f^0 - 1/f^1 Power-Spectra, Ellipse-Hyperbola Conic-Sections, Local-Global Extent: ``Category-Semantics''

    NASA Astrophysics Data System (ADS)

    Rota, G.-C.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Seminal Apostol[Math.Mag.81,3,178(08);Am.Math.Month.115,9,795(08)]-Rota[Intro.Prob. Thy.(95)-p.50-55] DichotomY equivalence-class: set-theory: sets V multisets; closed V open; to Abromowitz-Stegun[Hdbk.Math.Fns.(64)]-ch.23,p.803!]: numbers/polynomials generating-functions: Euler V Bernoulli; to Siegel[Schrodinger Cent.Symp.(87); Symp.Fractals, MRS Fall Mtg.,(1989)-5-papers!] power-spectrum: 1/ f {0}-White V 1/ f {1}-Zipf/Pink (Archimedes) HYPERBOLICITY INEVITABILITY; to analytic-geometry Conic-Sections: Ellipse V (via Parabola) V Hyperbola; to Extent/Scale/Radius: Locality V Globality, Root-Causes/Ultimate-Origins: Dimensionality: odd-Z V (via fractal) V even-Z, to Symmetries/(Noether's-theorem connected)/Conservation-Laws Dichotomy: restored/conservation/convergence=0- V broken/non-conservation/divergence=/=0: with asymptotic-limit antipodes morphisms/ crossovers: Eureka!!!; "FUZZYICS"=''CATEGORYICS''!!! Connection to Kummer(1850) Bernoulli-numbers proof of FLT is via Siegel(CCNY;1964) < (1994)[AMS Joint Mtg. (2002)-Abs.973-60-124] short succinct physics proof: FLT = Least-Action Principle!!!

  10. Neuromorphic log-domain silicon synapse circuits obey bernoulli dynamics: a unifying tutorial analysis.

    PubMed

    Papadimitriou, Konstantinos I; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4(th)) order topology.

  11. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. The Bayesian method involves two distributions, the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior distribution is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior distribution is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior distribution. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of the respective marginal posterior distributions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating the expected values involves integrals of functions whose values are difficult to determine. Therefore, an approach is needed that generates random samples according to the posterior distribution characteristics of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
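
    A minimal Monte Carlo sketch (Python) of the posterior sampling step described above, assuming the standard results that, under Jeffreys' prior, the marginal posterior of Σ is inverse-Wishart and the conditional posterior of the coefficient matrix is matrix normal; direct composition sampling is used here in place of the abstract's Gibbs sampler, and the simulated data are illustrative.

        import numpy as np
        from scipy.stats import invwishart, matrix_normal

        rng = np.random.default_rng(0)

        # Illustrative data: n observations, q predictors (incl. intercept), m responses
        n, q, m = 200, 3, 2
        X = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])
        B_true = np.array([[1.0, -0.5], [2.0, 0.3], [0.0, 1.5]])
        Y = X @ B_true + rng.normal(scale=0.5, size=(n, m))

        # Sufficient statistics
        XtX_inv = np.linalg.inv(X.T @ X)
        B_hat = XtX_inv @ X.T @ Y
        S = (Y - X @ B_hat).T @ (Y - X @ B_hat)

        # Posterior under Jeffreys' prior p(B, Sigma) ~ |Sigma|^(-(m+1)/2):
        #   Sigma | Y     ~ Inverse-Wishart(df = n - q, scale = S)
        #   B | Sigma, Y  ~ MatrixNormal(B_hat, rowcov = (X'X)^(-1), colcov = Sigma)
        draws = []
        for _ in range(1000):
            Sigma = invwishart.rvs(df=n - q, scale=S, random_state=rng)
            B = matrix_normal.rvs(mean=B_hat, rowcov=XtX_inv, colcov=Sigma, random_state=rng)
            draws.append(B)

        print("posterior mean of the coefficient matrix:\n", np.round(np.mean(draws, axis=0), 3))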

  12. Multivariate stochastic simulation with subjective multivariate normal distributions

    Treesearch

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
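
    A short sketch (Python) of the basic mechanism: drawing correlated inputs for a Monte Carlo simulation by transforming independent standard normals with the Cholesky factor of a subjectively assessed covariance matrix; the means and covariance below are illustrative, and the report's forestry application is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        # Subjectively assessed means and covariance for two correlated inputs
        mean = np.array([100.0, 50.0])
        cov = np.array([[25.0, 12.0],
                        [12.0, 16.0]])

        L = np.linalg.cholesky(cov)              # cov = L @ L.T
        z = rng.normal(size=(10_000, 2))         # independent standard normals
        samples = mean + z @ L.T                 # correlated multivariate normal draws

        print("sample correlation:", round(np.corrcoef(samples.T)[0, 1], 3))
        print("target correlation:", round(12.0 / np.sqrt(25.0 * 16.0), 3))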

  13. Usual Dietary Intakes: SAS Macros for Fitting Multivariate Measurement Error Models & Estimating Multivariate Usual Intake Distributions

    Cancer.gov

    The following SAS macros can be used to create a multivariate usual intake distribution for multiple dietary components that are consumed nearly every day or episodically. A SAS macro for performing balanced repeated replication (BRR) variance estimation is also included.

  14. The sampled-data consensus of multi-agent systems with probabilistic time-varying delays and packet losses

    NASA Astrophysics Data System (ADS)

    Sui, Xin; Yang, Yongqing; Xu, Xianyun; Zhang, Shuai; Zhang, Lingzhong

    2018-02-01

    This paper investigates the consensus of multi-agent systems with probabilistic time-varying delays and packet losses via sampled-data control. On the one hand, a Bernoulli-distributed white sequence is employed to model random packet losses among agents. On the other hand, a switched system is used to describe packet dropouts in a deterministic way. Based on the special property of the Laplacian matrix, the consensus problem can be converted into a stabilization problem of a switched system with lower dimensions. Some mean square consensus criteria are derived in terms of constructing an appropriate Lyapunov function and using linear matrix inequalities (LMIs). Finally, two numerical examples are given to show the effectiveness of the proposed method.

  15. Space Flight Cable Model Development

    NASA Technical Reports Server (NTRS)

    Spak, Kaitlin

    2013-01-01

    This work continues the modeling efforts presented in last year's VSGC conference paper, "Model Development for Cable-Harnessed Beams." The focus is narrowed to modeling of space-flight cables only, as a reliable damped cable model is not yet readily available and is necessary to continue modeling cable-harnessed space structures. New experimental data are presented, eliminating the low-frequency noise that plagued the first year's efforts. The distributed transfer function method is applied to a single section of space-flight cable for Euler-Bernoulli and shear beams. The work presented here will be developed into a damped cable model that can be incorporated into an interconnected beam-cable system. The overall goal of this work is to accurately predict natural frequencies and modal damping ratios for cabled space structures.

  16. Sign reversals of the output autocorrelation function for the stochastic Bernoulli-Verhulst equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumi, N., E-mail: Neeme.Lumi@tlu.ee; Mankin, R., E-mail: Romi.Mankin@tlu.ee

    2015-10-28

    We consider a stochastic Bernoulli-Verhulst equation as a model for population growth processes. The effect of a fluctuating environment on the carrying capacity of a population is modeled as colored dichotomous noise. Relying on the composite master equation, an explicit expression for the stationary autocorrelation function (ACF) of population sizes is found. On the basis of this expression a nonmonotonic decay of the ACF with increasing lag-time is shown. Moreover, in a certain regime of the noise parameters the ACF demonstrates anticorrelation as well as related sign reversals at some values of the lag-time. The conditions for the appearance of this highly unexpected effect are also discussed.
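
    A simple numerical sketch (Python) of a Bernoulli-Verhulst (logistic) growth process whose carrying capacity switches between two levels according to dichotomous (telegraph) noise, together with an empirical autocorrelation estimate of the resulting population sizes; the growth rate, switching rate, and capacity levels are illustrative assumptions, and the paper's analytical ACF based on the composite master equation is not reproduced.

        import numpy as np

        def simulate(r=1.0, K_low=0.5, K_high=1.5, switch_rate=0.3,
                     dt=0.01, steps=200_000, seed=3):
            """Logistic (Bernoulli-Verhulst) growth with a dichotomously switching capacity."""
            rng = np.random.default_rng(seed)
            N, K = 1.0, K_high
            traj = np.empty(steps)
            for i in range(steps):
                if rng.random() < switch_rate * dt:      # telegraph switching of the capacity
                    K = K_low if K == K_high else K_high
                N += r * N * (1.0 - N / K) * dt
                traj[i] = N
            return traj[steps // 2:]                     # discard the transient

        def autocorr(x, lag):
            x = x - x.mean()
            return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

        traj = simulate()
        print([round(autocorr(traj, lag), 3) for lag in (100, 1000, 5000)])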

  17. Theory of the Maxwell pressure tensor and the tension in a water bridge.

    PubMed

    Widom, A; Swain, J; Silverberg, J; Sivasubramanian, S; Srivastava, Y N

    2009-07-01

    A water bridge refers to an experimental "flexible cable" made up of pure de-ionized water, which can hang across two supports maintained with a sufficiently large voltage difference. The resulting electric fields within the de-ionized water flexible cable maintain a tension that sustains the water against the downward force of gravity. A detailed calculation of the water bridge tension will be provided in terms of the Maxwell pressure tensor in a dielectric fluid medium. General properties of the dielectric liquid pressure tensor are discussed along with unusual features of dielectric fluid Bernoulli flows in an electric field. The "frictionless" Bernoulli flow is closely analogous to that of a superfluid.

  18. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  19. The influence of inertia on the efflux velocity: From Daniel Bernoulli to a contemporary theory

    NASA Astrophysics Data System (ADS)

    Malcherek, Andreas

    2015-11-01

    In 1644 Evangelista Torricelli claimed that the outflow velocity from a vessel is equal to the terminal speed of a body falling freely from the filling level h, i.e. v = √(2gh). Therefore the largest velocities are predicted when the level in the vessel is at its highest. As a consequence, the efflux would start with the highest velocity directly from the initiation of motion, which contradicts the inertia principle. In 1738 Daniel Bernoulli derived a much more sophisticated, unsteady outflow theory based on the conservation of potential and kinetic energy. Torricelli's law is obtained as a special case when inertia is neglected and the cross section of the opening is small compared to the vessel's cross section. To the authors' knowledge, this theory was never applied or even mentioned in textbooks, although it is superior to the Torricelli theory in many respects. In this paper Bernoulli's forgotten theory will be presented. Deriving this theory using state-of-the-art hydrodynamics results in a new formula v = √(gh). Although this formula contradicts Torricelli's principle, it is confirmed by all kinds of experiments, which indicate that a discharge coefficient of about β = 0.7 is needed in Torricelli's formula v = β√(2gh).
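
    As a quick numerical check of the claim above, the following minimal Python sketch (with a hypothetical filling level) compares Torricelli's prediction, Torricelli's formula with the empirical discharge coefficient β = 0.7 quoted in the abstract, and the inertia-consistent result v = √(gh). Since 1/√2 ≈ 0.707, the latter two nearly coincide, which is the point of the paper.

      import math

      g = 9.81     # gravitational acceleration, m/s^2
      h = 0.5      # hypothetical filling level, m
      beta = 0.7   # empirical discharge coefficient quoted in the abstract

      v_torricelli = math.sqrt(2 * g * h)          # classical Torricelli prediction
      v_corrected  = beta * math.sqrt(2 * g * h)   # Torricelli with discharge coefficient
      v_bernoulli  = math.sqrt(g * h)              # inertia-consistent result v = sqrt(g*h)

      print(f"Torricelli          : {v_torricelli:.2f} m/s")
      print(f"Torricelli, beta=0.7: {v_corrected:.2f} m/s")
      print(f"Inertia-corrected   : {v_bernoulli:.2f} m/s")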

  20. On the Theory of Multivariate Elliptically Contoured Distributions and Their Applications.

    DTIC Science & Technology

    1982-05-01

    elliptically contoured distributions has been studied by several authors: Schoenberg (1938), Kelker (1970), Devlin, Gnanadesikan and Kettenring (1976)...theory of elliptically contoured distributions, J. Multivariate Analysis, 11, 368-385. Devlin, S. J., Gnanadesikan, R., and Kettenring, J. R. (1976

  1. Distributed parameter modeling to prevent charge cancellation for discrete thickness piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Krishnasamy, M.; Qian, Feng; Zuo, Lei; Lenka, T. R.

    2018-03-01

    The charge cancellation due to the variation of strain along a single continuous piezoelectric layer can remarkably affect the performance of a cantilever-based harvester. In this paper, analytical models using distributed parameters are developed that avert the charge cancellation to some extent in cantilever piezoelectric transducers whose piezoelectric layers are segmented at the strain nodes of the vibration mode of interest. The electrodes of the piezoelectric segments are connected in parallel with a single external resistive load in the first model (Model 1), while each bimorph piezoelectric layer is connected in parallel with its own resistor to form an independent circuit in the second model (Model 2). Analytical expressions for the closed-form electromechanical coupling responses in the frequency domain under harmonic base excitation are derived based on the Euler-Bernoulli beam assumption for both models. The developed analytical models are validated by COMSOL and experimental results. The results demonstrate that the energy harvesting performance of the developed segmented piezoelectric layer models is better than that of the traditional model with a continuous piezoelectric layer.

  2. Development of MCAERO wing design panel method with interactive graphics module

    NASA Technical Reports Server (NTRS)

    Hawk, J. D.; Bristow, D. R.

    1984-01-01

    A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
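
    The last two steps described above (numerical differentiation of the surface potential to obtain velocity, then Bernoulli's equation to obtain pressure) can be sketched in a few lines. The Python fragment below is an illustrative assumption, not the MCAERO implementation; the incompressible pressure coefficient Cp = 1 − (v/v∞)² and the placeholder potential are ours.

      import numpy as np

      def pressure_coefficient(phi, s, v_inf):
          """Convert a surface potential distribution to pressure coefficients.

          phi   : potential sampled at surface control points (1D array)
          s     : arc-length coordinate of those control points (1D array)
          v_inf : freestream speed

          Tangential velocity is the arc-length derivative of the potential;
          pressure then follows from the incompressible Bernoulli equation,
          Cp = 1 - (v / v_inf)**2.
          """
          v_tangential = np.gradient(phi, s)   # numerical differentiation
          return 1.0 - (v_tangential / v_inf) ** 2

      # hypothetical example: a coarse set of control points
      s = np.linspace(0.0, 1.0, 50)
      phi = np.sin(np.pi * s)                  # placeholder potential distribution
      cp = pressure_coefficient(phi, s, v_inf=1.0)
      print(cp[:5])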

  3. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    PubMed

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known about the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis with SAS, SPSS, R and a newly developed Web application.
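
    The tutorial referenced above covers SAS, SPSS and R; as an additional hedged sketch, Mardia's classical multivariate skewness and kurtosis (one common choice of such measures, assumed here purely for illustration) can be computed with NumPy as follows.

      import numpy as np

      def mardia(x):
          """Mardia's multivariate skewness b_{1,p} and kurtosis b_{2,p}.

          x : (n, p) data matrix, rows are observations.
          Uses the maximum-likelihood covariance (divisor n), as in Mardia (1970).
          """
          x = np.asarray(x, dtype=float)
          n, p = x.shape
          centered = x - x.mean(axis=0)
          cov = centered.T @ centered / n
          # matrix of Mahalanobis cross-products between all pairs of observations
          d = centered @ np.linalg.solve(cov, centered.T)
          skewness = (d ** 3).sum() / n ** 2
          kurtosis = (np.diag(d) ** 2).sum() / n
          return skewness, kurtosis

      rng = np.random.default_rng(0)
      sample = rng.standard_normal((200, 3))   # hypothetical data
      # for multivariate normal data, b_{2,p} should be near p*(p+2) = 15
      print(mardia(sample))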

  4. Analytical study of sandwich structures using Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Khawaja, H.

    2017-01-01

    This paper presents an analytical study of sandwich structures. In this study, the Euler-Bernoulli beam equation is solved analytically for a four-point bending problem. Appropriate initial and boundary conditions are specified to close the problem. In addition, the balance coefficient is calculated and the Rule of Mixtures is applied. The focus of this study is to determine the effective material properties and geometric features, such as the moment of inertia, of a sandwich beam. The effective parameters help in the development of a generic analytical correlation for complex sandwich structures from the perspective of four-point bending calculations. The main outcomes of these analytical calculations are the lateral displacements and longitudinal stresses for each particular material in the sandwich structure.
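
    As a hedged illustration of the kind of effective-property calculation discussed (not the paper's derivation), the flexural rigidity of a symmetric sandwich cross-section can be assembled from the face and core contributions with the parallel-axis theorem; all dimensions and moduli below are hypothetical.

      def sandwich_flexural_rigidity(b, t_f, t_c, E_f, E_c):
          """Effective flexural rigidity (EI) of a symmetric sandwich beam.

          b   : beam width
          t_f : thickness of each face sheet
          t_c : core thickness
          E_f : face-sheet Young's modulus
          E_c : core Young's modulus

          Each face contributes its own bending stiffness plus a parallel-axis
          (Steiner) term; the core contributes its stiffness about the mid-plane.
          """
          d = (t_c + t_f) / 2.0                        # face-centroid offset from mid-plane
          I_face = b * t_f ** 3 / 12.0 + b * t_f * d ** 2
          I_core = b * t_c ** 3 / 12.0
          return 2.0 * E_f * I_face + E_c * I_core

      # hypothetical aluminium faces on a foam core (SI units)
      EI = sandwich_flexural_rigidity(b=0.05, t_f=0.001, t_c=0.02, E_f=70e9, E_c=0.1e9)
      print(f"effective EI = {EI:.1f} N*m^2")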

  5. Using Copula Distributions to Support More Accurate Imaging-Based Diagnostic Classifiers for Neuropsychiatric Disorders

    PubMed Central

    Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.

    2014-01-01

    Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging measures or their multivariate Gaussian distributions. Thus, our findings demonstrate that estimated multivariate Copula distributions can generate dense sets of brain imaging measures that can in turn be used to train classifiers, and those classifiers are significantly more accurate and more reproducible than are those generated using real-world imaging measures alone. PMID:25093634
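
    One common way to realize the pipeline described above is a Gaussian copula: map each measure to normal scores through its empirical distribution, estimate the correlation of the scores, sample densely, and map back through empirical quantiles. The sketch below is a minimal, assumption-laden version (the helper name, the Gaussian copula family and the gamma-distributed placeholder data are ours, not the authors').

      import numpy as np
      from scipy import stats

      def sample_gaussian_copula(x, n_samples, seed=None):
          """Draw synthetic rows whose marginals and rank correlations mimic x."""
          rng = np.random.default_rng(seed)
          n, p = x.shape
          # 1. map each column to normal scores through its empirical CDF
          u = np.column_stack([(stats.rankdata(x[:, j]) - 0.5) / n for j in range(p)])
          z = stats.norm.ppf(u)
          # 2. estimate the copula correlation matrix on the normal scores
          corr = np.corrcoef(z, rowvar=False)
          # 3. sample dense normal scores and map back via empirical quantiles
          z_new = rng.multivariate_normal(np.zeros(p), corr, size=n_samples)
          u_new = stats.norm.cdf(z_new)
          return np.column_stack([np.quantile(x[:, j], u_new[:, j]) for j in range(p)])

      rng = np.random.default_rng(1)
      observed = rng.gamma(shape=2.0, scale=1.0, size=(80, 4))   # hypothetical sparse measures
      dense = sample_gaussian_copula(observed, n_samples=5000, seed=1)
      print(dense.shape)   # (5000, 4)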

  6. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.

  7. LATENT SPACE MODELS FOR MULTIVIEW NETWORK DATA

    PubMed Central

    Salter-Townshend, Michael; McCormick, Tyler H.

    2018-01-01

    Social relationships consist of interactions along multiple dimensions. In social networks, this means that individuals form multiple types of relationships with the same person (e.g., an individual will not trust all of his/her acquaintances). Statistical models for these data require understanding two related types of dependence structure: (i) structure within each relationship type, or network view, and (ii) the association between views. In this paper, we propose a statistical framework that parsimoniously represents dependence between relationship types while also maintaining enough flexibility to allow individuals to serve different roles in different relationship types. Our approach builds on work on latent space models for networks [see, e.g., J. Amer. Statist. Assoc. 97 (2002) 1090–1098]. These models represent the propensity for two individuals to form edges as conditionally independent given the distance between the individuals in an unobserved social space. Our work departs from previous work in this area by representing dependence structure between network views through a multivariate Bernoulli likelihood, providing a representation of between-view association. This approach infers correlations between views not explained by the latent space model. Using our method, we explore 6 multiview network structures across 75 villages in rural southern Karnataka, India [Banerjee et al. (2013)]. PMID:29721127

  8. LATENT SPACE MODELS FOR MULTIVIEW NETWORK DATA.

    PubMed

    Salter-Townshend, Michael; McCormick, Tyler H

    2017-09-01

    Social relationships consist of interactions along multiple dimensions. In social networks, this means that individuals form multiple types of relationships with the same person (e.g., an individual will not trust all of his/her acquaintances). Statistical models for these data require understanding two related types of dependence structure: (i) structure within each relationship type, or network view, and (ii) the association between views. In this paper, we propose a statistical framework that parsimoniously represents dependence between relationship types while also maintaining enough flexibility to allow individuals to serve different roles in different relationship types. Our approach builds on work on latent space models for networks [see, e.g., J. Amer. Statist. Assoc. 97 (2002) 1090-1098]. These models represent the propensity for two individuals to form edges as conditionally independent given the distance between the individuals in an unobserved social space. Our work departs from previous work in this area by representing dependence structure between network views through a multivariate Bernoulli likelihood, providing a representation of between-view association. This approach infers correlations between views not explained by the latent space model. Using our method, we explore 6 multiview network structures across 75 villages in rural southern Karnataka, India [Banerjee et al. (2013)].
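
    For readers unfamiliar with the multivariate Bernoulli likelihood used here, the sketch below writes out its log-linear (natural-parameter) form for a small number of views and tabulates the joint probabilities; the parameter values are hypothetical and the code illustrates only the distribution itself, not the full latent space model.

      import numpy as np
      from itertools import product

      def multivariate_bernoulli_pmf(theta_main, theta_pair):
          """Probability table of a multivariate Bernoulli in log-linear form.

          theta_main : (K,) main-effect natural parameters, one per view
          theta_pair : (K, K) pairwise-interaction parameters (upper triangle used)

          p(y) is proportional to exp(sum_k theta_k*y_k + sum_{k<l} theta_kl*y_k*y_l).
          """
          K = len(theta_main)
          configs = np.array(list(product([0, 1], repeat=K)))
          log_unnorm = configs @ theta_main
          for k in range(K):
              for l in range(k + 1, K):
                  log_unnorm = log_unnorm + theta_pair[k, l] * configs[:, k] * configs[:, l]
          probs = np.exp(log_unnorm - log_unnorm.max())
          probs /= probs.sum()
          return configs, probs

      # two network views: a positive pairwise parameter means ties co-occur
      # more often than independence would predict
      configs, probs = multivariate_bernoulli_pmf(
          theta_main=np.array([-1.0, -1.5]),
          theta_pair=np.array([[0.0, 2.0], [0.0, 0.0]]),
      )
      for y, p in zip(configs, probs):
          print(y, round(float(p), 3))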

  9. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.

  10. On the Propagation of Nonlinear Acoustic Waves in Viscous and Thermoviscous Fluids

    DTIC Science & Technology

    2012-01-01

    continuity and momentum equations, which in 1D reduce to ϱ_t + uϱ_x + ϱu_x = 0, (6) and ϱ(u_t + uu_x) = −℘_x + (4/3 µ + µ_B)u_xx, (7) respectively, recalling that all... (Red)⁻¹F′ − ((1 − v_n²)/v_n^(3−2n))F = ϵβF² (n = 0, 1), (14) i.e., the associated ODEs of the former and latter are Bernoulli equations. Integrating these... (12), are of the Bernoulli type, namely, (Red)⁻¹F′ − ((1 − v_n²)/v_n)F = ϵ(n/2 + (β − n/2)v_n²)F², (20) which when integrated yield the

  11. Euler polynomials and identities for non-commutative operators

    NASA Astrophysics Data System (ADS)

    De Angelis, Valerio; Vignat, Christophe

    2015-12-01

    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, as given by Bender and Bettencourt [Phys. Rev. D 54(12), 7710-7723 (1996)], expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second one, by Pain [J. Phys. A: Math. Theor. 46, 035304 (2013)], links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Figueira de Morisson Faria and Fring [J. Phys. A: Math. Gen. 39, 9269 (2006)] in the context of non-Hermitian Hamiltonian systems. In each case, we provide several proofs and extensions of these identities that highlight the role of Euler and Bernoulli polynomials.

  12. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.

  13. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D is the dose and α and β are radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
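
    The central result above translates directly into a few lines of code: if lethal lesions are Poisson with mean αD + βD², clonogenic survival is the zero-lesion probability, which is exactly the linear-quadratic curve, and the same Poisson law predicts the frequencies of multiple aberrations. The α, β and dose values below are hypothetical.

      import numpy as np
      from scipy.stats import poisson

      alpha, beta = 0.3, 0.03                          # hypothetical constants (Gy^-1, Gy^-2)
      doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0])      # Gy

      mean_lesions = alpha * doses + beta * doses ** 2

      # survival = probability of zero lethal lesions = exp(-(alpha*D + beta*D^2))
      survival = poisson.pmf(0, mean_lesions)
      print(np.allclose(survival, np.exp(-mean_lesions)))   # True: the LQ curve

      # predicted frequencies of 0, 1, 2, ... lethal lesions (aberrations) at 4 Gy
      print(poisson.pmf(np.arange(5), mean_lesions[3]))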

  14. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

    Purpose To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  15. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Shapes of Bubbles and Drops in Motion.

    ERIC Educational Resources Information Center

    O'Connell, James

    2000-01-01

    Explains the shape distortions that take place in fluid packets (bubbles or drops) with steady flow motion by using the laws of Archimedes, Pascal, and Bernoulli rather than advanced vector calculus. (WRM)

  17. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete resampling and the multivariate distribution, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
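
    A minimal sketch of the two covariate-simulation strategies compared above is given below; the covariates, their ranges and the multivariate-normal assumption are illustrative placeholders rather than the study's actual model.

      import numpy as np

      rng = np.random.default_rng(42)

      # hypothetical observed covariates: age (years), baseline FEV1 (L), weight (kg)
      observed = np.column_stack([
          rng.normal(65, 8, 120),
          rng.normal(1.4, 0.4, 120),
          rng.normal(75, 12, 120),
      ])

      # Method 1: discrete re-sampling, drawing whole patient rows with replacement
      idx = rng.integers(0, observed.shape[0], size=1000)
      virtual_resampled = observed[idx]

      # Method 2: multivariate distribution, fitting mean and covariance, then sampling;
      # this can generate covariate combinations not present in the original pool
      mu = observed.mean(axis=0)
      cov = np.cov(observed, rowvar=False)
      virtual_parametric = rng.multivariate_normal(mu, cov, size=1000)

      print(np.corrcoef(virtual_resampled, rowvar=False).round(2))
      print(np.corrcoef(virtual_parametric, rowvar=False).round(2))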

  18. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  19. Event-triggered resilient filtering with stochastic uncertainties and successive packet dropouts via variance-constrained approach

    NASA Astrophysics Data System (ADS)

    Jia, Chaoqing; Hu, Jun; Chen, Dongyan; Liu, Yurong; Alsaadi, Fuad E.

    2018-07-01

    In this paper, we discuss the event-triggered resilient filtering problem for a class of time-varying systems subject to stochastic uncertainties and successive packet dropouts. The event-triggered mechanism is employed with the hope of reducing the communication burden and saving network resources. The stochastic uncertainties are considered to describe the modelling errors, and the phenomenon of successive packet dropouts is characterized by a random variable obeying the Bernoulli distribution. The aim of the paper is to provide a resilient event-based filtering approach for the addressed time-varying systems such that, for all stochastic uncertainties, successive packet dropouts and filter gain perturbations, an optimized upper bound of the filtering error covariance is obtained by designing the filter gain. Finally, simulations are provided to demonstrate the effectiveness of the proposed robust optimal filtering strategy.
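
    The Bernoulli packet-dropout mechanism mentioned above is often written as y_k = γ_k(Cx_k + v_k) with γ_k a Bernoulli random variable; the short simulation below illustrates that measurement model with hypothetical system matrices, and is not the paper's filter.

      import numpy as np

      rng = np.random.default_rng(7)

      A = np.array([[0.95, 0.1], [0.0, 0.9]])   # hypothetical state-transition matrix
      C = np.array([[1.0, 0.0]])                # hypothetical measurement matrix
      p_arrival = 0.8                           # probability that a packet gets through

      x = np.array([1.0, 0.0])
      measurements, arrivals = [], []
      for k in range(50):
          x = A @ x + 0.01 * rng.standard_normal(2)            # process noise
          gamma = rng.binomial(1, p_arrival)                    # Bernoulli dropout indicator
          y = gamma * (C @ x + 0.05 * rng.standard_normal(1))   # lost packet -> no information
          measurements.append(float(y[0]))
          arrivals.append(gamma)

      print(f"received {sum(arrivals)} of {len(arrivals)} packets")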

  20. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    NASA Astrophysics Data System (ADS)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  1. Bernoulli's Challenge

    NASA Astrophysics Data System (ADS)

    Bouffard, Karen

    1999-01-01

    This month's Olympic activity was brought to the Eastern Massachusetts Physics Olympics group by Ron DeFronzo of Pawtucket, Rhode Island. Using a hair dryer, contestants must maneuver a Ping-Pong ball into a three-dimensional "bullseye" target.

  2. Singing Corrugated Pipes.

    ERIC Educational Resources Information Center

    Cadwell, Louis H.

    1994-01-01

    This article describes different techniques used to measure air flow velocity. The two methods used were Crawford's Wastebasket and a video camera. The results were analyzed and compared to the air flow velocity predicted by Bernoulli's principle. (ZWH)

  3. Free vibration analysis of microtubules based on the molecular mechanics and continuum beam theory.

    PubMed

    Zhang, Jin; Wang, Chengyuan

    2016-10-01

    A molecular structural mechanics (MSM) method has been implemented to investigate the free vibration of microtubules (MTs). The emphasis is placed on the effects of the configuration and the imperfect boundaries of MTs. It is shown that the influence of protofilament number on the fundamental frequency is strong, while the effect of helix-start number is almost negligible. The fundamental frequency is also found to decrease as the number of the blocked filaments at boundaries decreases. Subsequently, the Euler-Bernoulli beam theory is employed to reveal the physics behind the simulation results. Fitting the Euler-Bernoulli beam into the MSM data leads to an explicit formula for the fundamental frequency of MTs with various configurations and identifies a possible correlation between the imperfect boundary conditions and the length-dependent bending stiffness of MTs reported in experiments.

  4. Nonlinear vocal fold dynamics resulting from asymmetric fluid loading on a two-mass model of speech

    NASA Astrophysics Data System (ADS)

    Erath, Byron D.; Zañartu, Matías; Peterson, Sean D.; Plesniak, Michael W.

    2011-09-01

    Nonlinear vocal fold dynamics arising from asymmetric flow formations within the glottis are investigated using a two-mass model of speech with asymmetric vocal fold tensioning, representative of unilateral vocal fold paralysis. A refined theoretical boundary-layer flow solver is implemented to compute the intraglottal pressures, providing a more realistic description of the flow than the standard one-dimensional, inviscid Bernoulli flow solution. Vocal fold dynamics are investigated for subglottal pressures of 0.6 < ps < 1.5 kPa and tension asymmetries of 0.5 < Q < 0.8. As tension asymmetries become pronounced the asymmetric flow incites nonlinear behavior in the vocal fold dynamics at subglottal pressures that are associated with normal speech, behavior that is not captured with standard Bernoulli flow solvers. Regions of bifurcation, coexistence of solutions, and chaos are identified.

  5. Numerical solutions of incompressible Navier-Stokes equations using modified Bernoulli's law

    NASA Astrophysics Data System (ADS)

    Shatalov, A.; Hafez, M.

    2003-11-01

    Simulations of incompressible flows are important for many practical applications in aeronautics and beyond, particularly in the high Reynolds number regime. The present formulation is based on Helmholtz velocity decomposition where the velocity is presented as the gradient of a potential plus a rotational component. Substituting in the continuity equation yields a Poisson equation for the potential which is solved with a zero normal derivative at solid surfaces. The momentum equation is used to update the rotational component with no slip/no penetration surface boundary conditions. The pressure is related to the potential function through a special relation which is a generalization of Bernoulli's law, with a viscous term included. Results of calculations for two- and three-dimensional problems prove that the present formulation is a valid approach, with some possible benefits compared to existing methods.

  6. Caught in the Draft

    NASA Astrophysics Data System (ADS)

    Edge, Ron

    2007-09-01

    We've all seen (in movies, newscasts, or perhaps in person) the violent effect of the downwash that occurs when a helicopter hovers over the ground. Leaves, grass, and debris are dramatically blown about. We've also sat in front of circulating room fans and felt a large draft, whereas there seems to be very little air movement behind the fan. The cause of this is a delightful manifestation of Bernoulli's principle. The fan blades, or helicopter rotor blades, produce a pressure differential as air passes through them: let us say p1 before and p2 after, as shown in Fig. 1, with p2 greater than p1. If p0 is the ambient pressure, Bernoulli's equation gives p0 = p1 + (1/2)ρv1², where v1 is the velocity of the air entering the fan. Continuity requires that v2 leaving the fan must equal v1 entering the fan for an incompressible fluid, approximately true here (Av1 = Av2, where A is the area swept out by the blades, the "rotor disk area"). However, some distance below the rotor (or in front of the fan) the velocity is vd (v_downdraft in the figure) and the pressure again p0, so Bernoulli gives us p2 + (1/2)ρv2² = (p1 + Δp) + (1/2)ρv1² = [p1 + (p2 − p1)] + (1/2)ρv1² = p2 + (1/2)ρv1² = p0 + (1/2)ρvd².

  7. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  8. Asymptotic Distribution of the Likelihood Ratio Test Statistic for Sphericity of Complex Multivariate Normal Distribution.

    DTIC Science & Technology

    1981-08-01

    RATIO TEST STATISTIC FOR SPHERICITY OF COMPLEX MULTIVARIATE NORMAL DISTRIBUTION* C. Fang P. R. Krishnaiah B. N. Nagarsenker** August 1981 Technical...and their applications in time series, the reader is referred to Krishnaiah (1976). Motivated by the applications in the area of inference on multiple...for practical purposes. Here, we note that Krishnaiah, Lee and Chang (1976) approximated the null distribution of certain power of the likeli

  9. Synoptic, Global Mhd Model For The Solar Corona

    NASA Astrophysics Data System (ADS)

    Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.

    2007-05-01

    The common techniques for mimicking the solar corona heating and the solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for the Alfvén wave turbulence; 2) some empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming the conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final speed originating from this point. By interpolating γ to a spherically uniform value on the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present the model results for different Carrington Rotations.

  10. A framework for multivariate data-based at-site flood frequency analysis: Essentiality of the conjugal application of parametric and nonparametric approaches

    NASA Astrophysics Data System (ADS)

    Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar

    2015-06-01

    In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and to provide guidelines for determining the design periods of flood control structures. Traditional FFA has been extensively performed under the univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, the basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; however, the nonparametric distribution may not always be a good fit, nor capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, the potential of obtaining the best fit using nonparametric distributions might be improved because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, thereby resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, this approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution from a comprehensive set of parametric and nonparametric distributions for the flood variables; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework for frequency analysis is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA, which shows that for seven rivers all the flood variables follow the nonparametric Gaussian kernel, whereas for the other rivers parametric distributions provide the best fit for one or two flood variables. Thus, the summary of results shows that the nonparametric method cannot substitute for the parametric and copula-based approaches, but should be considered in any at-site FFA to provide the broadest choice for the best estimation of flood return periods.
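
    The model-selection step in (ii) can be prototyped as below: fit a few parametric candidates and a Gaussian kernel density to an annual-maximum series and compare in-sample log-likelihoods. The candidate families and the synthetic data are assumptions, and a real analysis would rely on cross-validation or formal goodness-of-fit tests rather than raw in-sample likelihoods, which tend to flatter the kernel estimate.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # hypothetical annual peak flows (110 years, roughly Gumbel-like)
      peaks = stats.gumbel_r.rvs(loc=1000, scale=300, size=110, random_state=rng)

      candidates = {
          "gumbel":    stats.gumbel_r,
          "lognormal": stats.lognorm,
          "gev":       stats.genextreme,
      }

      scores = {}
      for name, dist in candidates.items():
          params = dist.fit(peaks)
          scores[name] = np.sum(dist.logpdf(peaks, *params))

      # nonparametric alternative: Gaussian kernel density estimate
      kde = stats.gaussian_kde(peaks)
      scores["gaussian_kernel"] = np.sum(np.log(kde(peaks)))

      for name, ll in sorted(scores.items(), key=lambda kv: -kv[1]):
          print(f"{name:16s} log-likelihood = {ll:.1f}")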

  11. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-02-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  12. Nonlinear dynamic analysis of cantilevered piezoelectric energy harvesters under simultaneous parametric and external excitations

    NASA Astrophysics Data System (ADS)

    Fang, Fei; Xia, Guanghui; Wang, Jianguo

    2018-06-01

    The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.

  13. Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints.

    PubMed

    Smith, Adam L; Villar, Sofía S

    2018-01-01

    Adaptive designs for multi-armed clinical trials have become increasingly popular recently because of their potential to shorten development times and to increase patient response. However, developing response-adaptive designs that offer patient benefit while ensuring the resulting trial provides a statistically rigorous and unbiased comparison of the different treatments included is highly challenging. In this paper, the theory of Multi-Armed Bandit Problems is used to define near-optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint with known variance. We report the operating characteristics (type I error, power, bias) and patient benefit of these approaches and alternative designs using simulation studies based on an ongoing trial. These results are then compared to those recently published in the context of Bernoulli endpoints. Many limitations and advantages are similar in both cases, but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the observed type I error inflation that bandit-based and adaptive rules can induce.

  14. Go Fly a Tetrahedron!

    ERIC Educational Resources Information Center

    Cowens, John

    1995-01-01

    Describes a science unit used in a fourth-grade class to teach students about Bernoulli's law of flight, the similarity of tetrahedrons to birds, and the construction of tetrahedron kites. Also includes thought-provoking math questions for students. (MDM)

  15. Multivariate η-μ fading distribution with arbitrary correlation model

    NASA Astrophysics Data System (ADS)

    Ghareeb, Ibrahim; Atiani, Amani

    2018-03-01

    An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated and not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over arbitrarily correlated and not necessarily identically distributed η-μ fading channels is determined by using the MGF-based approach. The effects of fading correlation between diversity branches, fading severity parameters and diversity level are studied.

  16. Head and facial anthropometry of mixed-race US Army male soldiers for military design and sizing: a pilot study.

    PubMed

    Yokota, Miyo

    2005-05-01

    In the United States, the biologically admixed population is increasing. Such demographic changes may affect the distribution of anthropometric characteristics, which are incorporated into the design of equipment and clothing for the US Army and other large organizations. The purpose of this study was to examine multivariate craniofacial anthropometric distributions between biologically admixed male populations and single racial groups of Black and White males. Multivariate statistical results suggested that nose breadth and lip length were different between Blacks and Whites. Such differences may be considered for adjustments to respirators and chemical-biological protective masks. However, based on this pilot study, multivariate anthropometric distributions of admixed individuals were within the distributions of single racial groups. Based on the sample reported, sizing and designing for the admixed groups are not necessary if anthropometric distributions of single racial groups comprising admixed groups are known.

  17. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

    Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Science Notes.

    ERIC Educational Resources Information Center

    Shaw, G. W.; And Others

    1989-01-01

    Provides a reading list for A- and S-level biology. Contains several experiments and demonstrations with topics on: the intestine, bullock corneal cells, valences, the science of tea, automated hydrolysis, electronics characteristics, bromine diffusion, enthalpy of vaporization determination, thermometers, pendulums, hovercraft, Bernoulli fluid…

  19. Alternate solution to generalized Bernoulli equations via an integrating factor: an exact differential equation approach

    NASA Astrophysics Data System (ADS)

    Tisdell, C. C.

    2017-08-01

    Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem are unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to that of Azevedo and Valentino, with the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
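
    As a numerical companion to the Gompertz example, its closed-form solution y(t) = K·exp(ln(y0/K)·e^(−rt)) can be checked against direct integration without any linearizing substitution; the parameter values in the sketch below are hypothetical.

      import numpy as np
      from scipy.integrate import solve_ivp

      r, K, y0 = 0.5, 100.0, 5.0            # hypothetical tumour-growth parameters

      def gompertz(t, y):
          # Gompertz equation: dy/dt = r * y * ln(K / y)
          return r * y * np.log(K / y)

      t_eval = np.linspace(0.0, 10.0, 50)
      numerical = solve_ivp(gompertz, (0.0, 10.0), [y0], t_eval=t_eval, rtol=1e-8).y[0]

      # closed-form Gompertz solution
      closed_form = K * np.exp(np.log(y0 / K) * np.exp(-r * t_eval))

      print(np.max(np.abs(numerical - closed_form)))   # should be tiny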

  20. Dynamics of 3D Timoshenko gyroelastic beams with large attitude changes for the gyros

    NASA Astrophysics Data System (ADS)

    Hassanpour, Soroosh; Heppler, G. R.

    2016-01-01

    This work is concerned with the theoretical development of dynamic equations for undamped gyroelastic beams which are dynamic systems with continuous inertia, elasticity, and gyricity. Assuming unrestricted or large attitude changes for the axes of the gyros and utilizing generalized Hooke's law, Duleau torsion theory, and Timoshenko bending theory, the energy expressions and equations of motion for the gyroelastic beams in three-dimensional space are derived. The so-obtained comprehensive gyroelastic beam model is compared against earlier gyroelastic beam models developed using Euler-Bernoulli beam models and is used to study the dynamics of gyroelastic beams through numerical examples. It is shown that there are significant differences between the developed unrestricted Timoshenko gyroelastic beam model and the previously derived zero-order restricted Euler-Bernoulli gyroelastic beam models. These differences are more pronounced in the short beam and transverse gyricity cases.

  1. Bernoulli substitution in the Ramsey model: Optimal trajectories under control constraints

    NASA Astrophysics Data System (ADS)

    Krasovskii, A. A.; Lebedev, P. D.; Tarasyev, A. M.

    2017-05-01

    We consider a neoclassical (economic) growth model. A nonlinear Ramsey equation, modeling capital dynamics, is reduced in the case of a Cobb-Douglas production function to a linear differential equation via a Bernoulli substitution. This considerably facilitates the search for a solution to the optimal growth problem with logarithmic preferences. The study deals with solving the corresponding infinite-horizon optimal control problem. We consider a vector field of the Hamiltonian system in the Pontryagin maximum principle, taking into account control constraints. We prove the existence of two alternative steady states, depending on the constraints. A proposed algorithm for constructing growth trajectories combines methods of open-loop control and closed-loop regulatory control. For some levels of constraints and initial conditions, a closed-form solution is obtained. We also demonstrate the impact of technological change on the economic equilibrium dynamics. Results are supported by computer calculations.
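
    The linearizing substitution can be illustrated on a simplified Cobb-Douglas capital accumulation equation k' = A·k^α − δ·k (a stand-in Bernoulli-type ODE, not the paper's full optimal-control problem): z = k^(1−α) satisfies the linear equation z' = (1−α)(A − δz), whose closed form can be compared with direct numerical integration.

      import numpy as np
      from scipy.integrate import solve_ivp

      A, alpha, delta, k0 = 1.0, 0.33, 0.05, 2.0   # hypothetical parameters

      def capital(t, k):
          # simplified Cobb-Douglas accumulation: a Bernoulli-type ODE in k
          return A * k ** alpha - delta * k

      t_eval = np.linspace(0.0, 100.0, 200)
      k_numerical = solve_ivp(capital, (0.0, 100.0), [k0], t_eval=t_eval, rtol=1e-9).y[0]

      # Bernoulli substitution z = k^(1 - alpha) gives the linear equation
      # z' = (1 - alpha) * (A - delta * z), solved in closed form:
      z0 = k0 ** (1.0 - alpha)
      z = A / delta + (z0 - A / delta) * np.exp(-(1.0 - alpha) * delta * t_eval)
      k_closed_form = z ** (1.0 / (1.0 - alpha))

      print(np.max(np.abs(k_numerical - k_closed_form)))   # agreement up to solver tolerance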

  2. Hydrodynamic pumping of a quantum Fermi liquid in a semiconductor heterostructure

    NASA Astrophysics Data System (ADS)

    Heremans, J. J.; Kantha, D.; Chen, H.; Govorov, A. O.

    2003-03-01

    We present experimental results for a pumping mechanism observed in mesoscopic structures patterned on two-dimensional electron systems in GaAs/AlGaAs heterostructures. The experiments are performed at low temperatures, in the ballistic regime. The effect is observed as a voltage or current signal corresponding to carrier extraction from sub-micron sized apertures, when these apertures are swept by a beam of ballistic electrons. The carrier extraction, phenomenologically reminiscent of the Bernoulli pumping effect in classical fluids, has been observed in various geometries. We ascertained linearity between measured voltage and injected current in all experiments, thereby excluding rectification effects. The linear response, however, points to a fundamental difference from the Bernoulli effect in classical liquids, where the response is nonlinear and quadratic in terms of the velocity. The temperature dependence of the effect will also be presented. We thank M. Shayegan (Princeton University) for the heterostructure growth, and acknowledge support from NSF DMR-0094055.

  3. Dynamic modelling and control of a rotating Euler-Bernoulli beam

    NASA Astrophysics Data System (ADS)

    Yang, J. B.; Jiang, L. J.; Chen, D. CH.

    2004-07-01

    Flexible motion of a uniform Euler-Bernoulli beam attached to a rotating rigid hub is investigated. Fully coupled non-linear integro-differential equations, describing axial, transverse and rotational motions of the beam, are derived by using the extended Hamilton's principle. The centrifugal stiffening effect is included in the derivation. A finite-dimensional model, including couplings of axial and transverse vibrations, and of elastic deformations and rigid motions, is obtained by the finite element method. By neglecting the axial motion, a simplified model, suitable for studying the transverse vibration and control of a beam undergoing large-angle, high-speed rotation, is presented. Suppression of transverse vibrations of a rotating beam is then simulated with this model by combining positive position feedback and momentum exchange feedback control laws. It is indicated that improved vibration control performance can be achieved with the method.

  4. Stationary spiral flow in polytropic stellar models

    PubMed Central

    Pekeris, C. L.

    1980-01-01

    It is shown that, in addition to the static Emden solution, a self-gravitating polytropic gas has a dynamic option in which there is stationary flow along spiral trajectories wound around the surfaces of concentric tori. The motion is obtained as a solution of a partial differential equation which is satisfied by the meridional stream function, coupled with Poisson's equation and a Bernoulli-type equation for the pressure (density). The pressure is affected by the whole of the Bernoulli term rather than by the centrifugal part only, which acts for a rotating model, and it may be reduced down to zero at the center. The spiral type of flow is illustrated for an incompressible fluid (n = 0), for which an exact solution is obtained. The features of the dynamic constant-density model are discussed as a basis for future comparison with the solution for compressible models. PMID:16592825

  5. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte-Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
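
    The hierarchical prior named above can be sketched in a few lines; this is only an illustration of drawing a vector from a Bernoulli-Gaussian sparse prior (dimensions and hyperparameters are assumed), not the authors' partially marginalized Gibbs sampler.

    ```python
    # Hedged sketch: a draw from a Bernoulli-Gaussian sparse prior. The parameter
    # values are illustrative only, not those of Boudineau et al.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p_active, sigma_x = 200, 0.05, 1.0   # assumed dimension and hyperparameters

    q = rng.random(n) < p_active            # Bernoulli indicators (the support)
    x = np.where(q, rng.normal(0.0, sigma_x, n), 0.0)   # Gaussian amplitudes on the support

    print(f"{q.sum()} nonzero coefficients out of {n}")
    ```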

  6. A novel experimental setup to study the Hagen-Poiseuille and Bernoulli equations for a gas and determination of the viscosity of air

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Surajit

    2015-11-01

    We have performed an experiment in which we have determined the viscosity of air using the Hagen-Poiseuille equation in the proper range of the Reynolds number (Re). The experiment is novel and simple, and students even at the high school level can perform it with minimal equipment. The experiment brings out the fact that determination of the viscosity of a fluid is possible only when its Reynolds number is sufficiently small. At very large Reynolds number, the gas behaves more like an inviscid fluid and its flow rate satisfies Bernoulli's equation. In the intermediate range of the Reynolds number, the flow rate satisfies neither the Hagen-Poiseuille equation nor the Bernoulli equation. A wide range of Reynolds numbers from 40 to about 5000 has been studied. In the case of air, this large range has not shown any sign of turbulence.
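
    A hedged worked example of the two regimes contrasted above: for an assumed tube geometry and flow rate (not the paper's measured values), compute the Reynolds number together with the pressure drops predicted by the Hagen-Poiseuille and Bernoulli relations.

    ```python
    # Illustrative numbers only: Reynolds number and the viscous vs. inviscid
    # pressure-drop predictions for air flowing through a narrow tube.
    import math

    rho, mu = 1.2, 1.8e-5          # air density (kg/m^3) and viscosity (Pa*s)
    radius, length = 1.0e-3, 0.10  # tube radius and length, m (assumed)
    Q = 2.0e-6                     # volumetric flow rate, m^3/s (assumed)

    v = Q / (math.pi * radius**2)                       # mean flow speed
    Re = rho * v * (2 * radius) / mu                    # diameter-based Reynolds number
    dp_poiseuille = 8 * mu * length * Q / (math.pi * radius**4)   # viscous regime
    dp_bernoulli = 0.5 * rho * v**2                               # inviscid regime

    print(f"Re = {Re:.0f}, dP(Hagen-Poiseuille) = {dp_poiseuille:.1f} Pa, "
          f"dP(Bernoulli) = {dp_bernoulli:.2f} Pa")
    ```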

  7. Risk-adjusted monitoring of survival times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.

    2009-02-26

    We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.
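
    For orientation, the comparison chart mentioned above (the risk-adjusted Bernoulli CUSUM) can be sketched with Steiner-type log-likelihood-ratio scores for an odds-ratio shift R; the patient risks, outcomes and R below are illustrative assumptions, and the RAST CUSUM itself uses accelerated failure time scores instead.

    ```python
    # Hedged sketch of a risk-adjusted Bernoulli CUSUM with log-likelihood-ratio
    # scores for a postulated odds-ratio increase R; all data here are made up.
    import numpy as np

    def ra_bernoulli_cusum(y, p, R=2.0):
        """y: 0/1 outcomes (1 = death), p: model-predicted death probabilities."""
        y, p = np.asarray(y, float), np.asarray(p, float)
        w = np.where(y == 1,
                     np.log(R / (1.0 - p + R * p)),     # score when the patient dies
                     np.log(1.0 / (1.0 - p + R * p)))   # score when the patient survives
        stat, s = [], 0.0
        for wt in w:
            s = max(0.0, s + wt)   # one-sided CUSUM recursion; signal when s crosses a limit
            stat.append(s)
        return np.array(stat)

    # toy example: 10 patients with assumed risks and outcomes
    print(ra_bernoulli_cusum([0, 0, 1, 0, 1, 1, 0, 0, 1, 0],
                             [0.1, 0.2, 0.15, 0.05, 0.3, 0.1, 0.2, 0.1, 0.25, 0.05]))
    ```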

  8. Stability analysis of internally damped rotating composite shafts using a finite element formulation

    NASA Astrophysics Data System (ADS)

    Ben Arab, Safa; Rodrigues, José Dias; Bouaziz, Slim; Haddar, Mohamed

    2018-04-01

    This paper deals with the stability analysis of internally damped rotating composite shafts. An Euler-Bernoulli shaft finite element formulation based on Equivalent Single Layer Theory (ESLT), including the hysteretic internal damping of composite material and transverse shear effects, is introduced and then used to evaluate the influence of various parameters: stacking sequences, fiber orientations and bearing properties on natural frequencies, critical speeds, and instability thresholds. The obtained results are compared with those available in the literature using different theories. The agreement in the obtained results shows that the developed Euler-Bernoulli finite element based on ESLT, including hysteretic internal damping and transverse shear effects, can be effectively used for the stability analysis of internally damped rotating composite shafts. Furthermore, the results revealed that rotor stability is sensitive to the laminate parameters and to the properties of the bearings.

  9. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full width at half maxima that are off by an average of 27% and matched full width at half maxima produce depths that are off by an average of 109%.

  10. Species distribution modelling for plant communities: Stacked single species or multivariate modelling approaches?

    Treesearch

    Emilie B. Henderson; Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Harold S.J. Zald

    2014-01-01

    Landscape management and conservation planning require maps of vegetation composition and structure over large regions. Species distribution models (SDMs) are often used for individual species, but projects mapping multiple species are rarer. We compare maps of plant community composition assembled by stacking results from many SDMs with multivariate maps constructed...

  11. Distributions of Characteristic Roots in Multivariate Analysis

    DTIC Science & Technology

    1976-07-01

    studied by various authors, have been briefly discussed. Such distributional ies of four test criteria and a few less important ones which are...functions h. -nots have further been discussed in view of the power comparisons made in connection with tests of three multivariate hypotheses. In addition...the one-sample case has also been considered in terms of distributional aspects of the characteristic roots and criteria for tests of two hypotheses on the

  12. Echocardiographic estimation of systemic systolic blood pressure in dogs with mild mitral regurgitation.

    PubMed

    Tou, Sandra P; Adin, Darcy B; Estrada, Amara H

    2006-01-01

    Systemic hypertension is likely underdiagnosed in veterinary medicine because systemic blood pressure is rarely measured. Systemic blood pressure can theoretically be estimated by echocardiography. According to the modified Bernoulli equation (PG = 4v²), mitral regurgitation (MR) velocity should approximate systolic left ventricular pressure (sLVP), and therefore systolic systemic blood pressure (sSBP) in the presence of a normal left atrial pressure (LAP) and the absence of aortic stenosis. The aim of this study was to evaluate the use of echocardiography to estimate sSBP by means of the Bernoulli equation. Systemic blood pressure can be estimated by echocardiography. Seventeen dogs with mild MR. No dogs had aortic or subaortic stenosis, and all had MR with a clear continuous-wave Doppler signal and a left atrial to aorta ratio of ≤ 1.6. Five simultaneous, blinded continuous-wave measurements of maximum MR velocity (Vmax) and indirect sSBP measurements (by Park's Doppler) were obtained for each dog. Pressure gradient was calculated from Vmax by means of the Bernoulli equation, averaged, and added to an assumed LAP of 8 mm Hg to calculate sLVP. Calculated sLVP was significantly correlated with indirectly measured sSBP within a range of 121 to 218 mm Hg (P = .0002, r = .78). Mean ± SD bias was 0.1 ± 15.3 mm Hg with limits of agreement of -29.9 to 30.1 mm Hg. Despite the significant correlation, the wide limits of agreement between the methods hinder the clinical utility of echocardiographic estimation of blood pressure.
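
    A hedged worked example of the calculation described above; the jet velocity is illustrative, and the 8 mm Hg left atrial pressure is the assumed value used in the study.

    ```python
    # Modified Bernoulli estimate of systolic LV (and hence systemic) pressure
    # from an MR jet velocity; the velocity below is an invented example value.
    v_max = 5.2        # peak MR velocity, m/s (illustrative)
    lap = 8.0          # assumed left atrial pressure, mm Hg (as in the study)

    pressure_gradient = 4.0 * v_max**2      # modified Bernoulli: PG = 4 v^2
    slvp = pressure_gradient + lap          # estimated systolic LV pressure ~ sSBP
    print(f"PG = {pressure_gradient:.1f} mm Hg, estimated sSBP = {slvp:.1f} mm Hg")
    ```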

  13. Simulating Multivariate Nonnormal Data Using an Iterative Algorithm

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2008-01-01

    Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…

  14. Event-Based Variance-Constrained H∞ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

    This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the H∞ performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.

  15. Modeling the diffusion of complex innovations as a process of opinion formation through social networks.

    PubMed

    Assenova, Valentina A

    2018-01-01

    Complex innovations (ideas, practices, and technologies that hold uncertain benefits for potential adopters) often vary in their ability to diffuse in different communities over time. To explain why, I develop a model of innovation adoption in which agents engage in naïve (DeGroot) learning about the value of an innovation within their social networks. Using simulations on Bernoulli random graphs, I examine how adoption varies with network properties and with the distribution of initial opinions and adoption thresholds. The results show that: (i) low-density and high-asymmetry networks produce polarization in influence to adopt an innovation over time, (ii) increasing network density and asymmetry promote adoption under a variety of opinion and threshold distributions, and (iii) the optimal levels of density and asymmetry in networks depend on the distribution of thresholds: networks with high density (>0.25) and high asymmetry (>0.50) are optimal for maximizing diffusion when adoption thresholds are right-skewed (i.e., barriers to adoption are low), but networks with low density (<0.01) and low asymmetry (<0.25) are optimal when thresholds are left-skewed. I draw on data from a diffusion field experiment to predict adoption over time and compare the results to observed outcomes.
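
    A minimal sketch of the model class described above, assuming a Bernoulli (Erdős-Rényi) adjacency matrix, row-normalized DeGroot updating, and heterogeneous adoption thresholds; the network size, density and number of steps are illustrative, not the paper's simulation settings.

    ```python
    # Hedged sketch: naive (DeGroot) opinion updating on a Bernoulli random graph
    # with threshold-based adoption. All parameter values are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n, density, steps = 100, 0.05, 50

    A = (rng.random((n, n)) < density).astype(float)   # Bernoulli random adjacency
    np.fill_diagonal(A, 1.0)                           # agents also weight themselves
    W = A / A.sum(axis=1, keepdims=True)               # row-stochastic influence weights

    opinions = rng.random(n)       # initial opinions about the innovation's value
    thresholds = rng.random(n)     # heterogeneous adoption thresholds

    for _ in range(steps):
        opinions = W @ opinions    # DeGroot update: weighted average of neighbours

    adopted = opinions > thresholds
    print(f"adoption rate after {steps} steps: {adopted.mean():.2f}")
    ```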

  16. Robust multivariate nonparametric tests for detection of two-sample location shift in clinical trials

    PubMed Central

    Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo

    2018-01-01

    This article presents and investigates performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555

  17. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559

  18. Los Alamos NEP research in advanced plasma thrusters

    NASA Technical Reports Server (NTRS)

    Schoenberg, Kurt; Gerwin, Richard

    1991-01-01

    Research was initiated in advanced plasma thrusters that capitalizes on lab capabilities in plasma science and technology. The goal of the program was to examine the scaling issues of magnetoplasmadynamic (MPD) thruster performance in support of NASA's MPD thruster development program. The objective was to address multi-megawatt, large scale, quasi-steady state MPD thruster performance. Results to date include a new quasi-steady state operating regime, obtained at Space Exploration Initiative relevant power levels, that enables direct coaxial gun-MPD comparisons of thruster physics and performance. The radiative losses are negligible. Operation with an applied axial magnetic field shows the same operational stability and exhaust plume uniformity benefits seen in MPD thrusters. Observed gun impedance is in close agreement with the magnetic Bernoulli model predictions. Spatial and temporal measurements of magnetic field, electric field, plasma density, electron temperature, and ion/neutral energy distribution are underway. Model applications to advanced mission logistics are also underway.

  19. Bending analysis of agglomerated carbon nanotube-reinforced beam resting on two parameters modified Vlasov model foundation

    NASA Astrophysics Data System (ADS)

    Ghorbanpour Arani, A.; Zamani, M. H.

    2018-06-01

    The present work deals with the bending behavior of a nanocomposite beam resting on a two-parameter modified Vlasov model foundation (MVMF), taking into account the agglomeration and distribution of carbon nanotubes (CNTs) in the beam matrix. An equivalent fiber based on the Eshelby-Mori-Tanaka approach is employed to determine the influence of CNT aggregation on the elastic properties of the CNT-reinforced beam. The governing equations are deduced using the principle of minimum potential energy under the assumptions of Euler-Bernoulli beam theory. The MVMF requires estimation of the γ parameter; for this purpose, a unique iterative technique based on variational principles is utilized to compute the value of γ, and subsequently the fourth-order differential equation is solved analytically. Eventually, the transverse displacements and bending stresses are obtained and compared for different agglomeration parameters and various boundary conditions, on an elastic foundation treated without the need to prescribe values for the foundation parameters.

  20. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
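
    One possible reading of the mixout idea, sketched here with NumPy under assumptions of our own (array shapes, softmax-style exponential probabilities, and a 0.5 Bernoulli balance): per unit, a Bernoulli draw selects either the maximum or the probability-weighted expectation of the k candidate feature mappings.

    ```python
    # Hedged sketch of a mixout-style pooling step; this is an illustration of the
    # described idea, not the authors' implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    def mixout(z, p_max=0.5):
        """z: array of shape (batch, units, k) holding k candidate feature mappings."""
        weights = np.exp(z - z.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # exponential probabilities
        expected = (weights * z).sum(axis=-1)            # expectation under those probabilities
        maximum = z.max(axis=-1)                         # classical maxout response
        pick_max = rng.random(expected.shape) < p_max    # Bernoulli balance between the two
        return np.where(pick_max, maximum, expected)

    print(mixout(rng.normal(size=(2, 3, 4))).shape)      # -> (2, 3)
    ```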

  1. Nonlinear finite amplitude torsional vibrations of cantilevers in viscous fluids

    NASA Astrophysics Data System (ADS)

    Aureli, Matteo; Pagano, Christopher; Porfiri, Maurizio

    2012-06-01

    In this paper, we study torsional vibrations of cantilever beams undergoing moderately large oscillations within a quiescent viscous fluid. The structure is modeled as an Euler-Bernoulli beam, with thin rectangular cross section, under base excitation. The distributed hydrodynamic loading experienced by the vibrating structure is described through a complex-valued hydrodynamic function which incorporates added mass and fluid damping elicited by moderately large rotations. We conduct a parametric study on the two dimensional computational fluid dynamics of a pitching rigid lamina, representative of a generic beam cross section, to investigate the dependence of the hydrodynamic function on the governing flow parameters. As the frequency and amplitude of the oscillation increase, vortex shedding and convection phenomena increase, thus resulting in nonlinear hydrodynamic damping. We derive a handleable nonlinear correction to the classical hydrodynamic function developed for small amplitude torsional vibrations for use in a reduced order nonlinear modal model and we validate theoretical results against experimental findings.

  2. Quasi-stationary mechanics of elastic continua with bending stiffness wrapping on a pulley system

    NASA Astrophysics Data System (ADS)

    Kaczmarczyk, S.; Mirhadizadeh, S.

    2016-05-01

    In many engineering applications elastic continua such as ropes and belts are often subject to bending when they pass over pulleys / sheaves. In this paper the quasi-stationary mechanics of a cable-pulley system is studied. The cable is modelled as a moving Euler-Bernoulli beam. The distribution of tension is non-uniform along its span and, due to the bending stiffness, the contact points at the pulley-beam boundaries are not known a priori. The system is described by a set of nonlinear ordinary differential equations with undetermined boundary conditions. The resulting nonlinear Boundary Value Problem (BVP) with unknown boundaries is solved by converting the problem into the ‘standard’ form defined over a fixed interval. Numerical results obtained for a range of typical configurations with relevant boundary conditions applied demonstrate that due to the effects of bending stiffness the angles of wrap are reduced and the span tensions are increased.

  3. Physical Watermarking for Securing Cyber-Physical Systems via Packet Drop Injections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozel, Omur; Weekrakkody, Sean; Sinopoli, Bruno

    Physical watermarking is a well known solution for detecting integrity attacks on Cyber-Physical Systems (CPSs) such as the smart grid. Here, a random control input is injected into the system in order to authenticate physical dynamics and sensors which may have been corrupted by adversaries. Packet drops may naturally occur in a CPS due to network imperfections. To our knowledge, previous work has not considered the role of packet drops in detecting integrity attacks. In this paper, we investigate the merit of injecting Bernoulli packet drops into the control inputs sent to actuators as a new physical watermarking scheme. With the classical linear quadratic objective function and an independent and identically distributed packet drop injection sequence, we study the effect of packet drops on meeting security and control objectives. Our results indicate that the packet drops could act as a potential physical watermark for attack detection in CPSs.

  4. Apparatus for Teaching Physics.

    ERIC Educational Resources Information Center

    Minnix, Richard B.; Carpenter, D. Rae, Jr., Eds.

    1982-01-01

    Thirteen demonstrations using a capacitor-start induction motor fitted with an aluminum disk are described. Demonstrations illustrate principles from mechanics, fluids (Bernoulli's principle), waves (chladni patterns and doppler effect), magnetism, electricity, and light (mechanical color mixing). In addition, the instrument can measure friction…

  5. B(H) has a pure state that is not multiplicative on any masa.

    PubMed

    Akemann, Charles; Weaver, Nik

    2008-04-08

    Assuming the continuum hypothesis, we prove that B(H) has a pure state whose restriction to any masa is not pure. This resolves negatively old conjectures of Kadison and Singer and of Anderson.

  6. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research.

    PubMed

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-11-01

    Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  7. Alternative Proofs for Inequalities of Some Trigonometric Functions

    ERIC Educational Resources Information Center

    Guo, Bai-Ni; Qi, Feng

    2008-01-01

    By using an identity relating to Bernoulli's numbers and power series expansions of cotangent function and logarithms of functions involving sine function, cosine function and tangent function, four inequalities involving cotangent function, sine function, secant function and tangent function are established.

  8. A new subgrid-scale representation of hydrometeor fields using a multivariate PDF

    DOE PAGES

    Griffin, Brian M.; Larson, Vincent E.

    2016-06-03

    The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. In conclusion, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.

  9. Model Reduction in Biomechanics

    NASA Astrophysics Data System (ADS)

    Feng, Yan

    The mechanical behavior of the cell is largely governed by the cytoskeleton. Microtubules, actin, and intermediate filaments are the three main cytoskeletal polymers. Of these, microtubules are the stiffest and have multiple functions within a cell that include: providing tracks for intracellular transport, transmitting the mechanical force necessary for cell division during mitosis, and providing sufficient stiffness for propulsion in flagella and cilia. Microtubule mechanics has been studied by a variety of methods: detailed molecular dynamics (MD), coarse-grained models, engineering-type models, and elastic continuum models. In principle, atomistic MD simulations should be able to predict all desired mechanical properties of a single molecule; however, in practice, large computational resources are required to carry out a simulation of a larger biomolecular system. Because even the most ambitious all-atom models are of limited accessibility, and because multiscale molecular modeling and simulation are in demand, the emergence of reduced models is critically important for investigating the biomolecular dynamics that underlie many biological processes. Coarse-grained models, such as elastic network models and anisotropic network models, have been shown to be quite accurate in predicting microtubule mechanical response, but they still require significant computational resources. On the other hand, the microtubule can be treated as comprising materials with certain continuum material properties. Such continuum models, especially Euler-Bernoulli beam models in which the biomolecular system is assumed to be a homogeneous, isotropic material with a solid cross-section, are often used to extract mechanical parameters from experimental results. However, in the real biological world, these homogeneity and isotropy assumptions are usually invalid. Thus, instead of using a hypothesized model, a specific continuum model at the mesoscopic scale can be introduced based upon data reduction of the results from molecular simulations at the atomistic level. Once a continuum model is established, it can provide details on the distribution of stresses and strains induced within the biomolecular system, which is useful in determining the distribution and transmission of these forces to cytoskeletal and sub-cellular components, and helps us gain a better understanding of cell mechanics. A data-driven model reduction approach to microtubule mechanics is presented as an application: a beam element is constructed for microtubules based upon data reduction of the results from molecular simulation of the carbon backbone chain of alpha-beta-tubulin dimers. The database of mechanical responses to various types of loads from molecular simulation is reduced to dominant modes. The dominant modes are subsequently used to construct the stiffness matrix of a beam element that captures the anisotropic behavior and deformation mode coupling that arise from a microtubule's spiral structure. In contrast to standard Euler-Bernoulli or Timoshenko beam elements, the link between forces and node displacements results not from hypothesized deformation behavior, but directly from the data obtained by molecular-scale simulation.
Differences between the resulting microtubule data-driven beam model (MTDDBM) and standard beam elements are presented, with a focus on the coupling of bending, stretch, and shear deformations. The MTDDBM is just as economical to use as a standard beam element, and allows accurate reconstruction of the mechanical behavior of structures within a cell, as exemplified in a simple model of a component element of the mitotic spindle.

  10. Ergodicity of two hard balls in integrable polygons

    NASA Astrophysics Data System (ADS)

    Bálint, Péter; Troubetzkoy, Serge

    2004-11-01

    We prove the hyperbolicity, ergodicity and thus the Bernoulli property of two hard balls in one of the following four polygons: the square, the equilateral triangle, the 45°-45°-90° triangle or the 30°-60°-90° triangle.

  11. Noninvasive estimation of transmitral pressure drop across the normal mitral valve in humans: importance of convective and inertial forces during left ventricular filling

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Vandervoort, P. M.; Greenberg, N. L.; Smedira, N. G.; McCarthy, P. M.; Garcia, M. J.; Thomas, J. D.

    2000-01-01

    OBJECTIVES: We hypothesized that color M-mode (CMM) images could be used to solve the Euler equation, yielding regional pressure gradients along the scanline, which could then be integrated to yield the unsteady Bernoulli equation and estimate noninvasively both the convective and inertial components of the transmitral pressure difference. BACKGROUND: Pulsed and continuous wave Doppler velocity measurements are routinely used clinically to assess severity of stenotic and regurgitant valves. However, only the convective component of the pressure gradient is measured, thereby neglecting the contribution of inertial forces, which may be significant, particularly for nonstenotic valves. Color M-mode provides a spatiotemporal representation of flow across the mitral valve. METHODS: In eight patients undergoing coronary artery bypass grafting, high-fidelity left atrial and ventricular pressure measurements were obtained synchronously with transmitral CMM digital recordings. The instantaneous diastolic transmitral pressure difference was computed from the M-mode spatiotemporal velocity distribution using the unsteady flow form of the Bernoulli equation and was compared to the catheter measurements. RESULTS: From 56 beats in 16 hemodynamic stages, inclusion of the inertial term (Δp_I,max = 1.78 ± 1.30 mm Hg) in the noninvasive pressure difference calculation significantly increased the temporal correlation with catheter-based measurement (r = 0.35 ± 0.24 vs. 0.81 ± 0.15, p < 0.0001). It also allowed an accurate approximation of the peak pressure difference (Δp_(c+I),max = 0.95 Δp_cath,max + 0.24, r = 0.96, p < 0.001, error = 0.08 ± 0.54 mm Hg). CONCLUSIONS: Inertial forces are significant components of the maximal pressure drop across the normal mitral valve. These can be accurately estimated noninvasively using CMM recordings of transmitral flow, which should improve the understanding of diastolic filling and function of the heart.
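
    A hedged numerical sketch of the decomposition described above, with a synthetic velocity field standing in for the color M-mode recording: the convective term is ½ρ(v₂² - v₁²) and the inertial term is ρ∫(∂v/∂t)ds along the scanline.

    ```python
    # Illustration only: unsteady Bernoulli estimate of a transmitral pressure
    # difference from a synthetic v(s, t); numbers are not clinical data.
    import numpy as np

    rho = 1060.0                               # blood density, kg/m^3
    s = np.linspace(0.0, 0.05, 51)             # positions along the scanline, m
    t = np.linspace(0.0, 0.5, 251)             # time, s
    S, T = np.meshgrid(s, t, indexing="ij")
    v = 0.8 * np.sin(2 * np.pi * T / 0.5) * (1 + S / 0.05)   # synthetic velocity field, m/s

    dvdt = np.gradient(v, t, axis=1)
    convective = 0.5 * rho * (v[-1, :]**2 - v[0, :]**2)      # 0.5*rho*(v2^2 - v1^2), Pa
    inertial = rho * np.trapz(dvdt, s, axis=0)               # rho * integral of dv/dt ds, Pa
    dp_mmhg = (convective + inertial) / 133.322

    print(f"peak transmitral pressure difference ~ {np.abs(dp_mmhg).max():.1f} mm Hg")
    ```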

  12. Biomechanics of hair cell kinocilia: experimental measurement of kinocilium shaft stiffness and base rotational stiffness with Euler–Bernoulli and Timoshenko beam analysis

    PubMed Central

    Spoon, Corrie; Grant, Wally

    2011-01-01

    Vestibular hair cell bundles in the inner ear contain a single kinocilium composed of a 9+2 microtubule structure. Kinocilia play a crucial role in transmitting movement of the overlying mass, otoconial membrane or cupula to the mechanotransducing portion of the hair cell bundle. Little is known regarding the mechanical deformation properties of the kinocilium. Using a force-deflection technique, we measured two important mechanical properties of kinocilia in the utricle of a turtle, Trachemys (Pseudemys) scripta elegans. First, we measured the stiffness of kinocilia with different heights. These kinocilia were assumed to be homogenous cylindrical rods and were modeled as both isotropic Euler–Bernoulli beams and transversely isotropic Timoshenko beams. Two mechanical properties of the kinocilia were derived from the beam analysis: flexural rigidity (EI) and shear rigidity (kGA). The Timoshenko model produced a better fit to the experimental data, predicting EI = 10,400 pN μm² and kGA = 247 pN. Assuming a homogenous rod, the shear modulus (G = 1.9 kPa) was four orders of magnitude less than Young's modulus (E = 14.1 MPa), indicating that significant shear deformation occurs within deflected kinocilia. When analyzed as an Euler–Bernoulli beam, which neglects translational shear, EI increased linearly with kinocilium height, giving underestimates of EI for shorter kinocilia. Second, we measured the rotational stiffness of the kinocilium insertion (κ) into the hair cell's apical surface. Following BAPTA treatment to break the kinocilial links, the kinocilia remained upright, and κ was measured as 177 ± 47 pN μm rad⁻¹. The mechanical parameters we quantified are important for understanding how forces arising from head movement are transduced and encoded by hair cells. PMID:21307074

  13. Localization of quantum Bernoulli noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Caishi; Zhang, Jihong

    2013-10-15

    The family (∂_k, ∂_k*)_{k≥0} of annihilation and creation operators acting on square integrable functionals of a Bernoulli process Z = (Z_k)_{k≥0} can be interpreted as quantum Bernoulli noises. In this note we consider the operator family (ℓ_k, ℓ_k*)_{k≥0}, where ℓ_k = ∂_k E_k with E_k being the conditional expectation (operator) given the σ-field σ(Z_j; 0 ≤ j ≤ k). We show that ℓ_k (resp. ℓ_k*) is essentially a kind of localization of the annihilation operator ∂_k (resp. creation operator ∂_k*). We examine properties of the family (ℓ_k, ℓ_k*)_{k≥0} and prove, among other things, that ℓ_k and ℓ_k* satisfy a local canonical anti-commutation relation and that (ℓ_k*)_{k≥0} forms a mutually orthogonal operator sequence although each ℓ_k is not a projection operator. We find that the operator series Σ_{k=0}^∞ ℓ_k* X ℓ_k converges in the strong operator topology for each bounded operator X acting on square integrable functionals of Z. In particular we get an explicit sum of the operator series Σ_{k=0}^∞ ℓ_k* ℓ_k. A useful norm estimate on Σ_{k=0}^∞ ℓ_k* X ℓ_k is also obtained. Finally we show applications of our main results to quantum dynamical semigroups and quantum probability.

  14. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    PubMed

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.

  15. Flutter of wings involving a locally distributed flexible control surface

    NASA Astrophysics Data System (ADS)

    Mozaffari-Jovin, S.; Firouz-Abadi, R. D.; Roshanian, J.

    2015-11-01

    This paper undertakes to facilitate appraisal of aeroelastic interaction of a locally distributed, flap-type control surface with aircraft wings operating in a subsonic potential flow field. The extended Hamilton's principle serves as a framework to ascertain the Euler-Lagrange equations for coupled bending-torsional-flap vibration. An analytical solution to this boundary-value problem is then accomplished by assumed modes and the extended Galerkin's method. The developed aeroelastic model considers both the inherent flexibility of the control surface displaced on the wing and the inertial coupling between these two flexible bodies. The structural deformations also obey the Euler-Bernoulli beam theory, along with the Kelvin-Voigt viscoelastic constitutive law. Meanwhile, the unsteady thin-airfoil and strip theories are the tools of producing the three-dimensional airloads. The origin of aerodynamic instability undergoes analysis in light of the oscillatory loads as well as the loads owing to arbitrary motions. After successful verification of the model, a systematic flutter survey was conducted on the theoretical effects of various control surface parameters. The results obtained demonstrate that the flapping modes and parameters of the control surface can significantly impact the flutter characteristics of the wings, which leads to a series of pertinent conclusions.

  16. Idea Bank.

    ERIC Educational Resources Information Center

    Science Teacher, 1989

    1989-01-01

    Describes classroom activities and models for migration, mutation, and isolation; a diffusion model; Bernoulli's principle; sound in a vacuum; time regression mystery of DNA; seating chart lesson plan; algae mystery laboratory; water as mass; science fair; flipped book; making a cloud; wet mount slide; timer adaptation; thread slide model; and…

  17. Characterizing information propagation through inter-vehicle communication on a simple network of two parallel roads

    DOT National Transportation Integrated Search

    2010-10-01

    In this report, we study information propagation via inter-vehicle communication along two parallel roads. By identifying an inherent Bernoulli process, we are able to derive the mean and variance of propagation distance. A road separation distan...

  18. Origins of astronautics in Switzerland

    NASA Technical Reports Server (NTRS)

    Wadlis, A.

    1977-01-01

    Swiss contributions to astronautics are recounted. Scientists mentioned include: Bernoulli and Euler for their early theoretical contributions; the balloonist, Auguste Piccard; J. Ackeret, for his contributions to the study of aerodynamics; the rocket propulsion pioneer, Josef Stemmer; and the Swiss space scientists, Eugster, Stettbacker, Zwicky, and Schurch.

  19. Fun with Physics.

    ERIC Educational Resources Information Center

    McGrath, Susan

    This book shows how physics relates to daily life. Chapters included are: (1) "Physics of Fun" (dealing with the concepts of friction, Bernoulli's principle, lift, buoyancy, adhesion, cohesion, surface tension, gas expansion, waves, light, mirror images, and solar cells); (2) "Physics of Nature" (illustrating the concepts of inertia, static…

  20. Learning Physics in a Water Park

    ERIC Educational Resources Information Center

    Cabeza, Cecilia; Rubido, Nicolás; Martí, Arturo C.

    2014-01-01

    Entertaining and educational experiments that can be conducted in a water park, illustrating physics concepts, principles and fundamental laws, are described. These experiments are suitable for students ranging from senior secondary school to junior university level. Newton's laws of motion, Bernoulli's equation, based on the conservation of…

  1. Towards Explaining the Water Siphon

    ERIC Educational Resources Information Center

    Jumper, William D.; Stanchev, Boris

    2014-01-01

    Many high school and introductory college physics courses cover topics in fluidics through the Bernoulli and Poiseuille equations, and consequently one might think that siphons should present an excellent opportunity to engage students in various laboratory measurement exercises incorporating these fascinating devices. However, the flow rates (or…

  2. Glottal flow through a two-mass model: comparison of Navier-Stokes solutions with simplified models.

    PubMed

    de Vries, M P; Schutte, H K; Veldman, A E P; Verkerke, G J

    2002-04-01

    A new numerical model of the vocal folds is presented based on the well-known two-mass models of the vocal folds. The two-mass model is coupled to a model of glottal airflow based on the incompressible Navier-Stokes equations. Glottal waves are produced using different initial glottal gaps and different subglottal pressures. Fundamental frequency, glottal peak flow, and closed phase of the glottal waves have been compared with values known from the literature. The phonation threshold pressure was determined for different initial glottal gaps. The phonation threshold pressure obtained using the flow model with Navier-Stokes equations corresponds better to values determined in normal phonation than the phonation threshold pressure obtained using the flow model based on the Bernoulli equation. Using the Navier-Stokes equations, an increase of the subglottal pressure causes the fundamental frequency and the glottal peak flow to increase, whereas the fundamental frequency in the Bernoulli-based model does not change with increasing pressure.

  3. Nonequilibrium Transport and the Bernoulli Effect of Electrons in a Two-Dimensional Electron Gas

    NASA Astrophysics Data System (ADS)

    Kaya, Ismet I.

    2013-02-01

    Nonequilibrium transport of charged carriers in a two-dimensional electron gas is summarized from an experimental point of view. The transport regime in which the electron-electron interactions are enhanced at high bias leads to a range of striking effects in a two-dimensional electron gas. This regime of transport is quite different from the ballistic transport in which particles propagate coherently with no intercarrier energy transfer, and from the diffusive transport in which the momentum of the electron system is lost with the involvement of phonons. Quite a few hydrodynamic phenomena observed in classical gases have electrical analogs in the current flow. When intercarrier scattering events dominate the transport, the momentum sharing via narrow-angle scattering among the hot and cold electrons leads to negative resistance and electron pumping, which can be viewed as the analog of the Bernoulli-Venturi effect observed in classical gases. The recent experimental findings and the background work in the field are reviewed.

  4. Long-term stable time integration scheme for dynamic analysis of planar geometrically exact Timoshenko beams

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed

    2017-05-01

    In this paper, an energy-momentum method for geometrically exact Timoshenko-type beam is proposed. The classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam with the use of rotational degree of freedom leads to simpler strain relations and simpler expressions of the inertial terms as compared to the well known Bernoulli-type model. The treatment of the Bernoulli-model has been recently addressed by the authors. In this present work, we extend our approach of using the strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are exactly computed. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum and angular momentum is proved formally and numerically. The excellent performance of the formulation will be demonstrated through a range of examples.

  5. Three dimensional steady subsonic Euler flows in bounded nozzles

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Xie, Chunjing

    The existence and uniqueness of three dimensional steady subsonic Euler flows in rectangular nozzles were obtained when prescribing the normal component of momentum at both the entrance and exit. If, in addition, the normal component of the vorticity and the variation of Bernoulli's function at the entrance are both zero, then there exists a unique subsonic potential flow when the magnitude of the normal component of the momentum is less than a critical number. As the magnitude of the normal component of the momentum approaches the critical number, the associated flows converge to a subsonic-sonic flow. Furthermore, when the normal component of vorticity and the variation of Bernoulli function are both small, the existence and uniqueness of subsonic Euler flows with non-zero vorticity are established. The proof of these results is based on a new formulation for the Euler system, a priori estimates for nonlinear elliptic equations with nonlinear boundary conditions, a detailed study of a linear div-curl system, and delicate estimates for the transport equations.

  6. Multi stage unreliable retrial Queueing system with Bernoulli vacation

    NASA Astrophysics Data System (ADS)

    Radha, J.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    In this work we consider Bernoulli vacations in group arrival retrial queues with an unreliable server. Here, the server provides service in k stages. If an arriving group of units finds the server free, one unit from the group enters the first stage of service and the rest join the orbit. After completion of the i-th (i = 1, 2, …, k) stage of service, the customer may go to the (i+1)-th stage with probability θi, or leave the system with probability qi = 1 - θi (i = 1, 2, …, k - 1) and qi = 1 (i = k). After finishing a service, the server may take a vacation (whether the orbit is empty or not) with probability v, or continue serving with probability 1 - v. After finishing the vacation, the server searches for a customer in the orbit with probability θ or remains idle, waiting for a new arrival, with probability 1 - θ. We analyze the system using the method of supplementary variables.

  7. Control volume analyses of glottal flow using a fully-coupled numerical fluid-structure interaction model

    NASA Astrophysics Data System (ADS)

    Yang, Jubiao; Krane, Michael; Zhang, Lucy

    2013-11-01

    Vocal fold vibrations and the glottal jet are successfully simulated using the modified Immersed Finite Element method (mIFEM), a fully coupled dynamics approach to model fluid-structure interactions. A self-sustained and steady vocal fold vibration is captured given a constant pressure input at the glottal entrance. The flow rates at different axial locations in the glottis are calculated, showing small variations among them due to the vocal fold motion and deformation. To further facilitate the understanding of the phonation process, two control volume analyses, specifically with Bernoulli's equation and Newton's 2nd law, are carried out for the glottal flow based on the simulation results. A generalized Bernoulli's equation is derived to interpret the correlations between the velocity and pressure temporally and spatially along the center line which is a streamline using a half-space model with symmetry boundary condition. A specialized Newton's 2nd law equation is developed and divided into terms to help understand the driving mechanism of the glottal flow.

  8. The behavior of a liquid drop levitated and drastically flattened by an intense sound field

    NASA Technical Reports Server (NTRS)

    Lee, C. P.; Anilkumar, A. V.; Wang, Taylor G.

    1992-01-01

    The deformation and break-up of a liquid drop levitated by radiation pressure are studied. Using high-speed photography, we observe ripples on the central membrane of the drop, atomization of the membrane by emission of satellite drops from its unstable ripples, and shattering of the drop after upward buckling like an umbrella, or after horizontal expansion like a sheet. These effects are captured on video. The ripples are theorized to be capillary waves generated by the Faraday instability excited by the sound vibration. Atomization occurs whenever the membrane becomes so thin that the vibration is sufficiently intense. The vibration leads to a destabilizing Bernoulli correction to the static pressure. Buckling occurs when an existing equilibrium is unstable to a radial (i.e., tangential) motion of the membrane because of the Bernoulli effect. In addition, the radiation stress at the rim of the drop is a suction stress which can make equilibrium impossible, leading to the horizontal expansion and the subsequent break-up.

  9. Football fever: goal distributions and non-Gaussian statistics

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopic point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows one to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the Cold War era, as well as the unified league after 1990, to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
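
    A hedged toy simulation of the modification described above (all parameter values are invented for illustration): each of N scoring opportunities is a Bernoulli trial whose success probability is inflated after every goal already scored, which broadens the goal distribution relative to a plain binomial process.

    ```python
    # Illustration of a self-affirmation-modified Bernoulli scoring process;
    # p0 (baseline scoring probability per step) and kappa (inflation per goal)
    # are assumed values, not fitted parameters from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_goals(n_matches=100_000, n_steps=90, p0=0.016, kappa=0.25):
        goals = np.zeros(n_matches, dtype=int)
        for _ in range(n_steps):
            p = p0 * (1.0 + kappa) ** goals            # probability grows with goals scored
            goals += (rng.random(n_matches) < p)       # one Bernoulli trial per step
        return goals

    g = simulate_goals()
    print(f"mean {g.mean():.2f}, variance {g.var():.2f} "
          "(a plain binomial would have variance below the mean; overdispersion "
          "here reflects the self-affirmation component)")
    ```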

  10. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations including the unsampled location permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification to an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
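
    A minimal sketch of the iterative fitting step is given below in Python for a two-location, three-facies example. The function and variable names are hypothetical, and the real implementation works with a sparse representation of a much higher-dimensional multivariate probability; this only shows how an initial joint table is repeatedly rescaled until its marginals match the imposed constraints.

        import numpy as np

        def ipf_2d(joint_init, row_marg, col_marg, tol=1e-10, max_iter=500):
            """Rescale an initial joint probability table until its row and
            column sums match the imposed marginal probabilities."""
            p = joint_init.astype(float).copy()
            for _ in range(max_iter):
                p *= (row_marg / p.sum(axis=1))[:, None]   # fit row marginals
                p *= (col_marg / p.sum(axis=0))[None, :]   # fit column marginals
                if (np.abs(p.sum(axis=1) - row_marg).max() < tol and
                        np.abs(p.sum(axis=0) - col_marg).max() < tol):
                    break
            return p

        # hypothetical example: three facies proportions at two locations
        init = np.full((3, 3), 1.0 / 9.0)          # independence as a starting guess
        p_loc1 = np.array([0.5, 0.3, 0.2])         # marginals inferred from well data
        p_loc2 = np.array([0.4, 0.4, 0.2])
        joint = ipf_2d(init, p_loc1, p_loc2)
        print(joint.sum(), joint.sum(axis=1), joint.sum(axis=0))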

  11. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
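
    The data-augmentation step for a binary trait can be pictured with the small Python sketch below: given the current value of the linear predictor on the liability scale, the latent liability is drawn from a normal distribution truncated above the threshold (zero) when the observed category is 1 and below it otherwise. This is only one conditional draw under assumed names; the full sampler described in the paper also updates location parameters and the residual covariance matrix.

        import numpy as np
        from scipy.stats import truncnorm

        def sample_liability(y, mu, sigma=1.0):
            """Draw the augmented liability for one binary observation."""
            if y == 1:
                a, b = (0.0 - mu) / sigma, np.inf      # liability above the threshold
            else:
                a, b = -np.inf, (0.0 - mu) / sigma     # liability below the threshold
            return truncnorm.rvs(a, b, loc=mu, scale=sigma)

        print(sample_liability(1, mu=-0.3), sample_liability(0, mu=-0.3))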

  12. The Priority Heuristic: Making Choices without Trade-Offs

    ERIC Educational Resources Information Center

    Brandstatter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph

    2006-01-01

    Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic…

  13. Apparatus for Teaching Physics

    ERIC Educational Resources Information Center

    Gottlieb, Herbert H.

    1977-01-01

    Describes: how to measure index of refraction by the thickness method; how to teach the concept of torque using a torque wrench; how to produce a real image with a concave mirror; how to eliminate the interface effects of Pyrex containers; and an apparatus to illustrate Bernoulli's Principle. (MLH)

  14. Understanding Wing Lift

    ERIC Educational Resources Information Center

    Silva, J.; Soares, A. A.

    2010-01-01

    The conventional explanation of aerodynamic lift based on Bernoulli's equation is one of the most common mistakes in presentations to school students and is found in children's science books. The fallacies in this explanation together with an alternative explanation for aerofoil lift have already been presented in an excellent article by Babinsky…

  15. In-plane vibration of FG micro/nano-mass sensor based on nonlocal theory under various thermal loading via differential transformation method

    NASA Astrophysics Data System (ADS)

    Rahmani, O.; Mohammadi Niaei, A.; Hosseini, S. A. H.; Shojaei, M.

    2017-01-01

    In the present study, a free vibration model of a cantilever functionally graded (FG) nanobeam with an attached mass at the tip, under various thermal loadings and two types of material distribution, is introduced. The vibration performance is considered using nonlocal Euler-Bernoulli beam theory. Two types of thermal loading, namely uniform and nonlinear temperature rises through the thickness direction, are considered. Thermo-mechanical properties of the FG nano-mass sensor are assumed to vary smoothly and continuously through the thickness based on power-law and Mori-Tanaka distributions of material properties. Eringen non-local elasticity theory is exploited to describe the size dependency of the FG nanobeam. The governing equations of the system with both axial and transverse displacements are derived based on Hamilton's principle and solved utilizing the differential transformation method (DTM) to find the non-dimensional natural frequencies. The results are in good agreement with those discussed in the literature. After validation of the present model, the effects of various parameters, such as the mass and position of the attached nanoparticle, the FG power-law exponent, the thermal load type, the material distribution type and the nonlocal parameter, on the frequency of the nano sensor are studied. It is shown that the present model produces results of high accuracy, and it can be used as a benchmark in future studies of the free vibration of FG Nano-Mass Sensors.

  16. Measuring firm size distribution with semi-nonparametric densities

    NASA Astrophysics Data System (ADS)

    Cortés, Lina M.; Mora-Valencia, Andrés; Perote, Javier

    2017-11-01

    In this article, we propose a new methodology based on a (log) semi-nonparametric (log-SNP) distribution that nests the lognormal and enables better fits in the upper tail of the distribution through the introduction of new parameters. We test the performance of the lognormal and log-SNP distributions capturing firm size, measured through a sample of US firms in 2004-2015. Taking different levels of aggregation by type of economic activity, our study shows that the log-SNP provides a better fit of the firm size distribution. We also formally introduce the multivariate log-SNP distribution, which encompasses the multivariate lognormal, to analyze the estimation of the joint distribution of the value of the firm's assets and sales. The results suggest that sales are a better firm size measure, as indicated by other studies in the literature.

  17. Baseline Experiments on Coulomb Damping due to Rotational Slip

    DTIC Science & Technology

    1992-12-01

    by Griffel [21]. As expected, Equation (2-39) matches the result given by Griffel. 2.2.2. Euler-Bernoulli Beam versus Timoshenko Beam. Omitted from Euler...McGraw-Hill, Inc., 1983. 20. Clark, S. K., Dynamics of Continuous Elements, New Jersey, Prentice-Hall, Inc., 1972. 21. Griffel, W., Beam Formulas

  18. Asteroid Lightcurve Analysis at Elephant Head Observatory: 2012 November - 2013 April

    NASA Astrophysics Data System (ADS)

    Alkema, Michael S.

    2013-07-01

    Thirteen asteroids were observed from Elephant Head Observatory from 2012 November to 2013 April: the main-belt asteroids 227 Philosophia, 331 Etheridgea, 577 Rhea, 644 Cosima, 850 Altona, 906 Repsolda, 964 Subamara, 973 Aralia, 1016 Anitra, 1024 Hale, 2034 Bernoulli, 2556 Louise, and Jupiter Trojan 3063 Makhaon.

  19. Eradicating a Disease: Lessons from Mathematical Epidemiology

    ERIC Educational Resources Information Center

    Glomski, Matthew; Ohanian, Edward

    2012-01-01

    Smallpox remains the only human disease ever eradicated. In this paper, we consider the mathematics behind control strategies used in the effort to eradicate smallpox, from the life tables of Daniel Bernoulli, to the more modern susceptible-infected-removed (SIR)-type compartmental models. In addition, we examine the mathematical feasibility of…

  20. When Science Soars.

    ERIC Educational Resources Information Center

    Baird, Kate A.; And Others

    1997-01-01

    Describes an inquiry-based activity involving paper airplanes that has been used as a preservice training tool for instructors of a Native American summer science camp, and as an activity for demonstrating inquiry-based methods in a secondary science methods course. Focuses on Bernoulli's principle which describes how fluids move over and around…

  1. Filling or Draining a Water Bottle with Two Holes

    ERIC Educational Resources Information Center

    Cross, Rod

    2016-01-01

    Three simple experiments are described using a small water bottle with two holes in the side of the bottle. The main challenge is to predict and then explain the observations, but the arrangements can also be used for quantitative measurements concerning hydrostatic pressure, Bernoulli's equation, surface tension and bubble formation.

  2. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1979-01-01

    Describes the following: a low-pressure sodium light source; a design of hot cathodes for plasma and electron physics experiments; a demonstration cart for a physics of sound course; Bernoulli force using coffee cups; a spark recording for the linear air track; and a demonstration of the effect of altering the cavity resonance of a violin. (GA)

  3. Capillary waves in the subcritical nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozyreff, G.

    2010-01-15

    We expand recent results on the nonlinear Schroedinger equation with cubic-quintic nonlinearity to show that some solutions are described by the Bernoulli equation in the presence of surface tension. As a consequence, capillary waves are predicted and found numerically at the interface between regions of large and low amplitude.

  4. Structural Influence of Dynamics of Bottom Loads

    DTIC Science & Technology

    2014-02-10

    using the Numerette research craft, are underway. Early analytic research on slamming was done by von Karman [5] using a momentum approach, and by...pressure q(x,t) as two constant pressures, q_i and q_j, traveling at a constant speed c. Using the Euler-Bernoulli beam assumptions the governing

  5. Nonlinear Acoustic Metamaterials for Sound Attenuation Applications

    DTIC Science & Technology

    2011-03-16

    elastic guides, which are discretized into Bernoulli-Euler beam elements [29]. We first describe the equations of particles’ motion in the DE model...to 613 N in the curved one [see Fig. 15(b)]. Overall, the area under the force-time curve, which corresponds to the amount of momentum transferred

  6. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method called the Riccati-Bernoulli sub-ODE technique is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. In addition, the control of the random input is studied for the stability of the stochastic process solution.

  7. Hermann-Bernoulli-Laplace-Hamilton-Runge-Lenz Vector.

    ERIC Educational Resources Information Center

    Subramanian, P. R.; And Others

    1991-01-01

    A way for students to refresh and use their knowledge in both mathematics and physics is presented. By the study of the properties of the "Runge-Lenz" vector the subjects of algebra, analytical geometry, calculus, classical mechanics, differential equations, matrices, quantum mechanics, trigonometry, and vector analysis can be reviewed. (KR)

  8. The Cheapbook: A Compendium of Inexpensive Exhibit Ideas, 1995 Edition.

    ERIC Educational Resources Information Center

    Orselli, Paul, Ed.

    This guide includes complete installation descriptions of 30 exhibits. They include: the adjustable birthday cake, ball-in-tube, Bernoulli Box, chain wave, collapsible truss bridge, double wave device, eddy currents raceway, full-length mirror, geodesic domes, giant magnetic tangrams, harmonic cantilever, hyperboloid of revolution, lifting lever,…

  9. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  10. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  11. Measures of dependence for multivariate Lévy distributions

    NASA Astrophysics Data System (ADS)

    Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.

    2001-02-01

    Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.

  12. Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data.

    PubMed

    Carroll, Rachel; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Aregay, Mehreteab; Watjou, Kevin

    2017-05-09

    Oral cavity and pharynx cancer, even when considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma cancer of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.

  13. EXTENDING MULTIVARIATE DISTANCE MATRIX REGRESSION WITH AN EFFECT SIZE MEASURE AND THE ASYMPTOTIC NULL DISTRIBUTION OF THE TEST STATISTIC

    PubMed Central

    McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.

    2017-01-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957

  14. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    PubMed

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.

  15. Heterogeneity Coefficients for Mahalanobis' D as a Multivariate Effect Size.

    PubMed

    Del Giudice, Marco

    2017-01-01

    The Mahalanobis distance D is the multivariate generalization of Cohen's d and can be used as a standardized effect size for multivariate differences between groups. An important issue in the interpretation of D is heterogeneity, that is, the extent to which contributions to the overall effect size are concentrated in a small subset of variables rather than evenly distributed across the whole set. Here I present two heterogeneity coefficients for D based on the Gini coefficient, a well-known index of inequality among values of a distribution. I discuss the properties and limitations of the two coefficients and illustrate their use by reanalyzing some published findings from studies of gender differences.
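
    The two ingredients can be sketched in Python as follows: a pooled-covariance Mahalanobis D between two groups, and a generic Gini coefficient applied to a vector of non-negative variable-wise contributions. How those contributions are defined, and how the paper's two heterogeneity coefficients are built from the Gini index, follows the article itself; the code below is only the generic machinery, with invented data.

        import numpy as np

        def mahalanobis_D(X, Y):
            """Multivariate standardized group difference (Mahalanobis D)."""
            d = X.mean(axis=0) - Y.mean(axis=0)
            n1, n2 = len(X), len(Y)
            S = ((n1 - 1) * np.cov(X, rowvar=False) +
                 (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)  # pooled covariance
            return float(np.sqrt(d @ np.linalg.solve(S, d)))

        def gini(contributions):
            """Gini coefficient of non-negative values (0 = even, near 1 = concentrated)."""
            v = np.sort(np.asarray(contributions, dtype=float))
            share = np.cumsum(v) / v.sum()
            n = v.size
            return float((n + 1 - 2 * share.sum()) / n)

        rng = np.random.default_rng(1)
        X, Y = rng.normal(0.0, 1.0, (200, 4)), rng.normal(0.4, 1.0, (200, 4))
        print(mahalanobis_D(X, Y), gini([0.05, 0.05, 0.1, 0.8]))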

  16. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
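
    The univariate building block of such algorithms, empirical quantile mapping, can be sketched in a few lines of Python. MBCn itself iterates random rotations of the multivariate data with a quantile-mapping step of this kind applied after each rotation; the sketch below shows only the univariate step, and all variable names and data are illustrative assumptions.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_fut):
            """Map model values to observations at the same empirical quantile."""
            q = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
            return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))

        rng = np.random.default_rng(2)
        model_hist = rng.gamma(2.0, 2.0, 5000)    # biased model climatology
        obs_hist = rng.gamma(2.0, 3.0, 5000)      # observed climatology
        model_fut = rng.gamma(2.2, 2.0, 1000)     # future model values
        corrected = quantile_map(model_hist, obs_hist, model_fut)
        print(model_fut.mean(), corrected.mean(), obs_hist.mean())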

  17. Inverse design of centrifugal compressor vaned diffusers in inlet shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zangeneh, M.

    1996-04-01

    A three-dimensional inverse design method in which the blade (or vane) geometry is designed for specified distributions of circulation and blade thickness is applied to the design of centrifugal compressor vaned diffusers. Two generic diffusers are designed, one with uniform inlet flow (equivalent to a conventional design) and the other with a sheared inlet flow. The inlet shear flow effects are modeled in the design method by using the so-called "Secondary Flow Approximation" in which the Bernoulli surfaces are convected by the tangentially mean inviscid flow field. The difference between the vane geometry of the uniform inlet flow and nonuniform inlet flow diffusers is found to be most significant from 50 percent chord to the trailing edge region. The flows through both diffusers are computed by using Denton's three-dimensional inviscid Euler solver and Dawes' three-dimensional Navier-Stokes solver under sheared in-flow conditions. The predictions indicate improved pressure recovery and internal flow field for the diffuser designed for shear inlet flow conditions.

  18. Stability analysis for discrete-time stochastic memristive neural networks with both leakage and probabilistic delays.

    PubMed

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E

    2018-06-01

    This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays as well as probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a given time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737
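
    One plausible reading of the described unit is sketched below in Python/NumPy for a stack of k feature maps: the softmax ("exponential probability") weighted expectation of the maps is computed alongside their maximum, and a Bernoulli mask balances the two per unit. The exact formulation in the paper may differ, for instance in how the balance is applied at training versus test time, so treat this purely as an illustration with assumed names.

        import numpy as np

        def mixout(feature_maps, p=0.5, rng=None):
            """Blend the max and the softmax-weighted expectation of k feature maps,
            choosing per unit with a Bernoulli(p) mask."""
            rng = rng or np.random.default_rng()
            z = np.stack(feature_maps, axis=0)                 # shape (k, ...)
            z_max = z.max(axis=0)
            w = np.exp(z - z.max(axis=0, keepdims=True))       # exponential probabilities
            w /= w.sum(axis=0, keepdims=True)
            z_exp = (w * z).sum(axis=0)                        # expected value
            mask = rng.random(z_max.shape) < p                 # Bernoulli balance
            return np.where(mask, z_max, z_exp)

        maps = [np.random.randn(4, 4) for _ in range(3)]       # k = 3 affine branches
        print(mixout(maps).shape)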

  20. Hydroelastic analysis of ice shelves under long wave excitation

    NASA Astrophysics Data System (ADS)

    Papathanasiou, T. K.; Karperaki, A. E.; Theotokoglou, E. E.; Belibassakis, K. A.

    2015-05-01

    The transient hydroelastic response of an ice shelf under long wave excitation is analysed by means of the finite element method. The simple model, presented in this work, is used for the simulation of the generated kinematic and stress fields in an ice shelf, when the latter interacts with a tsunami wave. The ice shelf, being of large length compared to its thickness, is modelled as an elastic Euler-Bernoulli beam, constrained at the grounding line. The hydrodynamic field is represented by the linearised shallow water equations. The numerical solution is based on the development of a special hydroelastic finite element for the system of governing equations. Motivated by the 2011 Sulzberger Ice Shelf (SIS) calving event and its correlation with the Honshu Tsunami, the SIS stable configuration is studied. The extreme values of the bending moment distribution in both space and time are examined. Finally, the location of these extrema is investigated for different values of ice shelf thickness and tsunami wave length.

  1. Hydroelastic analysis of ice shelves under long wave excitation

    NASA Astrophysics Data System (ADS)

    Papathanasiou, T. K.; Karperaki, A. E.; Theotokoglou, E. E.; Belibassakis, K. A.

    2015-08-01

    The transient hydroelastic response of an ice shelf under long wave excitation is analysed by means of the finite element method. The simple model, presented in this work, is used for the simulation of the generated kinematic and stress fields in an ice shelf, when the latter interacts with a tsunami wave. The ice shelf, being of large length compared to its thickness, is modelled as an elastic Euler-Bernoulli beam, constrained at the grounding line. The hydrodynamic field is represented by the linearised shallow water equations. The numerical solution is based on the development of a special hydroelastic finite element for the system of governing equations. Motivated by the 2011 Sulzberger Ice Shelf (SIS) calving event and its correlation with the Honshu Tsunami, the SIS stable configuration is studied. The extreme values of the bending moment distribution in both space and time are examined. Finally, the location of these extrema is investigated for different values of ice shelf thickness and tsunami wave length.

  2. Evaluation of the microscopic distribution of florfenicol in feed pellets for salmon by Fourier Transform infrared imaging and multivariate analysis.

    PubMed

    Bastidas, Camila Y; von Plessing, Carlos; Troncoso, José; Del P Castillo, Rosario

    2018-04-15

    Fourier Transform infrared imaging and multivariate analysis were used to identify, at the microscopic level, the presence of florfenicol (FF), a heavily-used antibiotic in the salmon industry, supplied to fish in feed pellets for the treatment of salmonid rickettsial septicemia (SRS). The FF distribution was evaluated using Principal Component Analysis (PCA) and Augmented Multivariate Curve Resolution with Alternating Least Squares (augmented MCR-ALS) on the spectra obtained from images with pixel sizes of 6.25 μm × 6.25 μm and 1.56 μm × 1.56 μm, in different zones of feed pellets. Since the concentration of the drug was 3.44 mg FF/g pellet, this is the first report showing the power of spectroscopic techniques combined with multivariate analysis, especially augmented MCR-ALS, to describe the FF distribution in both the surface and inner parts of feed pellets at low concentration, in a complex matrix and at the microscopic level. The results make it possible to monitor the incorporation of the drug into the feed pellets. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Translational Bounds for Factorial n and the Factorial Polynomial

    ERIC Educational Resources Information Center

    Mahmood, Munir; Edwards, Phillip

    2009-01-01

    During the period 1729-1826 Bernoulli, Euler, Goldbach and Legendre developed expressions for defining and evaluating "n"! and the related gamma function. Expressions related to "n"! and the gamma function are a common feature in computer science and engineering applications. In the modern computer age people live in now, two common tests to…

  4. Beckham as Physicist?

    ERIC Educational Resources Information Center

    Ireson, Gren

    2001-01-01

    If football captures the interest of students, it can be used to teach physics. In this case, a Beckham free-kick can be used to introduce concepts such as drag, the Bernoulli principle, Reynolds number, and the Magnus effect by asking the simple question: How does he curve the ball so much? Introduces basic mechanics along the way. (Author/ASK)

  5. Multi-Object Filtering for Space Situational Awareness

    DTIC Science & Technology

    2014-06-01

    labelling such as the labelled multi-Bernoulli filter [27]. 3.2 Filter derivation: key modelling assumptions. Out of the general filtering framework [14...radiation pressure in the cannonball model has been taken into account, leading to the following acceleration: arad = −Fp · C A m E c AEarth |r− rSun| esatSun

  6. The Demise of Decision Making: How Information Superiority Degrades Our Ability to Make Decisions

    DTIC Science & Technology

    2013-05-20

    studied the topic of risk in relation to decision making. In fact, Daniel Bernoulli produced findings in 1738 connecting risk aversion to wealth and...determined that they were stalled for some reason and not fighting.[34] Angry at this unplanned halt and potential loss of momentum, Franks sought answers

  7. Half Empty or Half Full?

    ERIC Educational Resources Information Center

    Rohr, Tyler; Rohr, Jim

    2015-01-01

    Previously appearing in this journal were photographs of a physics apparatus, developed circa 1880, that was believed to be used to demonstrate the "Bernoulli effect." Drawings of these photographs appear here and show that when there is no flow, the water level h[subscript PT2] in the piezometer tube at location (2) is at the same level…

  8. A Survival Model for Shortleaf Pine Trees Growing in Uneven-Aged Stands

    Treesearch

    Thomas B. Lynch; Lawrence R. Gering; Michael M. Huebschmann; Paul A. Murphy

    1999-01-01

    A survival model for shortleaf pine (Pinus echinata Mill.) trees growing in uneven-aged stands was developed using data from permanently established plots maintained by an industrial forestry company in western Arkansas. Parameters were fitted to a logistic regression model with a Bernoulli dependent variable in which "0" represented...
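
    The statistical core of such a model, logistic regression with a Bernoulli dependent variable, is easy to illustrate. The Python sketch below fits survival against two hypothetical tree-level predictors on simulated data; the predictors, coefficients, and data are invented for illustration and are not those of the shortleaf pine study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        dbh = rng.uniform(10, 50, 500)                 # tree diameter (cm), hypothetical
        ba = rng.uniform(10, 35, 500)                  # stand basal area (m^2/ha), hypothetical
        logit = -2.5 + 0.08 * dbh - 0.05 * ba          # assumed "true" survival model
        survived = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # Bernoulli outcome

        fit = LogisticRegression().fit(np.column_stack([dbh, ba]), survived)
        print(fit.intercept_, fit.coef_)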

  9. Simplified modelling and analysis of a rotating Euler-Bernoulli beam with a single cracked edge

    NASA Astrophysics Data System (ADS)

    Yashar, Ahmed; Ferguson, Neil; Ghandchi-Tehrani, Maryam

    2018-04-01

    The natural frequencies and mode shapes of the flapwise and chordwise vibrations of a rotating cracked Euler-Bernoulli beam are investigated using a simplified method. This approach is based on obtaining the lateral deflection of the cracked rotating beam by subtracting the potential energy of a rotating massless spring, which represents the crack, from the total potential energy of the intact rotating beam. With this new method, it is assumed that the admissible function which satisfies the geometric boundary conditions of an intact beam is valid even in the presence of a crack. Furthermore, the centrifugal stiffness due to rotation is considered as an additional stiffness, which is obtained from the rotational speed and the geometry of the beam. Finally, the Rayleigh-Ritz method is utilised to solve the eigenvalue problem. The validity of the results is confirmed at different rotational speeds, crack depth and location by comparison with solid and beam finite element model simulations. Furthermore, the mode shapes are compared with those obtained from finite element models using a Modal Assurance Criterion (MAC).
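
    The Rayleigh-Ritz idea underlying the method can be shown with a deliberately stripped-down Python example: a single admissible function satisfying the clamped-end geometric boundary conditions gives an upper-bound estimate of the fundamental frequency of an intact, non-rotating cantilever. The paper additionally subtracts the potential energy of the crack spring and adds the centrifugal stiffness before solving the full eigenvalue problem; those terms are not reproduced here, and the material values below are invented.

        import numpy as np
        from scipy.integrate import quad

        E, I, rho, A, L = 210e9, 1e-8, 7800.0, 1e-4, 1.0   # hypothetical beam properties

        phi = lambda x: (x / L) ** 2        # admissible function, clamped at x = 0
        phi_dd = lambda x: 2.0 / L ** 2     # its second derivative

        num, _ = quad(lambda x: E * I * phi_dd(x) ** 2, 0, L)   # strain energy term
        den, _ = quad(lambda x: rho * A * phi(x) ** 2, 0, L)    # kinetic energy term
        omega = np.sqrt(num / den)
        coeff = omega * np.sqrt(rho * A * L**4 / (E * I))
        print(f"estimate {omega:.1f} rad/s, coefficient {coeff:.3f} (exact 3.516)")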

  10. Acoustic Attraction

    NASA Astrophysics Data System (ADS)

    Oviatt, Eric; Patsiaouris, Konstantinos; Denardo, Bruce

    2009-11-01

    A sound source of finite size produces a diverging traveling wave in an unbounded fluid. A rigid body that is small compared to the wavelength experiences an attractive radiation force (toward the source). An attractive force is also exerted on the fluid itself. The effect can be demonstrated with a styrofoam ball suspended near a loudspeaker that is producing sound of high amplitude and low frequency (for example, 100 Hz). The behavior can be understood and roughly calculated as a time-averaged Bernoulli effect. A rigorous scattering calculation yields a radiation force that is within a factor of two of the Bernoulli result. For a spherical wave, the force decreases as the inverse fifth power of the distance from the source. Applications of the phenomenon include ultrasonic filtration of liquids and the growth of supermassive black holes that emit sound waves in a surrounding plasma. An experiment is being conducted in an anechoic chamber with a 1-inch diameter aluminum ball that is suspended from an analytical balance. Directly below the ball is a baffled loudspeaker that exerts an attractive force that is measured by the balance.

  11. δ-Generalized Labeled Multi-Bernoulli Filter Using Amplitude Information of Neighboring Cells

    PubMed Central

    Liu, Chao; Lei, Peng; Qi, Yaolong

    2018-01-01

    The amplitude information (AI) of echoed signals plays an important role in radar target detection and tracking. A lot of research shows that the introduction of AI enables the tracking algorithm to distinguish targets from clutter better and then improves the performance of data association. The current AI-aided tracking algorithms only consider the signal amplitude in the range-azimuth cell where a measurement exists. However, since radar echoes always contain backscattered signals from multiple cells, the useful information of neighboring cells would be lost if those existing methods were applied directly. In order to solve this issue, a new δ-generalized labeled multi-Bernoulli (δ-GLMB) filter is proposed. It exploits the AI of radar echoes from neighboring cells to construct a united amplitude likelihood ratio, and then plugs it into the update process and the measurement-track assignment cost matrix of the δ-GLMB filter. Simulation results show that the proposed approach has better performance in estimating the target’s state and number than the δ-GLMB using only single-cell AI in a low signal-to-clutter-ratio (SCR) environment. PMID:29642595

  12. The Modelling of Axially Translating Flexible Beams

    NASA Astrophysics Data System (ADS)

    Theodore, R. J.; Arakeri, J. H.; Ghosal, A.

    1996-04-01

    The axially translating flexible beam with a prismatic joint can be modelled by using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, a non-dimensional form of the Euler-Bernoulli beam equation is presented, obtained by using the concept of group velocity, and also the conditions under which separation of variables and the assumed modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. A novel method is presented for solving this time-dependent frequency equation by using a differential form of the frequency equation. The assumed modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. It is shown by using Lyapunov's first method that the dynamic responses of flexural modal variables become unstable during retraction of the flexible beam, whereas the dynamic response during extension of the beam is stable. Numerical simulation results are presented for the transverse vibration induced by uniform axial motion of a typical flexible beam.

  13. Symmetries and integrability of a fourth-order Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Bokhari, Ashfaque H.; Mahomed, F. M.; Zaman, F. D.

    2010-05-01

    The complete symmetry group classification of the fourth-order Euler-Bernoulli ordinary differential equation, where the elastic modulus and the area moment of inertia are constants and the applied load is a function of the normal displacement, is obtained. We perform the Lie and Noether symmetry analysis of this problem. In the Lie analysis, the principal Lie algebra which is one dimensional extends in four cases, viz. the linear, exponential, general power law, and a negative fractional power law. It is further shown that two cases arise in the Noether classification with respect to the standard Lagrangian. That is, the linear case for which the Noether algebra dimension is one less than the Lie algebra dimension as well as the negative fractional power law. In the latter case the Noether algebra is three dimensional and is isomorphic to the Lie algebra which is sl(2,R). This exceptional case, although admitting the nonsolvable algebra sl(2,R), remarkably allows for a two-parameter family of exact solutions via the Noether integrals. The Lie reduction gives a second-order ordinary differential equation which has nonlocal symmetry.

  14. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
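
    The basic principle of the pulse-mode arithmetic, multiplying two probabilities with a single AND gate applied to Bernoulli pulse sequences, is shown in the Python sketch below. This is only the textbook stochastic-computing operation, not the paper's network architecture or its hypergeometric error model; the sequence length and values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)

        def to_pulses(p, n):
            """Encode a probability p as a Bernoulli pulse sequence of length n."""
            return rng.random(n) < p

        n = 10_000
        a, b = 0.6, 0.3
        product = np.logical_and(to_pulses(a, n), to_pulses(b, n))  # AND gate = multiply
        print(product.mean(), "expected", a * b)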

  15. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA*

    PubMed Central

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-01-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474

  16. Examining the role of unmeasured confounding in mediation analysis with genetic and genomic applications.

    PubMed

    Lutz, Sharon M; Thwing, Annie; Schmiege, Sarah; Kroehl, Miranda; Baker, Christopher D; Starling, Anne P; Hokanson, John E; Ghosh, Debashis

    2017-07-19

    In mediation analysis if unmeasured confounding is present, the estimates for the direct and mediated effects may be over or under estimated. Most methods for the sensitivity analysis of unmeasured confounding in mediation have focused on the mediator-outcome relationship. The Umediation R package enables the user to simulate unmeasured confounding of the exposure-mediator, exposure-outcome, and mediator-outcome relationships in order to see how the results of the mediation analysis would change in the presence of unmeasured confounding. We apply the Umediation package to the Genetic Epidemiology of Chronic Obstructive Pulmonary Disease (COPDGene) study to examine the role of unmeasured confounding due to population stratification on the effect of a single nucleotide polymorphism (SNP) in the CHRNA5/3/B4 locus on pulmonary function decline as mediated by cigarette smoking. Umediation is a flexible R package that examines the role of unmeasured confounding in mediation analysis allowing for normally distributed or Bernoulli distributed exposures, outcomes, mediators, measured confounders, and unmeasured confounders. Umediation also accommodates multiple measured confounders, multiple unmeasured confounders, and allows for a mediator-exposure interaction on the outcome. Umediation is available as an R package at https://github.com/SharonLutz/Umediation A tutorial on how to install and use the Umediation package is available in the Additional file 1.

  17. Modelling lifetime data with multivariate Tweedie distribution

    NASA Astrophysics Data System (ADS)

    Nor, Siti Rohani Mohd; Yusof, Fadhilah; Bahar, Arifah

    2017-05-01

    This study aims to measure the dependence between individual lifetimes by applying the multivariate Tweedie distribution to lifetime data. Incorporating dependence between lifetimes in the mortality model is a relatively new idea that has a significant impact on the risk of an annuity portfolio, in contrast to standard actuarial methods, which assume independence between lifetimes. Hence, this paper applies the Tweedie family of distributions to the portfolio of lifetimes to induce dependence between lives. The Tweedie distribution is chosen since it contains symmetric and non-symmetric, as well as light-tailed and heavy-tailed, distributions. Parameter estimation is modified in order to fit the Tweedie distribution to the data; this procedure is developed using the method of moments. In addition, a comparison is made to check the adequacy of the fit between observed and expected mortality. Finally, the importance of including systematic mortality risk in the model is justified by Pearson's chi-squared test.

  18. Modeling and Simulation of Upset-Inducing Disturbances for Digital Systems in an Electromagnetic Reverberation Chamber

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
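
    The familiar univariate version of Inverse Transform Sampling from an empirical distribution is sketched below in Python; the report itself extends both this technique and Rejection Sampling to multivariate disturbance samples, which the sketch does not attempt, and the lognormal data are purely illustrative.

        import numpy as np

        def inverse_transform_sample(observed, n, rng=None):
            """Draw n new values from the empirical CDF of 1-D observations."""
            rng = rng or np.random.default_rng()
            u = rng.random(n)                        # uniform(0, 1) draws
            return np.quantile(observed, u)          # empirical inverse CDF

        observed = np.random.default_rng(5).lognormal(0.0, 0.5, 2000)
        print(inverse_transform_sample(observed, 5))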

  19. Collision partner selection schemes in DSMC: From micro/nano flows to hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan

    2016-10-01

    The motivation of this review paper is to present a detailed summary of different collision models developed in the framework of the direct simulation Monte Carlo (DSMC) method. The emphasis is put on a newly developed collision model, i.e., the Simplified Bernoulli trial (SBT), which permits efficient low-memory simulation of rarefied gas flows. The paper starts with a brief review of the governing equations of rarefied gas dynamics, including the Boltzmann and Kac master equations, and reiterates that the linear Kac equation reduces to a non-linear Boltzmann equation under the assumption of molecular chaos. An introduction to the DSMC method is provided, and principles of collision algorithms in the DSMC are discussed. A distinction is made between those collision models that are based on classical kinetic theory (time counter, no time counter (NTC), and nearest neighbor (NN)) and the other class that could be derived mathematically from the Kac master equation (pseudo-Poisson process, ballot box, majorant frequency, null collision, Bernoulli trials scheme and its variants). To provide a deeper insight, the derivation of both classes of collision models, either from the principles of the kinetic theory or from the Kac master equation, is provided with sufficient detail. Some discussions on the importance of subcells in the DSMC collision procedure are also provided and different types of subcells are presented. The paper then focuses on the simplified version of the Bernoulli trials algorithm (SBT) and presents a detailed summary of the validation of the SBT family of collision schemes (SBT on transient adaptive subcells: SBT-TAS, and intelligent SBT: ISBT) in a broad spectrum of rarefied gas-flow test cases, ranging from low-speed internal micro and nano flows to external hypersonic flow, emphasizing first the accuracy of these new collision models and second, demonstrating that the SBT family of schemes, compared to other conventional and recent collision models, requires a smaller number of particles per cell to obtain sufficiently accurate solutions.
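
    The flavour of a Bernoulli-trials-style pairing scheme can be conveyed with the Python sketch below for a single cell and time step: each particle is tested against one randomly chosen later partner, and the pair collides with a probability built from the product of cross-section and relative speed. The acceptance expression, the hard-sphere scattering, and all parameter values are simplified illustrations and assumptions, not the published SBT formulas.

        import numpy as np

        rng = np.random.default_rng(6)

        def sbt_like_collisions(vel, sigma, f_num, dt, v_cell):
            """One cell, one step: test each particle against one random later partner."""
            n = len(vel)
            for i in range(n - 1):
                j = rng.integers(i + 1, n)                    # single candidate partner
                c_r = np.linalg.norm(vel[i] - vel[j])         # relative speed
                p_coll = (n - 1) * f_num * sigma * c_r * dt / v_cell
                if rng.random() < min(p_coll, 1.0):
                    v_cm = 0.5 * (vel[i] + vel[j])            # hard sphere: isotropic scatter
                    cost = 2.0 * rng.random() - 1.0
                    sint = np.sqrt(1.0 - cost ** 2)
                    phi = 2.0 * np.pi * rng.random()
                    d = 0.5 * c_r * np.array([cost, sint * np.cos(phi), sint * np.sin(phi)])
                    vel[i], vel[j] = v_cm + d, v_cm - d
            return vel

        vel = rng.normal(0.0, 300.0, (50, 3))                 # 50 particles in the cell (arbitrary)
        vel = sbt_like_collisions(vel, sigma=1e-19, f_num=1e12, dt=1e-7, v_cell=1e-9)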

  20. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy.

    PubMed

    Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L

    2017-05-07

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade  ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC  =  0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.

  1. Esophageal wall dose-surface maps do not improve the predictive performance of a multivariable NTCP model for acute esophageal toxicity in advanced stage NSCLC patients treated with intensity-modulated (chemo-)radiotherapy

    NASA Astrophysics Data System (ADS)

    Dankers, Frank; Wijsman, Robin; Troost, Esther G. C.; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L.

    2017-05-01

    In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade  ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC  =  0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.

  2. The PIT-trap-A "model-free" bootstrap procedure for inference about regression models with discrete, multivariate responses.

    PubMed

    Warton, David I; Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)-common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of "model-free bootstrap", adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods.

  3. The PIT-trap—A “model-free” bootstrap procedure for inference about regression models with discrete, multivariate responses

    PubMed Central

    Thibaut, Loïc; Wang, Yi Alice

    2017-01-01

    Bootstrap methods are widely used in statistics, and bootstrapping of residuals can be especially useful in the regression context. However, difficulties are encountered extending residual resampling to regression settings where residuals are not identically distributed (thus not amenable to bootstrapping)—common examples including logistic or Poisson regression and generalizations to handle clustered or multivariate data, such as generalised estimating equations. We propose a bootstrap method based on probability integral transform (PIT-) residuals, which we call the PIT-trap, which assumes data come from some marginal distribution F of known parametric form. This method can be understood as a type of “model-free bootstrap”, adapted to the problem of discrete and highly multivariate data. PIT-residuals have the key property that they are (asymptotically) pivotal. The PIT-trap thus inherits the key property, not afforded by any other residual resampling approach, that the marginal distribution of data can be preserved under PIT-trapping. This in turn enables the derivation of some standard bootstrap properties, including second-order correctness of pivotal PIT-trap test statistics. In multivariate data, bootstrapping rows of PIT-residuals affords the property that it preserves correlation in data without the need for it to be modelled, a key point of difference as compared to a parametric bootstrap. The proposed method is illustrated on an example involving multivariate abundance data in ecology, and demonstrated via simulation to have improved properties as compared to competing resampling methods. PMID:28738071
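
    A univariate Poisson skeleton of the idea is sketched below in Python: randomized PIT residuals are computed from the fitted means, rows of residuals are resampled, and bootstrap responses are regenerated through the fitted marginal inverse CDF. For multivariate data the rows are resampled jointly across all response columns, which is what preserves the unmodelled correlation; the names, the Poisson choice, and the fitted means are illustrative assumptions.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(7)

        def pit_residuals(y, mu):
            """Randomized PIT residuals: uniform on [F(y-1), F(y)] under the fit."""
            lo, hi = poisson.cdf(y - 1, mu), poisson.cdf(y, mu)
            return lo + rng.random(len(y)) * (hi - lo)

        def pit_trap_replicate(y, mu):
            """One bootstrap replicate: resample residual rows, map back to counts."""
            u = pit_residuals(y, mu)
            u_star = u[rng.integers(0, len(u), len(u))]        # resample rows
            return poisson.ppf(u_star, mu).astype(int)

        mu = np.full(100, 3.0)                                 # fitted means (illustrative)
        y = rng.poisson(mu)
        print(pit_trap_replicate(y, mu)[:10])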

  4. Eliciting expert opinion for economic models: an applied example.

    PubMed

    Leal, José; Wordsworth, Sarah; Legood, Rosa; Blair, Edward

    2007-01-01

    Expert opinion is considered as a legitimate source of information for decision-analytic modeling where required data are unavailable. Our objective was to develop a practical computer-based tool for eliciting expert opinion about the shape of the uncertainty distribution around individual model parameters. We first developed a prepilot survey with departmental colleagues to test a number of alternative approaches to eliciting opinions on the shape of the uncertainty distribution around individual parameters. This information was used to develop a survey instrument for an applied clinical example. This involved eliciting opinions from experts to inform a number of parameters involving Bernoulli processes in an economic model evaluating DNA testing for families with a genetic disease, hypertrophic cardiomyopathy. The experts were cardiologists, clinical geneticists, and laboratory scientists working with cardiomyopathy patient populations and DNA testing. Our initial prepilot work suggested that the more complex elicitation techniques advocated in the literature were difficult to use in practice. In contrast, our approach achieved a reasonable response rate (50%), provided logical answers, and was generally rated as easy to use by respondents. The computer software user interface permitted graphical feedback throughout the elicitation process. The distributions obtained were incorporated into the model, enabling the use of probabilistic sensitivity analysis. There is clearly a gap in the literature between theoretical elicitation techniques and tools that can be used in applied decision-analytic models. The results of this methodological study are potentially valuable for other decision analysts deriving expert opinion.

  5. Perception of Randomness: On the Time of Streaks

    ERIC Educational Resources Information Center

    Sun, Yanlong; Wang, Hongbin

    2010-01-01

    People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the…

  6. Fluid-Structure Modeling and Simulation of a Modified KC-135R Icing Tanker Boom

    DTIC Science & Technology

    2013-01-07

    representative boom. Bernoulli beam elements with six degrees of freedom per node are used to model the water tubes. Each tube was discretized with 101... ball vertex spring analogy and leverages the ALE formulation of AERO-F. The number of increments used to deform the mesh in the vicinity of the

  7. Fluid-Structure Modeling and Simulation of a Modified KC-135R Icing Tanker Boom

    DTIC Science & Technology

    2013-01-07

    representative boom. Bernoulli beam elements with six degrees of freedom per node are used to model the water tubes. Each tube was discretized with 101... ball vertex spring analogy and leverages the ALE formulation of AERO-F. The number of increments used to deform the mesh in the vicinity of the

  8. Creating a Project on Difference Equations with Primary Sources: Challenges and Opportunities

    ERIC Educational Resources Information Center

    Ruch, David

    2014-01-01

    This article discusses the creation of a student project about linear difference equations using primary sources. Early 18th-century developments in the area are outlined, focusing on efforts by Abraham De Moivre (1667-1754) and Daniel Bernoulli (1700-1782). It is explained how primary sources from these authors can be used to cover material…

  9. Using PISA 2003, Examining the Factors Affecting Students' Mathematics Achievement

    ERIC Educational Resources Information Center

    Demir, Ibrahim; Kilic, Serpil

    2010-01-01

    The purpose of this study is to examine the effects of learning strategies on mathematics achievement. The sample was compiled from students who participated in the Programme for International Student Assessment (PISA) in Turkey. The data consisted of 4493 15-year-old Turkish students in 158 schools, and were analyzed by a two-level Bernoulli model as a…

  10. The Physics of Flight: I. Fixed and Rotating Wings

    ERIC Educational Resources Information Center

    Linton, J. Oliver

    2007-01-01

    Almost all elementary textbook explanations of the theory of flight rely heavily on Bernoulli's principle and the fact that air travels faster over a wing than below it. In recent years the inadequacies and, indeed, fallacies in this explanation have been exposed (see Babinsky's excellent article in 2003 Phys. Educ. 38 497-503) and it is now…

  11. Degenerate Cauchy numbers of the third kind.

    PubMed

    Pyo, Sung-Soo; Kim, Taekyun; Rim, Seog-Hoon

    2018-01-01

    Since Cauchy numbers were introduced, various types of Cauchy numbers have been presented. In this paper, we define degenerate Cauchy numbers of the third kind and give some identities for the degenerate Cauchy numbers of the third kind. In addition, we give some relations between four kinds of the degenerate Cauchy numbers, the Daehee numbers and the degenerate Bernoulli numbers.

  12. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio, the within-source distribution is assumed to be normally distributed and constant among different sources and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
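
    As a rough illustration of the modelling choice described above, replacing a kernel density with a Gaussian mixture for the between-source variation, the sketch below fits both density estimators to hypothetical per-source feature means with scikit-learn; it covers only the density-modelling step, not the full two-level likelihood-ratio computation or the Cllr evaluation reported in the paper.

```python
# Sketch: a Gaussian mixture model versus a kernel density as the model of
# between-source variation; only the density-modelling step is shown, not the
# full forensic likelihood-ratio computation.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)

# Hypothetical per-source mean feature vectors (e.g. chemical compositions)
source_means = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(40, 2)),
    rng.normal([2.0, 1.0], 0.4, size=(40, 2)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(source_means)
kde = KernelDensity(bandwidth=0.3).fit(source_means)

x_new = np.array([[1.9, 1.1]])  # feature vector of a questioned source
print("GMM log-density:", gmm.score_samples(x_new)[0])
print("KDE log-density:", kde.score_samples(x_new)[0])
```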

  13. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

    Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily of count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
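
    The sketch below conveys the general flavour of generating correlated mixed-type data through a latent multivariate normal; it is a simplified Gaussian-copula style construction rather than the paper's algorithm, so the achieved correlations only approximate the latent ones, and an ordinary Poisson stands in for the generalized Poisson marginal.

```python
# Simplified Gaussian-copula style generation of correlated mixed-type data
# (count, binary, continuous). Unlike the paper's method, the latent correlation
# matrix is not corrected to hit a target correlation exactly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
latent_corr = np.array([[1.0, 0.5, 0.3],
                        [0.5, 1.0, 0.4],
                        [0.3, 0.4, 1.0]])

z = rng.multivariate_normal(np.zeros(3), latent_corr, size=5000)
u = stats.norm.cdf(z)                              # probability integral transform

count = stats.poisson.ppf(u[:, 0], mu=3.0)         # count marginal
binary = (u[:, 1] < 0.4).astype(int)               # Bernoulli(0.4) marginal
cont = stats.norm.ppf(u[:, 2], loc=10, scale=2)    # continuous marginal

print(np.corrcoef(np.column_stack([count, binary, cont]), rowvar=False).round(2))
```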

  14. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  15. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented to derive several statistical wind models; the techniques follow from the properties of the multivariate normal probability function. Assuming that the winds can be considered as bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, then the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
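
    A quick Monte Carlo check of the second property listed above is sketched below, under the simplifying assumptions of zero-mean, equal-variance, uncorrelated wind components; the report's bivariate-normal framework is more general than this special case.

```python
# Monte Carlo check: zero-mean, equal-variance, uncorrelated normal wind
# components yield a Rayleigh-distributed wind speed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sigma = 5.0                                   # hypothetical component std dev, m/s
u = rng.normal(0.0, sigma, size=100_000)      # zonal component
v = rng.normal(0.0, sigma, size=100_000)      # meridional component
speed = np.hypot(u, v)

# Kolmogorov-Smirnov comparison against Rayleigh(scale=sigma)
print(stats.kstest(speed, "rayleigh", args=(0.0, sigma)))
```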

  16. Bayesian Estimation of Multivariate Latent Regression Models: Gauss versus Laplace

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew; Park, Trevor

    2017-01-01

    A latent multivariate regression model is developed that employs a generalized asymmetric Laplace (GAL) prior distribution for regression coefficients. The model is designed for high-dimensional applications where an approximate sparsity condition is satisfied, such that many regression coefficients are near zero after accounting for all the model…

  17. Evaluation of Meteorite Amino Acid Analysis Data Using Multivariate Techniques

    NASA Technical Reports Server (NTRS)

    McDonald, G.; Storrie-Lombardi, M.; Nealson, K.

    1999-01-01

    The amino acid distributions in the Murchison carbonaceous chondrite, Mars meteorite ALH84001, and ice from the Allan Hills region of Antarctica are shown, using a multivariate technique known as Principal Component Analysis (PCA), to be statistically distinct from the average amino acid composition of 101 terrestrial protein superfamilies.

  18. Conducting Privacy-Preserving Multivariable Propensity Score Analysis When Patient Covariate Information Is Stored in Separate Locations.

    PubMed

    Bohn, Justin; Eddings, Wesley; Schneeweiss, Sebastian

    2017-03-15

    Distributed networks of health-care data sources are increasingly being utilized to conduct pharmacoepidemiologic database studies. Such networks may contain data that are not physically pooled but instead are distributed horizontally (separate patients within each data source) or vertically (separate measures within each data source) in order to preserve patient privacy. While multivariable methods for the analysis of horizontally distributed data are frequently employed, few practical approaches have been put forth to deal with vertically distributed health-care databases. In this paper, we propose 2 propensity score-based approaches to vertically distributed data analysis and test their performance using 5 example studies. We found that these approaches produced point estimates close to what could be achieved without partitioning. We further found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensity score through each data domain (called the "sequential approach") as compared with fitting separate domain-specific propensity scores (called the "parallel approach"). These results were validated in a small simulation study. This proof-of-concept study suggests a new multivariable analysis approach to vertically distributed health-care databases that is practical, preserves patient privacy, and warrants further investigation for use in clinical research applications that rely on health-care databases. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
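
    A rough sketch of the described sequential approach, in which only a propensity score (here, its logit) is passed from one covariate domain to the next, is given below; the variable names and the exact way the first-stage score enters the second-stage model are illustrative assumptions rather than the authors' specification.

```python
# Rough sketch of a "sequential" propensity-score pass across two vertically
# partitioned covariate domains; only the score, not the raw covariates, moves
# between data sources.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
X_domain1 = rng.normal(size=(n, 3))     # covariates held by data source 1
X_domain2 = rng.normal(size=(n, 2))     # covariates held by data source 2
treat = rng.binomial(1, 0.4, size=n)    # shared treatment indicator

# Stage 1: propensity model using only domain-1 covariates
ps1 = LogisticRegression().fit(X_domain1, treat).predict_proba(X_domain1)[:, 1]

# Stage 2: pass only the stage-1 score (as its logit) into the domain-2 model
logit_ps1 = np.log(ps1 / (1 - ps1)).reshape(-1, 1)
stage2_X = np.hstack([logit_ps1, X_domain2])
ps_final = LogisticRegression().fit(stage2_X, treat).predict_proba(stage2_X)[:, 1]
print(ps_final[:5])
```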

  19. Fire and Water Demonstrate Law

    ERIC Educational Resources Information Center

    de Luca, R.; Ganci, S.

    2008-01-01

    In this article, the authors describe two classroom experiments that can be interpreted by means of Bernoulli's law. The first experiment uses a lighted candle in front of a mirror and a stream of air that is sent obliquely towards the mirror. The purpose of this experiment is to find out which way the flame will bend if air is blown at a given…

  20. Tracks in the Sand: Hooke's Pendulum "Cum Grano Salis"

    ERIC Educational Resources Information Center

    Babovic, Vukota; Babovic, Miloš

    2014-01-01

    The history of science remembers more than just formal facts about scientific discoveries. These side stories are often inspiring. One of them, the story of an unfulfilled death wish of Jacob Bernoulli regarding spirals, inspired us to look around ourselves. And we saw natural spirals around us, which led to the creation of a Hooke's…

  1. Flutter Instability of a Fluid-Conveying Fluid-Immersed Pipe Affixed to a Rigid Body

    DTIC Science & Technology

    2011-01-01

    rigid body, denoted by y in Fig. 4, is small. This is in addition to the Euler–Bernoulli beam assumption that the slope of the tail is small everywhere...here. These include the efficiency with which the prime mover can generate fluid momentum, pipe losses, and external drag acting on both the hull and the

  2. Bernoulli potential in type-I and weak type-II superconductors: II. Surface dipole

    NASA Astrophysics Data System (ADS)

    Lipavský, P.; Morawetz, K.; Koláček, J.; Mareš, J. J.; Brandt, E. H.; Schreiber, M.

    2004-09-01

    The Budd-Vannimenus theorem is modified to apply to superconductors in the Meissner state. The obtained identity links the surface value of the electrostatic potential to the density of free energy at the surface which allows one to evaluate the electrostatic potential observed via the capacitive pickup without the explicit solution of the charge profile.

  3. Focusing on the Nature of Causality in a Unit on Pressure: How Does It Affect Student Understanding?

    ERIC Educational Resources Information Center

    Basca, Belinda B.; Grotzer, Tina A.

    Although pressure forms the basis for understanding topics such as the internal structure of the earth, weather cycles, rock formation, Bernoulli's principle, and plate tectonics, the presence of this concept in the school curriculum is at a minimal level. This paper suggests that the ideas, misconceptions, and perceptions of students have to do…

  4. Non-contact handling device

    DOEpatents

    Reece, Mark [Albuquerque, NM; Knorovsky, Gerald A [Albuquerque, NM; MacCallum, Danny O [Edgewood, NM

    2007-05-15

    A pressurized fluid handling nozzle has a body with a first end and a second end, a fluid conduit and a recess at the second end. The first end is configured for connection to a pressurized fluid source. The fluid conduit has an inlet at the first end and an outlet at the recess. The nozzle uses the Bernoulli effect for lifting a part.

  5. Free Vibration Analysis of DWCNTs Using CDM and Rayleigh-Schmidt Based on Nonlocal Euler-Bernoulli Beam Theory

    PubMed Central

    2014-01-01

    The free vibration response of double-walled carbon nanotubes (DWCNTs) is investigated. The DWCNTs are modelled as two beams, interacting between them through the van der Waals forces, and the nonlocal Euler-Bernoulli beam theory is used. The governing equations of motion are derived using a variational approach and the free frequencies of vibrations are obtained employing two different approaches. In the first method, the two double-walled carbon nanotubes are discretized by means of the so-called “cell discretization method” (CDM) in which each nanotube is reduced to a set of rigid bars linked together by elastic cells. The resulting discrete system takes into account nonlocal effects, constraint elasticities, and the van der Waals forces. The second proposed approach, belonging to the semianalytical methods, is an optimized version of the classical Rayleigh quotient, as proposed originally by Schmidt. The resulting conditions are solved numerically. Numerical examples end the paper, in which the two approaches give lower-upper bounds to the true values, and some comparisons with existing results are offered. Comparisons of the present numerical results with those from the open literature show an excellent agreement. PMID:24715807

  6. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.

  7. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults. PMID:27322273
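
    For orientation, the sketch below shows a single contrastive-divergence (CD-1) update for a Gaussian-Bernoulli RBM with unit-variance visible units in plain numpy; stacking such layers into the GDBM used in the paper, and the training schedule applied to the vibration features, are not reproduced.

```python
# Minimal CD-1 update for a Gaussian-Bernoulli RBM (unit-variance visible units).
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 20, 8, 0.01
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b = np.zeros(n_visible)   # visible (Gaussian) biases
c = np.zeros(n_hidden)    # hidden (Bernoulli) biases

def cd1_step(v0, W, b, c):
    # Positive phase: hidden activation probabilities given the standardized data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.uniform(size=ph0.shape) < ph0).astype(float)
    # Negative phase: mean-field Gaussian reconstruction of the visibles
    v1 = h0 @ W.T + b
    ph1 = sigmoid(v1 @ W + c)
    n = v0.shape[0]
    dW = (v0.T @ ph0 - v1.T @ ph1) / n
    db = (v0 - v1).mean(axis=0)
    dc = (ph0 - ph1).mean(axis=0)
    return W + lr * dW, b + lr * db, c + lr * dc

batch = rng.normal(size=(64, n_visible))   # stand-in for standardized features
W, b, c = cd1_step(batch, W, b, c)
```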

  8. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.

  9. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
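
    A minimal sketch of the two-component structure described above, a Bernoulli prevalence, a Poisson intensity, and their product as the unconditional intensity, is given below; covariates, spatial correlation and the mapping step are omitted, and the simulated counts are only a stand-in for the Macoma balthica data.

```python
# Zero-inflated Poisson pieces: Bernoulli prevalence, Poisson intensity, and
# their product as the unconditional intensity (no covariates, no spatial term).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def zip_negloglik(params, y):
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))   # prevalence (probability of presence)
    lam = np.exp(log_lam)                  # intensity given presence
    p_zero = (1 - pi) + pi * np.exp(-lam)  # structural plus Poisson zeros
    ll = np.where(y == 0,
                  np.log(p_zero),
                  np.log(pi) + stats.poisson.logpmf(y, lam))
    return -ll.sum()

rng = np.random.default_rng(6)
present = rng.binomial(1, 0.34, size=3451)      # roughly two-thirds zeros overall
y = present * rng.poisson(4.0, size=3451)

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1])
print("prevalence:", pi_hat, "intensity:", lam_hat,
      "unconditional intensity:", pi_hat * lam_hat)
```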

  10. Dissipative advective accretion disc solutions with variable adiabatic index around black holes

    NASA Astrophysics Data System (ADS)

    Kumar, Rajiv; Chattopadhyay, Indranil

    2014-10-01

    We investigated accretion on to black holes in the presence of viscosity and cooling, by employing an equation of state with variable adiabatic index and a multispecies fluid. We obtained the expression of the generalized Bernoulli parameter, which is a constant of motion for an accretion flow in the presence of viscosity and cooling. We obtained all possible transonic solutions for a variety of boundary conditions, viscosity parameters and accretion rates. We identified the solutions with their positions in the parameter space of the generalized Bernoulli parameter and the angular momentum on the horizon. We showed that a shocked solution is more luminous than a shock-free one. For particular energies and viscosity parameters, we obtained accretion disc luminosities in the range of 10^-4 to 1.2 times the Eddington luminosity, and the radiative efficiency also appeared to increase with the mass accretion rate. We found steady state shock solutions even for high-viscosity parameters, high accretion rates and for a wide range of flow compositions, from purely electron-proton to lepton-dominated accretion flows. However, similar to earlier studies of inviscid flow, an accretion shock was not obtained for electron-positron pair plasma.
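
    For reference, in the inviscid, adiabatic limit the Bernoulli parameter mentioned above reduces to the familiar specific-energy constant sketched below; the additional viscous and cooling terms of the generalized parameter derived in the paper are not reproduced here.

```latex
% Inviscid, adiabatic limit of the Bernoulli parameter (specific energy of the flow):
E \;=\; \frac{v^{2}}{2} \;+\; h \;+\; \Phi
% where v is the flow speed, h the specific enthalpy, and \Phi the gravitational potential.
```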

  11. Size-dependent geometrically nonlinear free vibration analysis of fractional viscoelastic nanobeams based on the nonlocal elasticity theory

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Faraji Oskouie, M.; Gholami, R.

    2016-01-01

    In recent decades, mathematical modeling and engineering applications of fractional-order calculus have been extensively utilized to provide efficient simulation tools in the field of solid mechanics. In this paper, a nonlinear fractional nonlocal Euler-Bernoulli beam model is established using the concept of fractional derivative and nonlocal elasticity theory to investigate the size-dependent geometrically nonlinear free vibration of fractional viscoelastic nanobeams. The non-classical fractional integro-differential Euler-Bernoulli beam model contains the nonlocal parameter, viscoelasticity coefficient and order of the fractional derivative to interpret the size effect, viscoelastic material and fractional behavior in the nanoscale fractional viscoelastic structures, respectively. In the solution procedure, the Galerkin method is employed to reduce the fractional integro-partial differential governing equation to a fractional ordinary differential equation in the time domain. Afterwards, the predictor-corrector method is used to solve the nonlinear fractional time-dependent equation. Finally, the influences of nonlocal parameter, order of fractional derivative and viscoelasticity coefficient on the nonlinear time response of fractional viscoelastic nanobeams are discussed in detail. Moreover, comparisons are made between the time responses of linear and nonlinear models.

  12. Exact solutions for the static bending of Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model

    NASA Astrophysics Data System (ADS)

    Wang, Y. B.; Zhu, X. W.; Dai, H. H.

    2016-08-01

    Though widely used in modelling nano- and micro- structures, Eringen's differential model shows some inconsistencies and recent study has demonstrated its differences between the integral model, which then implies the necessity of using the latter model. In this paper, an analytical study is taken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further shows the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
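
    The two-phase constitutive law referred to above has the standard Eringen local/nonlocal mixture form sketched below; the notation is generic rather than taken from the paper, with the phase fractions weighting the local and nonlocal contributions and kappa denoting the nonlocal kernel with length parameter tau.

```latex
% Two-phase local/nonlocal (Eringen) constitutive relation, generic notation:
\sigma(x) \;=\; \xi_1\, E\, \varepsilon(x)
          \;+\; \xi_2\, E \int_0^L \kappa\bigl(|x - x'|, \tau\bigr)\, \varepsilon(x')\, \mathrm{d}x' ,
\qquad \xi_1 + \xi_2 = 1 ,
% where E is Young's modulus and \varepsilon(x) the axial strain.
```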

  13. Theoretical and Experimental Evaluation of the Bond Strength Under Peeling Loads

    NASA Technical Reports Server (NTRS)

    Nayeb-Hashemi, Hamid; Jawad, Oussama Cherkaoui

    1997-01-01

    Reliable applications of adhesively bonded joints require understanding of the stress distribution along the bond-line and the stresses that are responsible for the joint failure. To properly evaluate factors affecting peel strength, effects of defects such as voids on the stress distribution in the overlap region must be understood. In this work, the peel stress distribution in a single lap joint is derived using a strength of materials approach. The bonded joint is modeled as Euler-Bernoulli beams, bonded together with an adhesive, which is modeled as an elastic foundation that can resist both peel and shear stresses. It is found that for certain adhesive and adherend geometries and properties, a central void with a size of up to 50 percent of the overlap length has negligible effect on the peak peel and shear stresses. To verify the solutions obtained from the model, the problem is solved again by using the finite element method and by treating the adherends and the adhesive as elastic materials. It is found that the model used in the analysis not only predicts the correct trend for the peel stress distribution but also gives rather surprisingly close results to those of the finite element analysis. It is also found that both shear and peel stresses can be responsible for the joint performance and when a void is introduced, both of these stresses can contribute to the joint failure as the void size increases. Acoustic emission (AE) activities of aluminum-adhesive-aluminum specimens with different void sizes were monitored. The AE ringdown counts and energy were very sensitive and decreased significantly with the void size. It was observed that the AE events were shifting towards the edge of the overlap where the maximum peeling and shearing stresses were occurring as the void size increased.

  14. DIFFUSIVE PARTICLE ACCELERATION IN SHOCKED, VISCOUS ACCRETION DISKS: GREEN'S FUNCTION ENERGY DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, Peter A.; Das, Santabrata; Le, Truong, E-mail: pbecker@gmu.edu, E-mail: sbdas@iitg.ernet.in, E-mail: truong.le@nhrec.org

    2011-12-10

    The acceleration of relativistic particles in a viscous accretion disk containing a standing shock is investigated as a possible explanation for the energetic outflows observed around radio-loud black holes. The energy/space distribution of the accelerated particles is computed by solving a transport equation that includes the effects of first-order Fermi acceleration, bulk advection, spatial diffusion, and particle escape. The velocity profile of the accreting gas is described using a model for shocked viscous disks recently developed by the authors, and the corresponding Green's function distribution for the accelerated particles in the disk and the outflow is obtained using a classical method based on eigenfunction analysis. The accretion-driven, diffusive shock acceleration scenario explored here is conceptually similar to the standard model for the acceleration of cosmic rays at supernova-driven shocks. However, in the disk application, the distribution of the accelerated particles is much harder than would be expected for a plane-parallel shock with the same compression ratio. Hence the disk environment plays a key role in enhancing the efficiency of the shock acceleration process. The presence of the shock helps to stabilize the disk by reducing the Bernoulli parameter, while channeling the excess binding energy into the escaping relativistic particles. In applications to M87 and Sgr A*, we find that the kinetic power in the jet is approximately 0.01 Ṁc², and the outflowing relativistic particles have a mean energy approximately 300 times larger than that of the thermal gas in the disk at the shock radius. Our results suggest that a standing shock may be an essential ingredient in accretion onto underfed black holes, helping to resolve the long-standing problem of the stability of advection-dominated accretion disks.

  15. Creating Hierarchical Pores by Controlled Linker Thermolysis in Multivariate Metal-Organic Frameworks.

    PubMed

    Feng, Liang; Yuan, Shuai; Zhang, Liang-Liang; Tan, Kui; Li, Jia-Luo; Kirchon, Angelo; Liu, Ling-Mei; Zhang, Peng; Han, Yu; Chabal, Yves J; Zhou, Hong-Cai

    2018-02-14

    Sufficient pore size, appropriate stability, and hierarchical porosity are three prerequisites for open frameworks designed for drug delivery, enzyme immobilization, and catalysis involving large molecules. Herein, we report a powerful and general strategy, linker thermolysis, to construct ultrastable hierarchically porous metal-organic frameworks (HP-MOFs) with tunable pore size distribution. Linker instability, usually an undesirable trait of MOFs, was exploited to create mesopores by generating crystal defects throughout a microporous MOF crystal via thermolysis. The crystallinity and stability of HP-MOFs remain after thermolabile linkers are selectively removed from multivariate metal-organic frameworks (MTV-MOFs) through a decarboxylation process. A domain-based linker spatial distribution was found to be critical for creating hierarchical pores inside MTV-MOFs. Furthermore, linker thermolysis promotes the formation of ultrasmall metal oxide nanoparticles immobilized in an open framework that exhibits high catalytic activity for Lewis acid-catalyzed reactions. Most importantly, this work provides fresh insights into the connection between linker apportionment and vacancy distribution, which may shed light on probing the disordered linker apportionment in multivariate systems, a long-standing challenge in the study of MTV-MOFs.

  16. Multivariate probability distribution for sewer system vulnerability assessment under data-limited conditions.

    PubMed

    Del Giudice, G; Padulano, R; Siciliano, D

    2016-01-01

    The lack of geometrical and hydraulic information about sewer networks often excludes the adoption of in-depth modeling tools to obtain prioritization strategies for funds management. The present paper describes a novel statistical procedure for defining the prioritization scheme for preventive maintenance strategies based on a small sample of failure data collected by the Sewer Office of the Municipality of Naples (IT). Novel aspects include, among others, treating sewer parameters as continuous statistical variables and accounting for their interdependences. After a statistical analysis of maintenance interventions, the most important available factors affecting the process are selected and their mutual correlations identified. Then, after a Box-Cox transformation of the original variables, a methodology is provided for the evaluation of a vulnerability map of the sewer network by adopting a joint multivariate normal distribution with different parameter sets. The goodness-of-fit is eventually tested for each distribution by means of a multivariate plotting position. The developed methodology is expected to assist municipal engineers in identifying critical sewers, prioritizing sewer inspections in order to fulfill rehabilitation requirements.
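
    A compact sketch of the two statistical steps named above, a per-variable Box-Cox transformation followed by a joint multivariate normal fit, is given below using scipy on hypothetical sewer covariates; the vulnerability mapping and the multivariate plotting-position test are not reproduced.

```python
# Box-Cox transformation of each positive-valued variable, then a joint
# multivariate normal fit to the transformed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical positive-valued sewer covariates (e.g. age, diameter, depth)
raw = np.column_stack([
    rng.lognormal(3.0, 0.5, size=300),
    rng.lognormal(0.0, 0.3, size=300),
    rng.gamma(2.0, 1.5, size=300),
])

transformed = np.empty_like(raw)
lambdas = []
for j in range(raw.shape[1]):
    transformed[:, j], lam = stats.boxcox(raw[:, j])
    lambdas.append(lam)

mean = transformed.mean(axis=0)
cov = np.cov(transformed, rowvar=False)
mvn = stats.multivariate_normal(mean=mean, cov=cov)
print("Box-Cox lambdas:", np.round(lambdas, 2))
print("log-density of first record:", mvn.logpdf(transformed[0]))
```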

  17. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events we can prevent or reduce the threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which cannot be assumed independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals using copulas. This document presents a general framework for bivariate and multivariate frequency analysis of extreme hydroclimatological events, such as severe storms, using Archimedean copulas. This analysis was conducted in the lower Tunjuelo River basin in Colombia for precipitation events. The results show that, for a joint study of intensity, duration and frequency, IDF curves can be obtained through copulas, yielding more accurate and reliable design storms and associated risks. The use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, which properly represents the needs of hydrological design in frequency analysis.
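
    As a concrete illustration of the copula-based joint return period mentioned above, the sketch below uses a Gumbel (Archimedean) copula of storm intensity and duration; the copula family, its parameter and the mean interarrival time are assumed inputs here, not values estimated from the Tunjuelo River data.

```python
# Bivariate "AND" joint return period from a Gumbel-Hougaard copula.
import numpy as np

def gumbel_copula(u, v, theta):
    # Gumbel-Hougaard copula, theta >= 1
    return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

def joint_return_period_and(u, v, theta, mu=1.0):
    # Return period of events exceeding both marginal thresholds simultaneously;
    # mu is the mean interarrival time between events (e.g. 1 year)
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed_both

# Example: intensity and duration each at their marginal 0.99 quantile
print(joint_return_period_and(0.99, 0.99, theta=2.0))  # well above 100 time units
```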

  18. A Simplified, General Approach to Simulating from Multivariate Copula Functions

    Treesearch

    Barry Goodwin

    2012-01-01

    Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses probability...

  19. Bayesian bivariate meta-analysis of correlated effects: Impact of the prior distributions on the between-study correlation, borrowing of strength, and joint inferences

    PubMed Central

    Bujkiewicz, Sylwia; Riley, Richard D

    2016-01-01

    Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example is used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data. PMID:26988929

  20. Mechanical and Thermal Analysis of Classical Functionally Graded Coated Beam

    NASA Astrophysics Data System (ADS)

    Toudehdehghan, Abdolreza; Mujibur Rahman, Md.; Tarlochan, Faris

    2018-03-01

    The governing equations of a classical rectangular coated beam made of two layers subjected to thermal and uniformly distributed mechanical loads are derived by using the principle of virtual displacements and Euler-Bernoulli deformation beam theory (EBT). The aim of this paper was to analyze the static behavior of a clamped-clamped thin coated beam under thermo-mechanical load using MATLAB. Two models of the coated composite were considered. The first model consisted of a ceramic coating layer on a metal substrate (HC model). The second model consisted of a Functionally Graded Material (FGM) coating layer on a metal substrate (FGC model). The results demonstrate the superiority of the FGC composite over the conventional coated composite: the stress level throughout the thickness at the interface of the coated beam is reduced for the FGC, although the deflection is observed to increase in return. This could therefore serve various new engineering applications that warrant materials with properties well beyond the capabilities of conventional materials.

  1. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group‐level studies or in meta‐analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log‐odds and arcsine transformations of the estimated probability p^, both for single‐group studies and in combining results from several groups or studies in meta‐analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta‐analysis and result in abysmal coverage of the combined effect for large K. We also propose bias‐correction for the arcsine transformation. Our simulations demonstrate that this bias‐correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta‐analyses of prevalence. PMID:27192062
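
    A small simulation in the spirit of the abstract is sketched below, showing the bias of the arcsine-transformed proportion when the Bernoulli trials are clustered; the beta-binomial cluster model and the parameter values are illustrative assumptions, and the paper's bias correction is not implemented.

```python
# Bias of the arcsine transformation under clustered (over-dispersed) Bernoulli
# trials, using a beta-binomial model with intracluster correlation rho.
import numpy as np

rng = np.random.default_rng(8)
p, rho, n_per_cluster, n_clusters = 0.2, 0.05, 50, 20_000

# Beta-binomial clusters with mean p and intracluster correlation rho
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho
cluster_p = rng.beta(a, b, size=n_clusters)
successes = rng.binomial(n_per_cluster, cluster_p)
p_hat = successes / n_per_cluster

bias = np.mean(np.arcsin(np.sqrt(p_hat))) - np.arcsin(np.sqrt(p))
print("arcsine-scale bias:", bias)   # roughly linear in rho for small rho
```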

  2. Entrance loss coefficients and exit coefficients for a physical model of the glottis with convergent angles

    PubMed Central

    Fulcher, Lewis P.; Scherer, Ronald C.; Anderson, Nicholas V.

    2014-01-01

    Pressure distributions were obtained for 5°, 10°, and 20° convergent angles with a static physical model (M5) of the glottis. Measurements were made for minimal glottal diameters from d = 0.005–0.32 cm with a range of transglottal pressures of interest for phonation. Entrance loss coefficients were calculated at the glottal entrance for each minimal diameter and transglottal pressure to measure how far the flows in this region deviate from Bernoulli flow. Exit coefficients were also calculated to determine the presence and magnitude of pressure recovery near the glottal exit. The entrance loss coefficients for the three convergent angles vary from values near 2.3–3.4 for d = 0.005 cm to values near 0.6 for d = 0.32 cm. These coefficients extend the tables of entrance loss and exit coefficients obtained for the uniform glottis according to Fulcher, Scherer, and Powell [J. Acoust. Soc. Am. 129, 1548–1553 (2011)]. PMID:25190404
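
    For context, an entrance loss coefficient of the kind tabulated in the study compares the measured pressure drop into the glottis with the ideal Bernoulli prediction; one common form is sketched below, with the caveat that the precise convention and reference pressures used by the authors are those given in the cited papers.

```latex
% One common form of the entrance loss coefficient (exact convention per the cited papers):
k_{\mathrm{ent}} \;=\; \frac{P_{\mathrm{upstream}} - P_{\mathrm{entrance}}}
                            {\tfrac{1}{2}\, \rho\, v_{\mathrm{min}}^{2}}
% so that k_ent = 1 corresponds to loss-free Bernoulli flow, with v_min the
% velocity at the minimal glottal diameter.
```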

  3. The influence of pressure relaxation on the structure of an axial vortex

    NASA Astrophysics Data System (ADS)

    Ash, Robert L.; Zardadkhan, Irfan; Zuckerwar, Allan J.

    2011-07-01

    Governing equations including the effects of pressure relaxation have been utilized to study an incompressible, steady-state viscous axial vortex with specified far-field circulation. When sound generation is attributed to a velocity gradient tensor-pressure gradient product, the modified conservation of momentum equations that result yield an exact solution for a steady, incompressible axial vortex. The vortex velocity profile has been shown to closely approximate experimental vortex measurements in air and water over a wide range of circulation-based Reynolds numbers. The influence of temperature and humidity on the pressure relaxation coefficient in air has been examined using theoretical and empirical approaches, and published axial vortex experiments have been employed to estimate the pressure relaxation coefficient in water. Non-equilibrium pressure gradient forces have been shown to balance the viscous stresses in the vortex core region, and the predicted pressure deficits that result from this non-equilibrium balance can be substantially larger than the pressure deficits predicted using a Bernoulli equation approach. Previously reported pressure deficit distributions for dust devils and tornados have been employed to validate the non-equilibrium pressure deficit predictions.

  4. Thin film flow along a periodically-stretched elastic beam

    NASA Astrophysics Data System (ADS)

    Boamah Mensah, Chris; Chini, Greg; Jensen, Oliver

    2017-11-01

    Motivated by an application to pulmonary alveolar micro-mechanics, a system of partial differential equations is derived that governs the motion of a thin liquid film lining both sides of an inertia-less elastic substrate. The evolution of the film mass distribution is described by invoking the usual lubrication approximation while the displacement of the substrate is determined by employing a kinematically nonlinear Euler-Bernoulli beam formulation. In the parameter regime of interest, the axial strain can be readily shown to be a linear function of arc-length specified completely by the motion of the ends of the substrate. In contrast, the normal force balance on the beam yields an equation for the substrate curvature that is fully coupled to the time-dependent lubrication equation. Linear analyses of both a stationary and periodically-stretched flat substrate confirm the potential for buckling instabilities and reveal an upper bound on the dimensionless axial stiffness for which the coupled thin-film/inertia-less-beam model is well-posed. Numerical simulations of the coupled system are used to explore the nonlinear development of the buckling instabilities.

  5. Topic Model for Graph Mining.

    PubMed

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng

    2015-12-01

    Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as, documents, chemical molecular structures, and images. However, an issue in relation to current research on graphs is that they cannot adequately discover the topics hidden in graph-structured data which can be beneficial for both the unsupervised learning and supervised learning of the graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the "bag-of-word" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms the latent Dirichlet allocation on classification by using the unveiled topics of these two models to represent graphs.

  6. Dynamical Localization for Discrete Anderson Dirac Operators

    NASA Astrophysics Data System (ADS)

    Prado, Roberto A.; de Oliveira, César R.; Carvalho, Silas L.

    2017-04-01

    We establish dynamical localization for random Dirac operators on the d-dimensional lattice, with d ∈ {1, 2, 3}, in the three usual regimes: large disorder, band edge and 1D. These operators are discrete versions of the continuous Dirac operators and consist of the sum of a discrete free Dirac operator and a random potential. The potential is a diagonal matrix formed by different scalar potentials, which are sequences of independent and identically distributed random variables according to an absolutely continuous probability measure with bounded density and of compact support. We prove the exponential decay of fractional moments of the Green function for such models in each of the above regimes, i.e., (i) throughout the spectrum at large disorder, (ii) for energies near the band edges at arbitrary disorder and (iii) in dimension one, for all energies in the spectrum and arbitrary disorder. Dynamical localization in these regimes follows from the fractional moments method. The result in the one-dimensional regime contrasts with one that was previously obtained for the 1D Dirac model with a Bernoulli potential.

  7. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    PubMed

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously taking into account the correlation between the outcomes. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well as REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Integrated GIS and multivariate statistical analysis for regional scale assessment of heavy metal soil contamination: A critical review.

    PubMed

    Hou, Deyi; O'Connor, David; Nathanail, Paul; Tian, Li; Ma, Yan

    2017-12-01

    Heavy metal soil contamination is associated with potential toxicity to humans or ecotoxicity. Scholars have increasingly used a combination of geographical information science (GIS) with geostatistical and multivariate statistical analysis techniques to examine the spatial distribution of heavy metals in soils at a regional scale. A review of such studies showed that most soil sampling programs were based on grid patterns and composite sampling methodologies. Many programs intended to characterize various soil types and land use types. The most often used sampling depth intervals were 0-0.10 m, or 0-0.20 m, below surface; and the sampling densities used ranged from 0.0004 to 6.1 samples per km², with a median of 0.4 samples per km². The most widely used spatial interpolators were inverse distance weighted interpolation and ordinary kriging; and the most often used multivariate statistical analysis techniques were principal component analysis and cluster analysis. The review also identified several determining and correlating factors in heavy metal distribution in soils, including soil type, soil pH, soil organic matter, land use type, Fe, Al, and heavy metal concentrations. The major natural and anthropogenic sources of heavy metals were found to derive from lithogenic origin, roadway and transportation, atmospheric deposition, wastewater and runoff from industrial and mining facilities, fertilizer application, livestock manure, and sewage sludge. This review argues that the full potential of integrated GIS and multivariate statistical analysis for assessing heavy metal distribution in soils on a regional scale has not yet been fully realized. It is proposed that future research be conducted to map multivariate results in GIS to pinpoint specific anthropogenic sources, to analyze temporal trends in addition to spatial patterns, to optimize modeling parameters, and to expand the use of different multivariate analysis tools beyond principal component analysis (PCA) and cluster analysis (CA). Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Bayesian transformation cure frailty models with multivariate failure time data.

    PubMed

    Yin, Guosheng

    2008-12-10

    We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.

  10. Multivariate Distributions in Reliability Theory and Life Testing.

    DTIC Science & Technology

    1981-04-01

    Downton Distribution: This distribution is a special case of a classical bivariate gamma distribution due to Wicksell and to Kibble. See Krishnaiah and... Krishnamoorthy and Parthasarathy (1951) (see also Krishnaiah and Rao (1961) and Krishnaiah (1977)) and also within the framework of the Arnold classes. A... for these distributions and their properties is Johnson and Kotz (1972). Krishnaiah (1977) has specifically discussed multivariate gamma

  11. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  12. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Fish Manoeuvres and Morphology

    NASA Astrophysics Data System (ADS)

    Singh, Kiran; Pedley, Timothy

    2008-11-01

    The extraordinary manoeuvrability observed in many fish is attributed to their inherent flexibility, which might be enhanced by the use of appendages like fins. The aim of this work is to understand the role of morphological adaptations, such as body shape and deployment of median fins, on manoeuvrability and internal body dynamics. The 3D vortex lattice numerical method was employed to analyse the hydrodynamics for arbitrary body planforms of infinitesimal thickness. The internal structure of the body, due to the combined skeletal system and soft tissue, is represented as an active Euler-Bernoulli beam, in which the time-dependent bending moment distribution is calculated from body inertia and the hydrodynamic pressure difference across the body. C-turns are the manoeuvre of choice for this work and the responses of three different species of fish are examined. Angelfish (Pterophyllum eimekei), pike (Esox sp.) and tuna (Thunnus albacares) were chosen for their differences in body profile, median fin use and manoeuvrability. Net direction change and bending moment response to prescribed backbone flexure are calculated and used to interpret the influence of body profile on manoeuvrability and muscle work done. Internal stresses may be computed from anatomical data on muscle fibre distribution and recruitment. In the future, it is intended to extend this work to other typical manoeuvres, such as fast starts, for which muscle activation patterns have been measured quite widely.

  14. Intermittency of gravity wave momentum flux in the mesopause region observed with an all-sky airglow imager

    NASA Astrophysics Data System (ADS)

    Cao, Bing; Liu, Alan Z.

    2016-01-01

    The intermittency of gravity wave momentum flux (MF) near the OH airglow layer (˜87 km) in the mesopause region is investigated for the first time using observations from all-sky airglow imagers over Maui, Hawaii (20.7°N, 156.3°W), and Cerro Pachón, Chile (30.3°S, 70.7°W). At both sites, the probability density function (pdf) of gravity wave MF shows two distinct distributions depending on the magnitude of the MF. For MF smaller (larger) than ˜16 m² s⁻² (0.091 mPa), the pdf follows a lognormal (power law) distribution. The intermittency represented by the Bernoulli proxy and the percentile ratio shows that gravity waves have higher intermittency at Maui than at Cerro Pachón, suggesting more intermittent background variation above Maui. It is found that most of the MF is contributed by waves that occur very infrequently, but waves that individually contribute little MF are also important because of their higher occurrence frequencies. The peak contribution is from waves with MF around ˜2.2 m² s⁻² at Cerro Pachón and ˜5.5 m² s⁻² at Maui. Seasonal variations of the pdf and intermittency imply that the background atmosphere has a larger influence on the observed intermittency in the mesopause region.

  15. Risk analysis for occurrences of schistosomiasis in the coastal area of Porto de Galinhas, Pernambuco, Brazil

    PubMed Central

    2014-01-01

    Background Manson’s schistosomiasis continues to be a severe public health problem in Brazil, where thousands of people live at risk of contracting this parasitosis. In the Northeast of Brazil, schistosomiasis has expanded from rural areas to the coast of Pernambuco State, where the intermediate host is the snail Biomphalaria glabrata. This study aims at presenting situational analyses of schistosomiasis at the coastal locality of Porto de Galinhas, Pernambuco, Brazil, by determining the risk factors relating to its occurrence from the epidemiological and spatial perspectives. Methods In order to gather prevalence data, a parasitological census survey was conducted in 2010 using the Kato-Katz technique. Furthermore, malacological surveys were also conducted in the same period so as to define the density and infection rates of the intermediate host. Lastly, a socioeconomic-behavioral survey was conducted to determine the odds ratios for infection by Schistosoma mansoni. Based on these data, spatial analyses were performed, resulting in maps of the risk of disease transmission. To predict the risk of schistosomiasis occurrence, a multivariate logistic regression was performed using R 2.13 software. Results Based on the prevalence, malacological and socioeconomic-behavioral surveys, a prevalence of 15.7% was identified in the investigated population (2,757 individuals). Through the malacological survey, 36 breeding sites were identified, of which 11 were classified as foci of schistosomiasis transmission since they contained snails infected by Schistosoma mansoni. Overall, 11,012 snails (Biomphalaria glabrata) were collected. The multivariate regression model identified six explanatory variables of environmental, socioeconomic and demographic nature. Spatial scan analysis by means of the Bernoulli method identified one statistically significant cluster in Salinas (RR = 2.2; p-value < 0.001), the district with the highest occurrence of cases. Conclusions Based on the resulting information from this study, the epidemiological dimensions of this disease are significant and severe within the scenario of schistosomiasis in Pernambuco state. The risk factors identified in the predictive model made it clear that environmental and social conditions influence schistosomiasis occurrence. PMID:24559264

  16. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
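
    The moment-matching idea behind the quadrature construction can be sketched with a small linear program. The following is a minimal illustration, not the authors' algorithm: the lognormal target, the log-spaced candidate abscissas and the uniform cost vector are placeholder assumptions standing in for the entropy-inspired cost described in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Target: the first four raw moments (k = 0..3) of a lognormal "size" distribution.
    mu, sigma = 0.0, 0.7
    k = np.arange(4)
    moments = np.exp(k * mu + 0.5 * (k * sigma) ** 2)   # E[X^k] for a lognormal

    # Candidate abscissas; the LP selects nonnegative weights at a few of them.
    x = np.geomspace(0.05, 20.0, 200)
    A_eq = np.vstack([x ** kk for kk in k])              # moment constraints A w = m
    c = np.ones_like(x)                                  # placeholder for the entropy-inspired cost

    res = linprog(c, A_eq=A_eq, b_eq=moments, bounds=(0, None), method="highs")
    keep = res.x > 1e-10
    print("nodes:", x[keep])
    print("weights:", res.x[keep])
    ```

    A basic solution of such a linear program is supported on at most as many abscissas as there are moment constraints, which is the kind of sparsity the abstract refers to.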

  17. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fierce, Laura; McGraw, Robert L.

    Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.

  18. Science 101: What Makes a Curveball Curve?

    ERIC Educational Resources Information Center

    Robertson, William C.

    2009-01-01

    Ah, springtime, and young people's thoughts turn to... baseball, of course. But this column is not about "how" to throw a curveball, so you'll have to look that up on your own. Here, the focus is on the "why" of the curveball. There are two different things that cause a spinning ball to curve. One is known as the "Bernoulli effect" and the other…

  19. Singularities and non-hyperbolic manifolds do not coincide

    NASA Astrophysics Data System (ADS)

    Simányi, Nándor

    2013-06-01

    We consider the billiard flow of elastically colliding hard balls on the flat ν-torus (ν ⩾ 2), and prove that no singularity manifold can even locally coincide with a manifold describing future non-hyperbolicity of the trajectories. As a corollary, we obtain the ergodicity (actually the Bernoulli mixing property) of all such systems, i.e. the verification of the Boltzmann-Sinai ergodic hypothesis.

  20. Reshaping USAF Culture and Strategy: Lasting Themes and Emerging Trends

    DTIC Science & Technology

    2011-12-12

    operations are well-rooted in the air and space experience, near space concepts have struggled to develop the organizational momentum ...space). Nevertheless, by July 2005, the near space concept had achieved sufficient momentum for General Lance Lord (then Commander of Air Force... Bernoulli) the vertical dimension. Although operating at the upper reaches of the atmosphere, near space flight is bound by Bernoullian principles. The

  1. Bifurcation of rupture path by linear and cubic damping force

    NASA Astrophysics Data System (ADS)

    Dennis L. C., C.; Chew X., Y.; Lee Y., C.

    2014-06-01

    Bifurcation of the rupture path is studied for the effect of linear and cubic damping. The momentum equation with a Rayleigh factor was transformed into ordinary differential form. A Bernoulli differential equation was obtained and solved by separation of variables. The analytical (exact) solutions showed that the bifurcation was visible in the imaginary part when the wave was non-dispersive. For the dispersive wave, the bifurcation of the rupture path was invisible.
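
    For readers unfamiliar with the term, the generic Bernoulli differential equation mentioned in the abstract has the standard form below; the substitution u = y^(1-n) reduces it to a linear equation. This is the textbook form, not the specific equation derived in the paper.

    ```latex
    \frac{dy}{dx} + P(x)\,y = Q(x)\,y^{n}, \qquad n \neq 0,1,
    \qquad u = y^{1-n} \;\Longrightarrow\;
    \frac{du}{dx} + (1-n)\,P(x)\,u = (1-n)\,Q(x).
    ```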

  2. Boundary Layer Measurements in the Trisonic Gas-dynamics Facility Using Particle Image Velocimetery with CO2 Seeding

    DTIC Science & Technology

    2012-03-22

    understanding of fluid mechanics and aircraft design. The fundamental theories, concepts and equations developed by men like Newton, Bernoulli ...resulting instantaneous flow field data from PIV, boundary layer effects, turbulence characteristics, vortex formation, and momentum thickness, for...divided by the momentum thickness, δ2, and displacement thickness, δ1, as seen in Equations (2.8) and (2.9

  3. Identities associated with Milne-Thomson type polynomials and special numbers.

    PubMed

    Simsek, Yilmaz; Cakic, Nenad

    2018-01-01

    The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p-adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.
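
    For orientation, the Bernoulli and Euler numbers referred to in the abstract are most commonly defined through their exponential generating functions; the paper itself may use slightly different conventions.

    ```latex
    \frac{t}{e^{t}-1} = \sum_{n=0}^{\infty} B_{n}\,\frac{t^{n}}{n!} \quad (|t|<2\pi),
    \qquad
    \frac{2}{e^{t}+e^{-t}} = \sum_{n=0}^{\infty} E_{n}\,\frac{t^{n}}{n!} \quad \bigl(|t|<\tfrac{\pi}{2}\bigr).
    ```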

  4. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
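
    The centroid computation mentioned in the abstract is, at its core, an intensity-weighted average of pixel coordinates. A minimal numpy sketch is shown below; it is only the basic calculation, not the patented processing-board implementation or the Bernoulli-trial refinement, and the synthetic spot is an invented stand-in for a beam image.

    ```python
    import numpy as np

    def beam_centroid(frame: np.ndarray) -> tuple[float, float]:
        """Intensity-weighted centroid (row, col) of a video frame."""
        frame = frame.astype(float)
        rows, cols = np.indices(frame.shape)
        total = frame.sum()
        return (rows * frame).sum() / total, (cols * frame).sum() / total

    # Synthetic Gaussian spot centred near (40, 60) as a stand-in for a beam image.
    r, c = np.indices((100, 100))
    spot = np.exp(-((r - 40) ** 2 + (c - 60) ** 2) / (2 * 5.0 ** 2))
    print(beam_centroid(spot))   # approximately (40.0, 60.0)
    ```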

  5. Aerodynamics: The Wright Way

    NASA Technical Reports Server (NTRS)

    Cole, Jennifer Hansen

    2010-01-01

    This slide presentation reviews some of the basic principles of aerodynamics. Included in the presentation are: a few demonstrations of the principles, an explanation of the concepts of lift, drag, thrust and weight, a description of Bernoulli's principle, the concept of the airfoil (i.e., the shape of the wing) and how that affects lift, and the method of controlling an aircraft by manipulating the four forces using control surfaces.

  6. Approximating Multivariate Normal Orthant Probabilities. ONR Technical Report. [Biometric Lab Report No. 90-1.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the χ_i and the standardized multivariate…

  7. Multivariate Generalizations of Student's t-Distribution. ONR Technical Report. [Biometric Lab Report No. 90-3.

    ERIC Educational Resources Information Center

    Gibbons, Robert D.; And Others

    In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…

  8. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  9. Simulating Univariate and Multivariate Burr Type III and Type XII Distributions through the Method of L-Moments

    ERIC Educational Resources Information Center

    Pant, Mohan Dev

    2011-01-01

    The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…

  10. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.

  11. A note on a simplified and general approach to simulating from multivariate copula functions

    Treesearch

    Barry K. Goodwin

    2013-01-01

    Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses ‘Probability-...

  12. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Summary Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  13. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
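
    The height/weight question quoted in the abstract reduces to a conditional probability under a bivariate normal model. A hedged sketch follows; the means, standard deviations and correlation are made-up illustrative values, not the adolescent dataset used by the authors.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Illustrative (assumed) parameters for adolescent height (inches) and weight (pounds).
    mu_h, sd_h = 67.0, 3.0
    mu_w, sd_w = 130.0, 20.0
    rho = 0.5

    def p_weight_given_height(lo: float, hi: float, h: float) -> float:
        """P(lo < weight < hi | height = h) under a bivariate normal model."""
        cond_mean = mu_w + rho * sd_w * (h - mu_h) / sd_h
        cond_sd = sd_w * np.sqrt(1.0 - rho ** 2)
        return norm.cdf(hi, cond_mean, cond_sd) - norm.cdf(lo, cond_mean, cond_sd)

    # Chance of weighing between 120 and 140 pounds given average height.
    print(p_weight_given_height(120, 140, mu_h))
    ```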

  14. SATA Stochastic Algebraic Topology and Applications

    DTIC Science & Technology

    2017-01-23

    Harris et al. Selective sampling after solving a convex problem". arXiv:1609.05609 [math, stat] (Sept. 2016). arXiv: 1609.05609. 13. Baryshnikov...Functions, Adv. Math. 245, 573-586, 2014. 15. Y. Baryshnikov, Liberzon, Daniel, Robust stability conditions for switched linear systems: Commutator bounds...Consistency via Kernel Estimation, arXiv:1407.5272 [math, stat] (July 2014) arXiv: 1407.5272. to appear in Bernoulli 18. O. Bobrowski and S. Weinberger

  15. The time resolution of the St Petersburg paradox

    PubMed Central

    Peters, Ole

    2011-01-01

    A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
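
    The time-average idea can be illustrated with a short calculation of the expected change in log-wealth per lottery round. This sketch assumes the common convention of a payout of 2^k with probability 2^-k and truncates the series for numerics; it illustrates the concept rather than reproducing the paper's derivation.

    ```python
    import numpy as np

    def time_average_growth(wealth: float, cost: float, kmax: int = 60) -> float:
        """Expected change in log-wealth per St Petersburg round
        (payout 2**k with probability 2**-k, series truncated at kmax)."""
        k = np.arange(1, kmax + 1)
        prob = 0.5 ** k
        payout = 2.0 ** k
        return float(np.sum(prob * (np.log(wealth - cost + payout) - np.log(wealth))))

    # The same ticket price can yield negative or positive time-average growth
    # depending on the player's wealth.
    for w in (15.0, 100.0, 1000.0):
        print(w, time_average_growth(w, cost=10.0))
    ```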

  16. Calculation of upper confidence bounds on proportion of area containing not-sampled vegetation types: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2011-01-01

    This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
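
    For the special case in which a vegetation type is observed in none of n sampled plots, the exact one-sided upper confidence bound on the Bernoulli proportion has a simple closed form, obtained by solving (1 - p)^n = alpha for p. The sketch below covers only this zero-detection case; the paper's development, following Cochran (1977), addresses the general sampling design.

    ```python
    def upper_bound_zero_detections(n: int, alpha: float = 0.05) -> float:
        """Exact (1 - alpha) upper confidence bound on a Bernoulli proportion
        when 0 of n sampled plots contain the vegetation type."""
        return 1.0 - alpha ** (1.0 / n)

    for n in (30, 100, 1000):
        print(n, round(upper_bound_zero_detections(n), 4))
    ```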

  17. A Depth-Averaged 2-D Simulation for Coastal Barrier Breaching Processes

    DTIC Science & Technology

    2011-05-01

    including bed change and variable flow density in the flow continuity and momentum equations. The model adopts the HLL approximate Riemann solver to handle the mixed-regime flows near... Keulegan equation or the Bernoulli equation, and the breach morphological change is determined using simplified sediment transport models

  18. Mid-IR Lasers: Challenges Imposed by the Population Dynamics of the Gain System

    DTIC Science & Technology

    2010-09-01

    MicroSystems (IOMS) Central-Field Approximation: Perturbations 1. a) Non-centrosymmetric splitting (Coulomb interaction) ⇒ total orbital angular momentum b...Accordingly: ⇒ total electron-spin momentum 2. Spin-orbit coupling ("LS" coupling) ⇒ total angular momentum lanthanides: intermediate coupling (LS / jj) 3...MicroSystems (IOMS) Luminescence Decay Curves Rate-equation for decay: Solution (Bernoulli Eq.): Linearized solution: T. Jensen, Ph.D. Thesis, Univ. Hamburg

  19. 1998 Physical Acoustics Summer School (PASS 98). Volume III: Background Materials.

    DTIC Science & Technology

    1998-01-01

    propagating hydrodynamic soliton ■ Shock waves, N waves, and sound eating sound ■ Acoustic Bernoulli effect ■ Acoustic levitation ■ Acoustic match ...cess. The resulting saturation values are given in the diagrams and nicely match the values given in (10). Delay reconstructions using the experimen... oscillations of the driving sound field match three oscillations of the natural

  20. Computational Methods for Design, Control and Optimization

    DTIC Science & Technology

    2007-10-01

    Krueger Eugene M. Cliff Hoan Nguyen Traian Iliescu John Singler James Vance Eric Vugrin Adam Childers Dan Sutton References [1] J. T. Borggaard, S...Control, 45th IEEE Conference on Decision and Control, accepted. [11] L. C. Berselli, T. Iliescu and W. J. Layton, Mathematics of Large Eddy...Daniel Inman, Eric Ruggiero and John Singler, Finite Element Formulation for Static Control of a Thin Euler-Bernoulli Beam Using Piezoelectric

  1. Micropolar curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

    New models for micropolar plane curved rods have been developed. The 2-D theory is developed from the general 2-D equations of linear micropolar elasticity using a special curvilinear system of coordinates related to the middle line of the rod, together with special hypotheses based on assumptions that take into account the fact that the rod is thin. The high order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, the stress and strain tensors, the vectors of displacements and rotation, and the body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby all equations of elasticity, including Hooke's law, have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for the Fourier coefficients has been obtained. The Timoshenko and Euler-Bernoulli theories are based on the classical hypotheses and the 2-D equations of linear micropolar elasticity in a special curvilinear system. The obtained equations can be used to calculate the stress-strain state and to model thin-walled structures at macro-, micro- and nanoscale, taking into account micropolar couple stress and rotation effects.

  2. Doppler echo evaluation of pulmonary venous-left atrial pressure gradients: human and numerical model studies

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; Prior, D. L.; Scalia, G. M.; Thomas, J. D.; Garcia, M. J.

    2000-01-01

    The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates actual gradient because of large inertial forces.
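
    As background, the simplified Bernoulli equation used in clinical echocardiography retains only the convective term of the full relation; with the density of blood folded into the constant and velocity in m/s, the pressure difference in mmHg is commonly approximated as:

    ```latex
    \Delta P = \tfrac{1}{2}\,\rho\,\bigl(v_2^{2}-v_1^{2}\bigr)
    \;\;\approx\;\; 4\,v_2^{2}\quad\text{(mmHg, with } v_2 \text{ in m/s)}.
    ```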

  3. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    Conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, however, both assumptions can be violated. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. Therefore, the proposed filter is robust not only to randomly delayed measurements but also to glint noise. The application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
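
    To make the delay modeling concrete, the one-step special case (shown here only for illustration; the paper generalizes it to arbitrary delays with unequal Bernoulli probabilities) replaces the usual measurement equation with:

    ```latex
    z_k = (1-\gamma_k)\,h(x_k) + \gamma_k\,h(x_{k-1}) + v_k,
    \qquad \gamma_k \sim \mathrm{Bernoulli}(p),
    ```

    so that with probability p the filter receives the one-step-delayed measurement instead of the current one.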

  4. A novel approach to enhance the accuracy of vibration control of Frames

    NASA Astrophysics Data System (ADS)

    Toloue, Iraj; Shahir Liew, Mohd; Harahap, I. S. H.; Lee, H. E.

    2018-03-01

    All structures built within known seismically active regions are typically designed to endure earthquake forces. Despite advances in earthquake resistant structures, it can be inferred from hindsight that no structure is entirely immune to damage from earthquakes. Active vibration control systems, unlike the traditional methods which enlarge beams and columns, are highly effective countermeasures to reduce the effects of earthquake loading on a structure. They require fast computation of nonlinear structural analysis in near real time, which has historically demanded advanced programming hosted on powerful computers. This research aims to develop a new approach for active vibration control of frames which is applicable over both elastic and plastic material behavior. In this study, the Force Analogy Method (FAM), which is based on Hooke's law, is further extended using the Timoshenko element, which accounts for shear deformations, to increase the reliability and accuracy of the controller. The proposed algorithm is applied to a 2D portal frame equipped with a linear actuator, which is designed based on a full-state Linear Quadratic Regulator (LQR). For comparison purposes, the portal frame is analysed with both the Euler-Bernoulli and Timoshenko elements. The results clearly demonstrate the superiority of the Timoshenko element over Euler-Bernoulli for application in nonlinear analysis.
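
    The full-state LQR design mentioned in the abstract amounts to solving an algebraic Riccati equation for the feedback gain. A minimal continuous-time sketch follows, using a toy single-storey model; the mass, damping, stiffness and weighting matrices are placeholder values, not the paper's portal frame.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Full-state LQR gain K for u = -K x (continuous time)."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Toy single-storey structure: states [displacement, velocity], one actuator.
    m, c, k = 1.0e3, 2.0e2, 1.0e5            # placeholder mass, damping, stiffness
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    Q = np.diag([1.0e6, 1.0])                 # penalize displacement heavily
    R = np.array([[1.0e-3]])
    print(lqr_gain(A, B, Q, R))
    ```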

  5. Hydraulic pressures generated in magnetic ionic liquids by paramagnetic fluid/air interfaces inside of uniform tangential magnetic fields.

    PubMed

    Scovazzo, Paul; Portugal, Carla A M; Rosatella, Andreia A; Afonso, Carlos A M; Crespo, João G

    2014-08-15

    Magnetic ionic liquids (MILs), novel magnetic molecules that form "pure magnetic liquids," will follow the Ferrohydrodynamic Bernoulli Relationship. Based on recent literature, the modeling of this fluid system is an open issue and potentially controversial. We imposed uniform magnetic fields parallel to MIL/air interfaces where the capillary forces were negligible, the Quincke Problem. The size and location of the bulk fluid as well as the size and location of the fluid/air interface inside of the magnetic field were varied. MIL properties varied included the density, magnetic susceptibility, chemical structure, and magnetic element. Uniform tangential magnetic fields pulled the MILs up counter to gravity. The forces per area were not a function of the volume, the surface area inside of the magnetic field, or the volume displacement. However, the presence of fluid/air interfaces was necessary for the phenomena. The Ferrohydrodynamic Bernoulli Relationship predicted the phenomena with the forces being directly related to the fluid's volumetric magnetic susceptibility and the square of the magnetic field strength. [emim][FeCl4] generated the greatest hydraulic head (64 mm or 910 Pa at 1.627 Tesla). This work could aid in experimental design when free surfaces are involved, and in the development of MIL applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Doppler echo evaluation of pulmonary venous-left atrial pressure gradients: human and numerical model studies.

    PubMed

    Firstenberg, M S; Greenberg, N L; Smedira, N G; Prior, D L; Scalia, G M; Thomas, J D; Garcia, M J

    2000-08-01

    The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates actual gradient because of large inertial forces.

  7. On chemical distances and shape theorems in percolation models with long-range correlations

    NASA Astrophysics Data System (ADS)

    Drewitz, Alexander; Ráth, Balázs; Sapozhnikov, Artëm

    2014-08-01

    In this paper, we provide general conditions on a one parameter family of random infinite subsets of {{Z}}^d to contain a unique infinite connected component for which the chemical distances are comparable to the Euclidean distance. In addition, we show that these conditions also imply a shape theorem for the corresponding infinite connected component. By verifying these conditions for specific models, we obtain novel results about the structure of the infinite connected component of the vacant set of random interlacements and the level sets of the Gaussian free field. As a byproduct, we obtain alternative proofs to the corresponding results for random interlacements in the work of Černý and Popov ["On the internal distance in the interlacement set," Electron. J. Probab. 17(29), 1-25 (2012)], and while our main interest is in percolation models with long-range correlations, we also recover results in the spirit of the work of Antal and Pisztora ["On the chemical distance for supercritical Bernoulli percolation," Ann Probab. 24(2), 1036-1048 (1996)] for Bernoulli percolation. Finally, as a corollary, we derive new results about the (chemical) diameter of the largest connected component in the complement of the trace of the random walk on the torus.

  8. Endoscopic evaluation of therapeutic effects of "Anuloma-Viloma Pranayama" in Pratishyaya w.s.r. to mucociliary clearance mechanism and Bernoulli's principle.

    PubMed

    Bhardwaj, Atul; Sharma, Mahendra Kumar; Gupta, Manoj

    2013-10-01

    The current endeavor intended to evaluate the effectiveness and mode of action of Anuloma-Viloma Pranayama (AVP), i.e., alternate nasal breathing exercise, in resolving clinical features of Pratishyaya, i.e., rhinosinusitis. The present study was directed to validate the use of the classical "saccharin test" in measuring nasal health by measuring mucociliary clearance time. This study also highlights the effects of AVP by application of the Bernoulli principle to ventilation of the paranasal sinuses and surface oxygenation of the nasal and paranasal sinus ciliary epithelium. Clinically, endoscopically and radiologically diagnosed patients of Pratishyaya, i.e., rhinosinusitis, satisfying the inclusion criteria were selected to perform AVP as a breathing exercise regularly for 30 min every day in order to evaluate the effectiveness of AVP in resolving features of rhinosinusitis. The saccharin test was performed before and after completion of the 40-day trial to assess nasal ciliary activity, which has been shown to be directly related to the health of the ciliary epithelium and to overall nasal health. AVP may be regarded as a catalyst that conspicuously enhances ventilation and oxygenation of the paranasal sinuses and positively affects the nasal respiratory epithelium by increasing the surface availability of oxygen and the negative pressure in the nasal cavity itself.

  9. Endoscopic evaluation of therapeutic effects of “Anuloma-Viloma Pranayama” in Pratishyaya w.s.r. to mucociliary clearance mechanism and Bernoulli's principle

    PubMed Central

    Bhardwaj, Atul; Sharma, Mahendra Kumar; Gupta, Manoj

    2013-01-01

    The current endeavor intended to evaluate the effectiveness and mode of action of Anuloma-Viloma Pranayama (AVP), i.e., alternate nasal breathing exercise, in resolving clinical features of Pratishyaya, i.e., rhinosinusitis. The present study was directed to validate the use of the classical “saccharin test” in measuring nasal health by measuring mucociliary clearance time. This study also highlights the effects of AVP by application of the Bernoulli principle to ventilation of the paranasal sinuses and surface oxygenation of the nasal and paranasal sinus ciliary epithelium. Clinically, endoscopically and radiologically diagnosed patients of Pratishyaya, i.e., rhinosinusitis, satisfying the inclusion criteria were selected to perform AVP as a breathing exercise regularly for 30 min every day in order to evaluate the effectiveness of AVP in resolving features of rhinosinusitis. The saccharin test was performed before and after completion of the 40-day trial to assess nasal ciliary activity, which has been shown to be directly related to the health of the ciliary epithelium and to overall nasal health. AVP may be regarded as a catalyst that conspicuously enhances ventilation and oxygenation of the paranasal sinuses and positively affects the nasal respiratory epithelium by increasing the surface availability of oxygen and the negative pressure in the nasal cavity itself. PMID:24696572

  10. Multiple Scale Analysis of the Dynamic State Index (DSI)

    NASA Astrophysics Data System (ADS)

    Müller, A.; Névir, P.

    2016-12-01

    The Dynamic State Index (DSI) is a novel parameter that indicates local deviations of the atmospheric flow field from a stationary, inviscid and adiabatic solution of the primitive equations of fluid mechanics. This is in contrast to classical methods, which often diagnose deviations from temporal or spatial mean states. We show some applications of the DSI to atmospheric flow phenomena on different scales. The DSI is derived from the Energy-Vorticity-Theory (EVT), which is based on two global conserved quantities, the total energy and Ertel's potential enstrophy. Locally, these global quantities lead to the Bernoulli function and the PV, which, together with the potential temperature, build the DSI. If the Bernoulli function and the PV are balanced, the DSI vanishes and the basic state is obtained. Deviations from the basic state provide an indication of diabatic and non-stationary weather events. Therefore, the DSI offers a tool to diagnose and even forecast different atmospheric events on different scales. On the synoptic scale, the DSI can help to diagnose storms and hurricanes, where the dipole structure of the DSI also plays an important role. In the scope of the collaborative research center "Scaling Cascades in Complex Systems" we show high correlations between the DSI and precipitation on the convective scale. Moreover, we compare the results with reduced models and different spatial resolutions.

  11. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    ERIC Educational Resources Information Center

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
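
    The Monte Carlo setup described in the abstract is easy to reproduce in outline: sample repeatedly from a spherical multivariate normal population, compute the determinant of each sample correlation matrix, and read off empirical critical values. The sample size, dimension and replication count below are arbitrary illustrative choices, not those of the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, reps = 50, 4, 2000   # observations, variables, Monte Carlo replications

    dets = np.empty(reps)
    for i in range(reps):
        x = rng.standard_normal((n, p))          # spherical (identity-covariance) population
        dets[i] = np.linalg.det(np.corrcoef(x, rowvar=False))

    # Empirical 5th percentile: a Monte Carlo critical value for the sphericity test.
    print(np.quantile(dets, 0.05))
    ```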

  12. Community health assessment using self-organizing maps and geographic information systems

    PubMed Central

    Basara, Heather G; Yuan, May

    2008-01-01

    Background From a public health perspective, a healthier community environment correlates with fewer occurrences of chronic or infectious diseases. Our premise is that community health is a non-linear function of environmental and socioeconomic effects that are not normally distributed among communities. The objective was to integrate multivariate data sets representing social, economic, and physical environmental factors to evaluate the hypothesis that communities with similar environmental characteristics exhibit similar distributions of disease. Results The SOM algorithm used the intrinsic distributions of 92 environmental variables to classify 511 communities into five clusters. SOM determined clusters were reprojected to geographic space and compared with the distributions of several health outcomes. ANOVA results indicated that the variability between community clusters was significant with respect to the spatial distribution of disease occurrence. Conclusion Our study demonstrated a positive relationship between environmental conditions and health outcomes in communities using the SOM-GIS method to overcome data and methodological challenges traditionally encountered in public health research. Results demonstrated that community health can be classified using environmental variables and that the SOM-GIS method may be applied to multivariate environmental health studies. PMID:19116020

  13. Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula

    NASA Astrophysics Data System (ADS)

    Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad

    2017-03-01

    In this study, we discuss the problem of measuring the risk in a portfolio based on value at risk (VaR) using an asymmetric GJR-GARCH copula. The approach is based on the consideration that the assumption of normality for returns over time cannot be fulfilled, and that there is non-linear dependence among the variables, which leads to inaccurate VaR estimates. Moreover, the leverage effect causes an asymmetric response of the dynamic variance and exposes a weakness of standard GARCH models, whose effect on the conditional variance is symmetric. Asymmetric GJR-GARCH models are used to filter the margins while copulas are used to link them together into a multivariate distribution. We then use copulas to construct flexible multivariate distributions with different marginal and dependence structures, so that the portfolio joint distribution does not depend on the assumptions of normality and linear correlation. The VaR obtained by the analysis at the 95% confidence level is 0.005586. This VaR is derived from the best copula model, the Student's t copula with t-distributed margins.
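
    Once the GJR-GARCH margins and the copula have been fitted, the portfolio VaR is typically read off as a quantile of simulated portfolio returns. The sketch below shows only that last step; the Student-t draws are a stand-in for returns simulated from a fitted copula model.

    ```python
    import numpy as np

    def value_at_risk(simulated_returns: np.ndarray, level: float = 0.95) -> float:
        """VaR at the given confidence level, reported as a positive loss."""
        return -np.quantile(simulated_returns, 1.0 - level)

    # Placeholder returns; in practice these would come from the fitted GJR-GARCH copula.
    rng = np.random.default_rng(1)
    portfolio_returns = rng.standard_t(df=5, size=100_000) * 0.003
    print(value_at_risk(portfolio_returns, 0.95))
    ```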

  14. Central Body Fat Distribution Associates with Unfavorable Renal Hemodynamics Independent of Body Mass Index

    PubMed Central

    Zelle, Dorien M.; Bakker, Stephan J.L.; Navis, Gerjan

    2013-01-01

    Central distribution of body fat is associated with a higher risk of renal disease, but whether it is the distribution pattern or the overall excess weight that underlies this association is not well understood. Here, we studied the association between waist-to-hip ratio (WHR), which reflects central adiposity, and renal hemodynamics in 315 healthy persons with a mean body mass index (BMI) of 24.9 kg/m² and a mean 125I-iothalamate GFR of 109 ml/min per 1.73 m². In multivariate analyses, WHR was associated with lower GFR, lower effective renal plasma flow, and higher filtration fraction, even after adjustment for sex, age, mean arterial pressure, and BMI. Multivariate models produced similar results regardless of whether the hemodynamic measures were indexed to body surface area. Thus, these results suggest that central body fat distribution, independent of BMI, is associated with an unfavorable pattern of renal hemodynamic measures that could underlie the increased renal risk reported in observational studies. PMID:23578944

  15. Noncentral Chi-Square versus Normal Distributions in Describing the Likelihood Ratio Statistic: The Univariate Case and Its Multivariate Implication

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai

    2008-01-01

    In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…

  16. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2009-01-01

    This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...

  17. A Note on the Application of the Extended Bernoulli Equation

    DTIC Science & Technology

    1999-02-01

    as Dv/Dt = -∇p/ρ + ∇V + (1/ρ) s_ij,j (2), where D/Dt denotes the material derivative (discussed in the following section); V is the ... force potential; ∇ is the vector gradient operator; s_ij is the deviatoric-stress tensor arising from any type of elasto-viscoplastic constitutive ... behavior; and s_ij,j is index notation for ∂s_ij/∂x_j, denoting the vector condensation (divergence) of the deviatoric-stress tensor.

  18. A new experimental method for determining local airloads on rotor blades in forward flight

    NASA Astrophysics Data System (ADS)

    Berton, E.; Maresca, C.; Favier, D.

    This paper presents a new approach for determining local airloads on helicopter rotor blade sections in forward flight. The method is based on the momentum equation in which all the terms are expressed by means of the velocity field measured by a laser Doppler velocimeter. The relative magnitude of the different terms involved in the momentum and Bernoulli equations is estimated and the results are encouraging.

  19. A conserved quantity in thin body dynamics

    NASA Astrophysics Data System (ADS)

    Hanna, J. A.; Pendar, H.

    2016-02-01

    Thin, solid bodies with metric symmetries admit a restricted form of reparameterization invariance. Their dynamical equilibria include motions with both rigid and flowing aspects. On such configurations, a quantity is conserved along the intrinsic coordinate corresponding to the symmetry. As an example of its utility, this conserved quantity is combined with linear and angular momentum currents to construct solutions for the equilibria of a rotating, flowing string, for which it is akin to Bernoulli's constant.

  20. Dynamic response of a viscoelastic Timoshenko beam

    NASA Technical Reports Server (NTRS)

    Kalyanasundaram, S.; Allen, D. H.; Schapery, R. A.

    1987-01-01

    The analysis presented in this study deals with the vibratory response of viscoelastic Timoshenko (1955) beams under the assumption of small material loss tangents. The appropriate method of analysis employed here may be applied to more complex structures. This study compares the damping ratios obtained from the Timoshenko and Euler-Bernoulli theories for a given viscoelastic material system. From this study the effect of shear deformation and rotary inertia on damping ratios can be identified.

  1. Accumulation risk assessment for the flooding hazard

    NASA Astrophysics Data System (ADS)

    Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto

    2010-05-01

    One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, the cumulus of risks intuitively arises from the geographic concentration of a number of vulnerable elements in a single place. For natural events, risk accumulation can be associated not only with intensity but also with an event's extension. In this case, the magnitude can be such that large areas, which may include many regions or even large portions of different countries, are struck by single, catastrophic events. Among natural risks, the impact of the flooding hazard should not be understated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land use restrictions, civil protection, and financial and insurance plans. All of these viable options present social and economic impacts, either positive or negative, whose proper estimate should rely on the assumption of appropriate - present and future - flood risk scenarios. It is therefore necessary to identify proper statistical methodologies able to describe the multivariate aspects of the involved physical processes and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it was recognized early on that classical statistical theory distributions (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Recent research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets. This holds, in particular, for the quite frequent case of phenomena whose empirical outcome behaves in a non-normal fashion, but still maintains some broad similarity with the multivariate normal distribution. Fruitful approaches were recognized in the use of flexible models, which include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution constitutes an attempt to provide a better estimation of the joint probability distribution able to describe flood events in a multi-site, multi-basin fashion. This goal will be pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. The performance of the skew-t distribution will be discussed with reference to the Tanaro River in Northwestern Italy. To enhance the characteristics of the correlation structure, both nested and non-nested gauging stations will be selected, with significantly different contributing areas.
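
    As a small illustration of the "flexible models that include the normal as a limiting case" mentioned above, the univariate skew-normal can be fitted with standard tools. The synthetic peak-discharge sample below is invented for the example, and the paper itself works with the multivariate skew-t rather than this simplest member of the family.

    ```python
    import numpy as np
    from scipy.stats import skewnorm

    rng = np.random.default_rng(42)

    # Synthetic skewed "annual peak discharge" sample; shape a = 0 recovers the normal case.
    a_true, loc_true, scale_true = 4.0, 100.0, 30.0
    peaks = skewnorm.rvs(a_true, loc=loc_true, scale=scale_true, size=500, random_state=rng)

    a_hat, loc_hat, scale_hat = skewnorm.fit(peaks)
    print("fitted shape (0 = normal):", round(a_hat, 2))
    print("99th percentile of the fitted skew-normal:",
          round(skewnorm.ppf(0.99, a_hat, loc_hat, scale_hat), 1))
    ```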

  2. Closed-form solution for static pull-in voltage of electrostatically actuated clamped-clamped micro/nano beams under the effect of fringing field and van der Waals force

    NASA Astrophysics Data System (ADS)

    Bhojawala, V. M.; Vakharia, D. P.

    2017-12-01

    This investigation provides an accurate prediction of the static pull-in voltage for clamped-clamped micro/nano beams based on a distributed model. The Euler-Bernoulli beam theory is used, incorporating the geometric non-linearity of the beam, internal (residual) stress, van der Waals force, distributed electrostatic force and fringing field effects in deriving the governing differential equation. The Galerkin discretisation method is used to construct a reduced-order model of the governing differential equation. A regime plot is presented in the current work for determining the number of modes required in the reduced-order model to obtain a fully converged pull-in voltage for micro/nano beams. A closed-form relation is developed based on the relationship obtained from curve fitting of the pull-in instability plots and subsequent non-linear regression for the proposed relation. The output of the regression analysis gives a Chi-square (χ²) tolerance value equal to 1 × 10⁻⁹, an adjusted R-square value equal to 0.99929 and a P-value equal to zero; these statistical parameters indicate the convergence of the non-linear fit, the accuracy of the fitted data and the significance of the proposed model, respectively. The closed-form equation is validated using available experimental and numerical data. The maximum relative error of 4.08% in comparison to several available experimental and numerical data sets demonstrates the reliability of the proposed closed-form equation.

  3. Structure and properties of ZnSxSe1-x thin films deposited by thermal evaporation of ZnS and ZnSe powder mixtures

    NASA Astrophysics Data System (ADS)

    Valeev, R. G.; Romanov, E. A.; Vorobiev, V. L.; Mukhgalin, V. V.; Kriventsov, V. V.; Chukavin, A. I.; Robouch, B. V.

    2015-02-01

    Interest in ZnSxSe1-x alloys is due to their band-gap tunability by varying the S and Se content. Films of ZnSxSe1-x were grown by evaporating ZnS and ZnSe powder mixtures onto SiO2, NaCl, Si and ITO substrates using an original low-cost method. X-ray diffraction patterns and Raman spectroscopy show that the lattice structure of these films is cubic ZnSe-like, as S atoms replace Se, and the film compositions retain their initial S/Se ratio. Optical absorption spectra show that band gap values increase from 2.25 to 3 eV as x increases, in agreement with the literature. Because S atomic radii are smaller than those of Se, EXAFS spectra confirm that bond distances and Se coordination numbers decrease as the Se content decreases. The strong deviation from linearity of the ZnSe coordination numbers in ZnSxSe1-x indicates that within this ordered crystal structure strong site occupation preferences occur in the distribution of Se and S ions. The behavior is quantitatively confirmed by the strong deviation from the random Bernoulli distribution of the three site occupation preference coefficients of the strained tetrahedron model. In effect, the ternary ZnSxSe1-x system is a bi-binary (ZnS+ZnSe) alloy with evanescent formation of ternary configurations throughout the x-range.

  4. Vibration analysis of the maglev guideway with the moving load

    NASA Astrophysics Data System (ADS)

    Wang, H. P.; Li, J.; Zhang, K.

    2007-09-01

    The response of the guideway induced by a moving maglev vehicle is investigated in this paper. The maglev vehicle is simplified as an evenly distributed force acting on the guideway at constant speed. Following the experimental line, the rail-sleeper-bridge guideway structure is simplified as a Bernoulli-Euler (B-E) beam on an evenly distributed spring resting on a simply supported B-E beam; a double-deck model of the maglev guideway is thus constructed which more accurately reflects the dynamic characteristics of the experimental line. The natural frequencies and modes are deduced from the theoretical model. The relationship between structural parameters and natural frequency is explored by numerical calculation. A way to suppress the vehicle-guideway interaction by adjusting the structural parameters is also discussed. Using the normal coordinate transformation method, the coupled differential equations of motion of the maglev guideway are converted into a set of uncoupled equations. Closed-form solutions for the response of the guideway subjected to the moving load are derived. It is noted that the moving load alone would not induce vehicle-guideway interaction oscillation. The analysis of the guideway impact factor implies that, at some positions of the guideway, the deflection may decrease as the speed of the load increases; several extreme values of the guideway displacement appear at different speeds, and the corresponding speeds differ with the acting position. The final numerical simulation verifies these conclusions.
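
    For the simply supported Euler-Bernoulli spans in such a double-deck model, natural frequencies follow the classical closed form omega_n = (n*pi/L)^2 * sqrt(E*I/(rho*A)). The sketch below evaluates this expression for illustrative girder parameters; the numbers are assumptions, not the experimental-line values.

        import numpy as np

        def simply_supported_frequencies(E, I, rho, A, L, n_modes=4):
            """Natural frequencies (Hz) of a simply supported Euler-Bernoulli beam:
            omega_n = (n*pi/L)**2 * sqrt(E*I/(rho*A))."""
            n = np.arange(1, n_modes + 1)
            omega = (n * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))
            return omega / (2 * np.pi)

        # Illustrative concrete girder: E ~ 34 GPa, I ~ 2.0 m^4, rho*A ~ 5000 kg/m, L = 25 m
        print(simply_supported_frequencies(E=34e9, I=2.0, rho=2500.0, A=2.0, L=25.0))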

  5. A multivariate spatial mixture model for areal data: examining regional differences in standardized test scores

    PubMed Central

    Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn

    2013-01-01

    Summary Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059

  6. Robust reliable sampled-data control for switched systems with application to flight control

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.

    2016-11-01

    This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim is to obtain a reliable robust sampled-data control design, involving random time delay and an appropriate gain control matrix, that achieves robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying, obeying mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of a reliable robust sampled-data control in terms of the solution to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.
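
    To make the random-delay model concrete, the sketch below simulates a sampled-data state-feedback loop in which, at every sampling instant, the controller uses either the current or a one-step-delayed state according to an i.i.d. Bernoulli sequence. The system matrices, gain and probability are illustrative assumptions, not the paper's F-18 example or LMI design.

        import numpy as np

        rng = np.random.default_rng(1)
        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized plant (illustrative)
        B = np.array([[0.005], [0.1]])
        K = np.array([[-10.0, -5.0]])            # assumed stabilizing state-feedback gain
        p_delay = 0.3                            # probability that the delayed state is used

        x = np.array([[1.0], [0.0]])
        x_prev = x.copy()
        for k in range(200):
            delayed = rng.random() < p_delay     # Bernoulli white-noise delay indicator
            u = K @ (x_prev if delayed else x)   # feedback on delayed or current state
            x_prev, x = x, A @ x + B @ u
        print("final state norm:", float(np.linalg.norm(x)))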

  7. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
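
    A small simulation in the spirit of the study (an illustrative sketch, not the authors' code) shows the bias of the arcsine transform under overdispersion: cluster-level probabilities are drawn from a Beta distribution whose intracluster correlation equals rho, and the mean of the transformed estimates is compared with the transform of the true probability.

        import numpy as np

        def arcsine_bias(p=0.2, n=50, rho=0.1, n_groups=200000, seed=0):
            """Mean bias of arcsin(sqrt(p_hat)) for beta-binomial data with ICC rho."""
            rng = np.random.default_rng(seed)
            a = p * (1 - rho) / rho          # Beta parameters giving mean p and
            b = (1 - p) * (1 - rho) / rho    # intracluster correlation rho = 1/(a+b+1)
            p_i = rng.beta(a, b, size=n_groups)
            x = rng.binomial(n, p_i)
            p_hat = x / n
            return np.mean(np.arcsin(np.sqrt(p_hat))) - np.arcsin(np.sqrt(p))

        for rho in (0.01, 0.05, 0.1, 0.2):
            print(f"rho = {rho:>4}: arcsine-scale bias = {arcsine_bias(rho=rho):+.4f}")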

  8. A roadmap for optimal control: the right way to commute.

    PubMed

    Ross, I Michael

    2005-12-01

    Optimal control theory is the foundation for many problems in astrodynamics. Typical examples are trajectory design and optimization, relative motion control of distributed space systems and attitude steering. Many such problems in astrodynamics are solved by an alternative route of mathematical analysis and deep physical insight, in part because of the perception that an optimal control framework generates hard problems. Although this is indeed true of the Bellman and Pontryagin frameworks, the covector mapping principle provides a neoclassical approach that renders hard problems easy. That is, although the origins of this philosophy can be traced back to Bernoulli and Euler, it is essentially modern as a result of the strong linkage between approximation theory, set-valued analysis and computing technology. Motivated by the broad success of this approach, mission planners are now conceiving and demanding higher performance from space systems. This has resulted in a new set of theoretical and computational problems. Recently, under the leadership of NASA-GRC, several workshops were held to address some of these problems. This paper outlines the theoretical issues stemming from practical problems in astrodynamics. Emphasis is placed on how they pertain to advanced mission design problems.

  9. Divergence instability of pipes conveying fluid with uncertain flow velocity

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi; Mirdamadi, Hamid Reza; Goli, Sareh

    2018-02-01

    This article investigates the probabilistic stability of pipes conveying fluid with stochastic flow velocity in the time domain. The study focuses on the effects of randomness in flow velocity on the stability of pipes conveying fluid, whereas most previous research efforts have focused only on the influence of deterministic parameters on system stability. The Euler-Bernoulli beam and plug flow theory are employed to model the pipe structure and the internal flow, respectively. In addition, the flow velocity is considered as a stationary random process with Gaussian distribution. The stochastic averaging method and Routh's stability criterion are then used to investigate the stability conditions of the system. Consequently, the effects of boundary conditions, viscoelastic damping, mass ratio, and elastic foundation on the stability regions are discussed. Results show that the critical mean flow velocity decreases with increasing power spectral density (PSD) of the random velocity. Moreover, as the PSD increases from zero, the effects of boundary condition type and of the presence of an elastic foundation diminish, while the influences of viscoelastic damping and mass ratio grow. Finally, to make the study more applicable, regression analysis is utilized to develop design equations and facilitate further analyses for design purposes.

  10. Rényi entropy of the totally asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Wood, Anthony J.; Blythe, Richard A.; Evans, Martin R.

    2017-11-01

    The Rényi entropy is a generalisation of the Shannon entropy that is sensitive to the fine details of a probability distribution. We present results for the Rényi entropy of the totally asymmetric exclusion process (TASEP). We calculate explicitly an entropy whereby the squares of configuration probabilities are summed, using the matrix product formalism to map the problem to one involving a six direction lattice walk in the upper quarter plane. We derive the generating function across the whole phase diagram, using an obstinate kernel method. This gives the leading behaviour of the Rényi entropy and corrections in all phases of the TASEP. The leading behaviour is given by the result for a Bernoulli measure and we conjecture that this holds for all Rényi entropies. Within the maximal current phase the correction to the leading behaviour is logarithmic in the system size. Finally, we remark upon a special property of equilibrium systems whereby discontinuities in the Rényi entropy arise away from phase transitions, which we refer to as secondary transitions. We find no such secondary transition for this nonequilibrium system, supporting the notion that these are specific to equilibrium cases.
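
    The leading (Bernoulli-measure) behaviour quoted above is easy to state explicitly: for a product measure in which each of N sites is occupied independently with probability rho, the Rényi entropy is H_alpha = N*ln(rho^alpha + (1-rho)^alpha)/(1-alpha). The sketch below evaluates this expression only for the Bernoulli leading term, not the TASEP corrections derived in the paper.

        import numpy as np

        def renyi_entropy_bernoulli(N, rho, alpha):
            """Renyi entropy of N i.i.d. Bernoulli(rho) sites; alpha = 1 gives Shannon."""
            if np.isclose(alpha, 1.0):
                return -N * (rho * np.log(rho) + (1 - rho) * np.log(1 - rho))
            return N * np.log(rho**alpha + (1 - rho)**alpha) / (1 - alpha)

        for alpha in (0.5, 1.0, 2.0):
            print(alpha, renyi_entropy_bernoulli(N=100, rho=0.3, alpha=alpha))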

  11. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual ΦTy, where y is the compressive measurement vector. We show that the filtered-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
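
    The flavour of the approach can be conveyed with a generic sketch: a standard ISTA iteration for sparse recovery from a random Bernoulli sensing matrix, with an optional smoothing filter applied to the back-projected residual before the proximal step. This is a schematic illustration of the filtering idea under assumed dimensions, not the authors' CSI-specific algorithm or matrix structure.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(Phi, y, lam, n_iter=200, filter_width=0):
            """ISTA for min 0.5*||y - Phi x||^2 + lam*||x||_1, optionally smoothing
            the back-projected residual with a moving-average filter."""
            L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                r = Phi.T @ (y - Phi @ x)            # back-projected residual
                if filter_width > 1:                 # optional filtering step (schematic)
                    kernel = np.ones(filter_width) / filter_width
                    r = np.convolve(r, kernel, mode="same")
                x = soft_threshold(x + r / L, lam / L)
            return x

        rng = np.random.default_rng(0)
        n, m, k = 256, 96, 8
        Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli (+/-1) sensing matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = Phi @ x_true
        x_hat = ista(Phi, y, lam=0.01)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))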

  12. Numerical Modeling of Cavitating Venturi: A Flow Control Element of Propulsion System

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Saxon, Jeff (Technical Monitor)

    2002-01-01

    In a propulsion system, the propellant flow and mixture ratio can be controlled either by variable-area flow control valves or by passive flow control elements such as cavitating venturies. Cavitating venturies maintain a constant propellant flowrate for fixed inlet conditions (pressure and temperature) over a wide range of outlet pressures, thereby maintaining constant engine thrust and mixture ratio. The flowrate through the venturi reaches a constant value and becomes independent of outlet pressure when the pressure at the throat becomes equal to the vapor pressure. In order to develop a numerical model of a propulsion system, it is necessary to model cavitating venturies in propellant feed systems. This paper presents a finite volume model of the flow network of a cavitating venturi. The venturi was discretized into a number of control volumes, and the mass, momentum and energy conservation equations in each control volume are solved simultaneously to calculate one-dimensional distributions of pressure, density, flowrate and temperature. The numerical model predicts cavitation at the throat when the outlet pressure is gradually reduced. Once cavitation starts, further reduction of the downstream pressure produces no change in flowrate. The numerical predictions have been compared with test data and with an empirical equation based on Bernoulli's equation.
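
    The choking behaviour described above follows directly from Bernoulli's equation: once the throat pressure falls to the vapor pressure, the flowrate locks at mdot = A_throat*sqrt(2*rho*(p_in - p_vap)), neglecting inlet velocity. The sketch below uses a no-recovery idealization with assumed water-like properties, not the paper's finite volume model or test conditions; in a real venturi, diffuser pressure recovery makes choking occur at much higher back pressures, so the sketch only illustrates the constancy of the choked flowrate.

        import numpy as np

        def venturi_mass_flow(p_in, p_out, p_vap, rho, a_throat, cd=1.0):
            # Idealized Bernoulli model with no diffuser recovery: the throat pressure
            # is taken equal to the outlet pressure until it reaches the vapor pressure,
            # after which the flowrate stays constant (cavitation-choked).
            dp = min(p_in - p_out, p_in - p_vap)
            return cd * a_throat * np.sqrt(2.0 * rho * max(dp, 0.0))

        rho, p_in, p_vap = 1000.0, 5.0e5, 2.3e3        # water-like properties (illustrative)
        a_throat = np.pi * (2.0e-3) ** 2               # throat radius of about 2 mm
        for p_out in (4.0e5, 2.0e5, 1.0e3, 0.0):
            mdot = venturi_mass_flow(p_in, p_out, p_vap, rho, a_throat)
            print(f"p_out = {p_out / 1e5:5.3f} bar -> mdot = {mdot * 1e3:6.1f} g/s")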

  13. Some Thermodynamic Considerations on the Physical and Quantum Nature of Space and Time

    NASA Technical Reports Server (NTRS)

    Sohrab, Siavash H.; Piltch, Nancy (Technical Monitor)

    2000-01-01

    It is suggested that the Planck constant h = m_k·c·λ_k and the Boltzmann constant k = m_k·c·ν_k have a stochastic foundation. It is further suggested that a body of fluid at equilibrium is composed of a spectrum of molecular clusters (energy levels) whose sizes are governed by the Maxwell-Boltzmann distribution function. Brownian motions are attributed to equilibrium between suspensions and molecular clusters. Atomic (molecular) transition between different-size atomic (molecular) clusters (energy levels) is shown to result in emission/absorption of energy in accordance with Bohr's theory of atomic spectra. Physical space is identified as a tachyonic fluid that is Dirac's stochastic ether or de Broglie's hidden thermostat. Compressibility of physical space, in accordance with Planck's compressible ether, is shown to result in the Lorentz-Fitzgerald contraction, thus providing a causal explanation of the relativistic effect in accordance with the perceptions of Poincare and Lorentz. The invariant Schrodinger equation is derived from the invariant Bernoulli equation for incompressible potential flow. Following Heisenberg, a temporal uncertainty relation is introduced as Δν_β Δρ_β ≥ k.

  14. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  15. BLURRING OF BIOGEOGRAPHIC BOUNDARIES: A MULTIVARIATE ANALYSIS OF THE REGIONAL PATTERNS OF NATIVE AND NONINDIGENOUS SPECIES ASSEMBLAGES IN PACIFIC COAST ESTUARIES

    EPA Science Inventory

    Many, if not most, invaders have wide physiological tolerance limits and generalist habitat requirements. Consequently as a group nonindigenous species should have wider geographic distributions compared to native fauna. In turn, these broader distributions of nonindigenous speci...

  16. A climate-based multivariate extreme emulator of met-ocean-hydrological events for coastal flooding

    NASA Astrophysics Data System (ADS)

    Camus, Paula; Rueda, Ana; Mendez, Fernando J.; Tomas, Antonio; Del Jesus, Manuel; Losada, Iñigo J.

    2015-04-01

    Atmosphere-ocean general circulation models (AOGCMs) are useful for analyzing large-scale climate variability (long-term historical periods, future climate projections). However, applications such as coastal flood modeling require climate information at finer scale. Moreover, flooding events depend on multiple climate conditions: waves, surge levels from the open ocean and river discharge caused by precipitation. Therefore, a multivariate statistical downscaling approach is adopted, both to reproduce the relationships between variables and because of its low computational cost. The proposed method can be considered a hybrid approach which combines a probabilistic weather-type downscaling model with a stochastic weather generator component. Predictand distributions are reproduced by modeling the relationship with AOGCM predictors based on a physical division into weather types (Camus et al., 2012). The multivariate dependence structure of the predictand (extreme events) is introduced by linking the independent marginal distributions of the variables through a probabilistic copula regression (Ben Ayala et al., 2014). This hybrid approach is applied to the downscaling of AOGCM data to daily precipitation, maximum significant wave height and storm surge at different locations along the Spanish coast. Reanalysis data are used to assess the proposed method. A common predictor for the three variables involved is classified using a regression-guided clustering algorithm. The most appropriate statistical model (generalized extreme value distribution, Pareto distribution) for daily conditions is fitted. Stochastic simulation of the present climate is performed, obtaining the set of hydraulic boundary conditions needed for high-resolution coastal flood modeling. References: Camus, P., Menéndez, M., Méndez, F.J., Izaguirre, C., Espejo, A., Cánovas, V., Pérez, J., Rueda, A., Losada, I.J., Medina, R. (2014b). A weather-type statistical downscaling framework for ocean wave climate. Journal of Geophysical Research, doi: 10.1002/2014JC010141. Ben Ayala, M.A., Chebana, F., Ouarda, T.B.M.J. (2014). Probabilistic Gaussian Copula Regression Model for Multisite and Multivariable Downscaling, Journal of Climate, 27, 3331-3347.

  17. Exploring image data assimilation in the prospect of high-resolution satellite oceanic observations

    NASA Astrophysics Data System (ADS)

    Durán Moro, Marina; Brankart, Jean-Michel; Brasseur, Pierre; Verron, Jacques

    2017-07-01

    Satellite sensors increasingly provide high-resolution (HR) observations of the ocean. They supply observations of sea surface height (SSH) and of tracers of the dynamics such as sea surface salinity (SSS) and sea surface temperature (SST). In particular, the Surface Water Ocean Topography (SWOT) mission will provide measurements of the surface ocean topography at very high resolution, delivering unprecedented information on the meso-scale and submeso-scale dynamics. This study investigates the feasibility of using these measurements to reconstruct meso-scale features simulated by numerical models, in particular in the vertical dimension. A methodology to reconstruct three-dimensional (3D) multivariate meso-scale scenes is developed by using a HR numerical model of the Solomon Sea region. An inverse problem is defined in the framework of a twin experiment where synthetic observations are used. A true state, considered as the reference state, is chosen among the 3D multivariate states. In order to correct a first guess of this true state, a two-step analysis is carried out. A probability distribution of the first guess is defined and updated at each step of the analysis: (i) the first step applies the analysis scheme of a reduced-order Kalman filter to update the first-guess probability distribution using SSH observations; (ii) the second step minimizes a cost function using observations of HR image structure, and a new probability distribution is estimated. The analysis is extended to the vertical dimension using 3D multivariate empirical orthogonal functions (EOFs), and the probabilistic approach allows the probability distribution to be updated through the two-step analysis. Experiments show that the proposed technique succeeds in correcting a multivariate state using meso-scale and submeso-scale information contained in HR SSH and image structure observations. It also demonstrates how the surface information can be used to reconstruct the ocean state below the surface.

  18. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    PubMed

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root- n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  19. Estimation and model selection of semiparametric multivariate survival functions under general censorship

    PubMed Central

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2013-01-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286

  20. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

  1. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  2. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  3. Rasch Model Analysis with the BICAL Computer Program

    DTIC Science & Technology

    1976-09-01

    ...and persons which lead to measures that persist from trial to trial. The measurement model is essential in this process because it provides a framework...and his students. Section two derives the estimating equations for the Bernoulli (i.e., one trial per task) form and then generalizes to the...binomial form (several trials per task). Finally, goodness-of-fit tests are presented for assessing the adequacy of the calibration.

  4. Couple stress theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

    New models for plane curved rods based on linear couple stress theory of elasticity have been developed. 2-D theory is developed from general 2-D equations of linear couple stress elasticity using a special curvilinear system of coordinates related to the middle line of the rod as well as special hypothesis based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and rotation along with body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity including Hooke's law have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for Fourier coefficients have been obtained. Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and the 2-D equations of linear couple stress theory of elasticity in a special curvilinear system. The obtained equations can be used to calculate stress-strain and to model thin walled structures in macro, micro and nano scales when taking into account couple stress and rotation effects.

  5. State dependent arrival in bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations and threshold

    NASA Astrophysics Data System (ADS)

    Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.

    2017-11-01

    The objective of this paper is to analyse state-dependent arrival in a bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations, threshold and constant retrial policy. Primary customers arrive at the system in bulk with different arrival rates λa and λb. If arriving customers find the server busy, then the entire batch joins the orbit. Customers from the orbit request service one by one with constant retrial rate γ. On the other hand, if arriving customers find the server idle, then customers are served in batches according to the general bulk service rule. After service completion, customers may request service again with probability δ (feedback) or leave the system with probability 1 - δ. At the service completion epoch, if the orbit size is zero, then the server leaves for multiple vacations. The server continues the vacation until the orbit size reaches the value 'N' (N > b). At the vacation completion, if the orbit size is 'N', then the server becomes ready to provide service to customers from the main pool or from the orbit. For the designed queueing model, the probability generating function of the queue size at an arbitrary time is obtained by using the supplementary variable technique. Various performance measures are derived with suitable numerical illustrations.

  6. Bending of Euler-Bernoulli nanobeams based on the strain-driven and stress-driven nonlocal integral models: a numerical approach

    NASA Astrophysics Data System (ADS)

    Oskouie, M. Faraji; Ansari, R.; Rouhi, H.

    2018-04-01

    Eringen's nonlocal elasticity theory is extensively employed for the analysis of nanostructures because it is able to capture nanoscale effects. Previous studies have revealed that using the differential form of the strain-driven version of this theory leads to paradoxical results in some cases, such as bending analysis of cantilevers, and recourse must be made to the integral version. In this article, a novel numerical approach is developed for the bending analysis of Euler-Bernoulli nanobeams in the context of strain- and stress-driven integral nonlocal models. This numerical approach is proposed for direct solution, bypassing the difficulties related to converting the integral governing equation into a differential equation. First, the governing equation is derived based on both strain-driven and stress-driven nonlocal models by means of the minimum total potential energy. Also, in each case, the governing equation is obtained in both strong and weak forms. To solve the derived equations numerically, matrix differential and integral operators are constructed based upon the finite difference technique and the trapezoidal integration rule. It is shown that the proposed numerical approach can be efficiently applied to the strain-driven nonlocal model with the aim of resolving the mentioned paradoxes. Also, it is able to solve the problem based on the strain-driven model without the inconsistencies in the application of this model that are reported in the literature.

  7. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

    The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of CDn learning rule coincide with those of ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
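
    For reference, a minimal numpy sketch of one CD-1 parameter update in a Gaussian-Bernoulli RBM (unit-variance Gaussian visible units, Bernoulli hidden units) is given below. Dimensions, learning rate and the toy data are illustrative assumptions; the sketch is a generic textbook-style update, not the analytical setting studied in the paper.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_step(v0, W, b, c, lr=0.01, rng=None):
            """One contrastive-divergence (CD-1) update for a Gaussian-Bernoulli RBM.
            Visible units: unit-variance Gaussian with mean b + W.T @ h.
            Hidden units:  Bernoulli with p(h=1|v) = sigmoid(c + W @ v)."""
            rng = np.random.default_rng(rng)
            h0_prob = sigmoid(c + W @ v0)                        # positive phase
            h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0
            v1 = b + W.T @ h0 + rng.standard_normal(v0.shape)    # Gaussian reconstruction
            h1_prob = sigmoid(c + W @ v1)                        # negative phase
            W += lr * (np.outer(h0_prob, v0) - np.outer(h1_prob, v1))
            b += lr * (v0 - v1)
            c += lr * (h0_prob - h1_prob)
            return W, b, c

        n_vis, n_hid = 6, 3
        rng = np.random.default_rng(0)
        u = np.ones(n_vis) / np.sqrt(n_vis)
        cov = np.eye(n_vis) + 2.0 * np.outer(u, u)               # data with one dominant component
        W = 0.01 * rng.standard_normal((n_hid, n_vis))
        b, c = np.zeros(n_vis), np.zeros(n_hid)
        for _ in range(1000):
            v = rng.multivariate_normal(np.zeros(n_vis), cov)
            W, b, c = cd1_step(v, W, b, c)
        print("learned weight norm:", np.linalg.norm(W))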

  8. NEWTONP - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
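
    Two of the quantities NEWTONP computes can be reproduced with standard library calls, as shown in the sketch below (a modern Python illustration, not the original C program): Clopper-Pearson limits from beta quantiles, and the per-component probability p giving a target k-out-of-n system reliability, found here with a bracketing root solver rather than Newton's method.

        from scipy.stats import beta, binom
        from scipy.optimize import brentq

        def clopper_pearson(k, n, conf=0.95):
            """Exact (Clopper-Pearson) two-sided confidence limits for a Bernoulli p."""
            alpha = 1.0 - conf
            lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
            upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
            return lower, upper

        def p_for_k_out_of_n(k, n, reliability):
            """Component probability p such that P(at least k of n work) = reliability."""
            system_rel = lambda p: binom.sf(k - 1, n, p)   # sum_{i>=k} C(n,i) p^i (1-p)^(n-i)
            return brentq(lambda p: system_rel(p) - reliability, 1e-12, 1 - 1e-12)

        print(clopper_pearson(k=8, n=20))
        print(p_for_k_out_of_n(k=3, n=5, reliability=0.999))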

  9. Application of Maxent Multivariate Analysis to Define Climate-Change Effects on Species Distributions and Changes

    DTIC Science & Technology

    2014-09-01

    ...approaches. Ecological Modelling, Volume 200, Issues 1–2, pp. 1–19. Buhlmann, Kurt A., Thomas S.B. Akre, John B. Iverson, Deno Karapatakis, Russell A. ...statistical multivariate analysis to define the current and projected future range probability for species of interest to Army land managers. A software... [Figure 4: RCW omission rate and predicted area as a function of the cumulative threshold.]

  10. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.

  11. A Note on Asymptotic Joint Distribution of the Eigenvalues of a Noncentral Multivariate F Matrix.

    DTIC Science & Technology

    1984-11-01

    Krishnaiah (1982). Now, let us consider the samples drawn from the k multivariate normal populations. Let (X1t, ..., Xpt) denote the mean vector of the t...to multivariate problems. Sankhya, 4, 381-39... [7] Krishnaiah, P. R. (1982). Selection of variables in discriminant analysis. In Handbook of...Statistics, Volume 2 (P. R. Krishnaiah, editor), 805-820. North-Holland Publishing Company.

  12. Comparative forensic soil analysis of New Jersey state parks using a combination of simple techniques with multivariate statistics.

    PubMed

    Bonetti, Jennifer; Quarino, Lawrence

    2014-05-01

    This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2 , and a loss on ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.
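
    The multivariate workflow described above (dimension reduction followed by discriminant analysis with hold-one-out cross-validation) can be sketched with scikit-learn as below. The feature dimensions and synthetic data are placeholders for the particle-size, pH and loss-on-ignition measurements, not the study's actual dataset.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        n_parks, n_samples_per_park, n_features = 4, 10, 6    # placeholder dimensions
        X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_samples_per_park, n_features))
                       for i in range(n_parks)])              # synthetic stand-in for soil features
        y = np.repeat(np.arange(n_parks), n_samples_per_park)

        model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearDiscriminantAnalysis())
        scores = cross_val_score(model, X, y, cv=LeaveOneOut())
        print(f"hold-one-out error rate: {100 * (1 - scores.mean()):.2f}%")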

  13. Impacts of rising health care costs on families with employment-based private insurance: a national analysis with state fixed effects.

    PubMed

    Yu, Hao; Dick, Andrew W

    2012-10-01

    Given the rapid growth of health care costs, some experts were concerned with erosion of employment-based private insurance (EBPI). This empirical analysis aims to quantify the concern. Using the National Health Account, we generated a cost index to represent state-level annual cost growth. We merged it with the 1996-2003 Medical Expenditure Panel Survey. The unit of analysis is the family. We conducted both bivariate and multivariate logistic analyses. The bivariate analysis found a significant inverse association between the cost index and the proportion of families receiving an offer of EBPI. The multivariate analysis showed that the cost index was significantly negatively associated with the likelihood of receiving an EBPI offer for the entire sample and for families in the first, second, and third quartiles of income distribution. The cost index was also significantly negatively associated with the proportion of families with EBPI for the entire year for each family member (EBPI-EYEM). The multivariate analysis confirmed significance of the relationship for the entire sample, and for families in the second and third quartiles of income distribution. Among the families with EBPI-EYEM, there was a positive relationship between the cost index and this group's likelihood of having out-of-pocket expenditures exceeding 10 percent of family income. The multivariate analysis confirmed significance of the relationship for the entire group and for families in the second and third quartiles of income distribution. Rising health costs reduce EBPI availability and enrollment, and the financial protection provided by it, especially for middle-class families. © Health Research and Educational Trust.

  14. Impacts of Rising Health Care Costs on Families with Employment-Based Private Insurance: A National Analysis with State Fixed Effects

    PubMed Central

    Yu, Hao; Dick, Andrew W

    2012-01-01

    Background Given the rapid growth of health care costs, some experts were concerned with erosion of employment-based private insurance (EBPI). This empirical analysis aims to quantify the concern. Methods Using the National Health Account, we generated a cost index to represent state-level annual cost growth. We merged it with the 1996–2003 Medical Expenditure Panel Survey. The unit of analysis is the family. We conducted both bivariate and multivariate logistic analyses. Results The bivariate analysis found a significant inverse association between the cost index and the proportion of families receiving an offer of EBPI. The multivariate analysis showed that the cost index was significantly negatively associated with the likelihood of receiving an EBPI offer for the entire sample and for families in the first, second, and third quartiles of income distribution. The cost index was also significantly negatively associated with the proportion of families with EBPI for the entire year for each family member (EBPI-EYEM). The multivariate analysis confirmed significance of the relationship for the entire sample, and for families in the second and third quartiles of income distribution. Among the families with EBPI-EYEM, there was a positive relationship between the cost index and this group's likelihood of having out-of-pocket expenditures exceeding 10 percent of family income. The multivariate analysis confirmed significance of the relationship for the entire group and for families in the second and third quartiles of income distribution. Conclusions Rising health costs reduce EBPI availability and enrollment, and the financial protection provided by it, especially for middle-class families. PMID:22417314

  15. Evaluating online data of water quality changes in a pilot drinking water distribution system with multivariate data exploration methods.

    PubMed

    Mustonen, Satu M; Tissari, Soile; Huikko, Laura; Kolehmainen, Mikko; Lehtola, Markku J; Hirvonen, Arja

    2008-05-01

    The distribution of drinking water generates soft deposits and biofilms in the pipelines of distribution systems. Disturbances in water distribution can detach these deposits and biofilms and thus deteriorate the water quality. We studied the effects of simulated pressure shocks on the water quality with online analysers. The study was conducted with copper and composite plastic pipelines in a pilot distribution system. The online data gathered during the study was evaluated with Self-Organising Map (SOM) and Sammon's mapping, which are useful methods in exploring large amounts of multivariate data. The objective was to test the usefulness of these methods in pinpointing the abnormal water quality changes in the online data. The pressure shocks increased temporarily the number of particles, turbidity and electrical conductivity. SOM and Sammon's mapping were able to separate these situations from the normal data and thus make those visible. Therefore these methods make it possible to detect abrupt changes in water quality and thus to react rapidly to any disturbances in the system. These methods are useful in developing alert systems and predictive applications connected to online monitoring.

  16. Modeling Multi-Variate Gaussian Distributions and Analysis of Higgs Boson Couplings with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Krohn, Olivia; Armbruster, Aaron; Gao, Yongsheng; Atlas Collaboration

    2017-01-01

    Software tools developed for the purpose of modeling CERN LHC pp collision data to aid in its interpretation are presented. Some measurements are not adequately described by a Gaussian distribution; thus an interpretation assuming Gaussian uncertainties will inevitably introduce bias, necessitating analytical tools to recreate and evaluate non-Gaussian features. One example is the measurements of Higgs boson production rates in different decay channels, and the interpretation of these measurements. The ratios of data to Standard Model expectations (μ) for five arbitrary signals were modeled by building five Poisson distributions with mixed signal contributions such that the measured values of μ are correlated. Algorithms were designed to recreate probability distribution functions of μ as multi-variate Gaussians, where the standard deviation (σ) and correlation coefficients (ρ) are parametrized. There was good success with modeling 1-D likelihood contours of μ, and the multi-dimensional distributions were well modeled within 1σ but the model began to diverge after 2σ due to unmerited assumptions in developing ρ. Future plans to improve the algorithms and develop a user-friendly analysis package will also be discussed. NSF International Research Experiences for Students

  17. Atrial Electrogram Fractionation Distribution before and after Pulmonary Vein Isolation in Human Persistent Atrial Fibrillation-A Retrospective Multivariate Statistical Analysis.

    PubMed

    Almeida, Tiago P; Chu, Gavin S; Li, Xin; Dastagir, Nawshin; Tuan, Jiun H; Stafford, Peter J; Schlindwein, Fernando S; Ng, G André

    2017-01-01

    Purpose: Complex fractionated atrial electrograms (CFAE)-guided ablation after pulmonary vein isolation (PVI) has been used for persistent atrial fibrillation (persAF) therapy. This strategy has shown suboptimal outcomes due to, among other factors, undetected changes in the atrial tissue following PVI. In the present work, we investigate CFAE distribution before and after PVI in patients with persAF using a multivariate statistical model. Methods: 207 pairs of atrial electrograms (AEGs) were collected before and after PVI respectively, from corresponding LA regions in 18 persAF patients. Twelve attributes were measured from the AEGs, before and after PVI. Statistical models based on multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) have been used to characterize the atrial regions and AEGs. Results: PVI significantly reduced CFAEs in the LA (70 vs. 40%; P < 0.0001). Four types of LA regions were identified, based on the AEGs characteristics: (i) fractionated before PVI that remained fractionated after PVI (31% of the collected points); (ii) fractionated that converted to normal (39%); (iii) normal prior to PVI that became fractionated (9%) and; (iv) normal that remained normal (21%). Individually, the attributes failed to distinguish these LA regions, but multivariate statistical models were effective in their discrimination ( P < 0.0001). Conclusion: Our results have unveiled that there are LA regions resistant to PVI, while others are affected by it. Although, traditional methods were unable to identify these different regions, the proposed multivariate statistical model discriminated LA regions resistant to PVI from those affected by it without prior ablation information.

  18. Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2017-01-01

    Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…

  19. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  20. Multivariate non-normally distributed random variables in climate research - introduction to the copula approach

    NASA Astrophysics Data System (ADS)

    Schölzel, C.; Friederichs, P.

    2008-10-01

    Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared with alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities as well as the limitations of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
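
    A minimal Gaussian-copula sketch in the spirit of the station example is given below: gamma and normal marginals are fitted to synthetic precipitation and temperature data, dependence is estimated on the normal scores, and new pairs are simulated. The data, marginal choices and parameters are illustrative assumptions, not the paper's station analysis.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Synthetic "observations": positively dependent precipitation and temperature
        z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=2000)
        precip = stats.gamma(a=2.0, scale=3.0).ppf(stats.norm.cdf(z[:, 0]))
        temp = 10.0 + 5.0 * z[:, 1]

        # 1) Fit the marginals
        a, loc, scale = stats.gamma.fit(precip, floc=0)
        mu, sigma = stats.norm.fit(temp)

        # 2) Transform to normal scores and estimate the copula correlation
        u = np.column_stack([stats.gamma.cdf(precip, a, loc, scale),
                             stats.norm.cdf(temp, mu, sigma)])
        scores = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
        rho = np.corrcoef(scores.T)[0, 1]

        # 3) Simulate new dependent pairs from the fitted Gaussian copula
        z_new = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
        precip_sim = stats.gamma.ppf(stats.norm.cdf(z_new[:, 0]), a, loc, scale)
        temp_sim = stats.norm.ppf(stats.norm.cdf(z_new[:, 1]), mu, sigma)
        print(f"copula correlation (normal scores): {rho:.2f}")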

  1. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  2. Empirical performance of the multivariate normal universal portfolio

    NASA Astrophysics Data System (ADS)

    Tan, Choon Peng; Pang, Sook Theng

    2013-09-01

    Universal portfolios generated by the multivariate normal distribution are studied with emphasis on the case where variables are dependent, namely, the covariance matrix is not diagonal. The moving-order multivariate normal universal portfolio requires very long running times and large computer memory in its implementation. With the objective of reducing memory and running time, the finite-order universal portfolio is introduced. Some stock-price data sets are selected from the local stock exchange and the finite-order universal portfolio is run on the data sets, for small finite order. Empirically, it is shown that the portfolio can outperform the moving-order Dirichlet universal portfolio of Cover and Ordentlich [2] for certain parameters in the selected data sets.

  3. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.

  4. Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.

    DTIC Science & Technology

    1982-12-20

    Intervals. For more details on these test procedures refer to Gabriel [7], Krishnaiah ([10], [11]), Srivastava [16], and others. As noted in Consul...723. [4] Consul, P. C. (1969), "The Exact Distributions of Likelihood Criteria for Different Hypotheses," in P. R. Krishnaiah (Ed.), Multivariate...1178. [7] Gabriel, K. R. (1969), "A Comparison of Some Methods of Simultaneous Inference in MANOVA," in P. R. Krishnaiah (Ed.), Multivariate Analysis-II

  5. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME for developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginals, or marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas reflect a good statistical performance in correlation simulation. Also, the derived copulas, capturing more patterns which traditional correlation coefficients cannot reflect, provide an efficient way in other applied scenarios concerning hydrometeorological multivariate modelling.

  6. Radial profiles of velocity and pressure for condensation-induced hurricanes

    NASA Astrophysics Data System (ADS)

    Makarieva, A. M.; Gorshkov, V. G.

    2011-02-01

    The Bernoulli integral in the form of an algebraic equation is obtained for the hurricane air flow as the sum of the kinetic energy of wind and the condensational potential energy. With an account for the eye rotation energy and the decrease of angular momentum towards the hurricane center it is shown that the theoretical profiles of pressure and velocity agree well with observations for intense hurricanes. The previous order of magnitude estimates obtained in pole approximation are confirmed.

  7. Human-Swarm Interactions Based on Managing Attractors

    DTIC Science & Technology

    2014-03-01

    means that agent j is visible to agent i at time t. Each a_ij(t) is determined at time t according to a Bernoulli random variable with parameter p_ij(t)...angular momentum, m_group, and group polarization, p_group [9, 17]. The m_group is a measure of the degree of rotation of the group about its centroid...0.1 seconds. [Figure 2: The group momentum and polarization as the radius of orientation is increased and decreased.] 3. ATTRACTORS AND


  8. Theoretical Limits of Damping Attainable by Smart Beams with Rate Feedback

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1997-01-01

    Using a generally accepted model we present a comprehensive analysis (within the page limitation) of an Euler- Bernoulli beam with PZT sensor-actuator and pure rate feedback. The emphasis is on the root locus - the dependence of the attainable damping on the feedback gain. There is a critical value of the gain beyond which the damping decreases to zero. We construct the time-domain response using semigroup theory, and show that the eigenfunctions form a Riesz basis, leading to a 'modal' expansion.

  9. A Nonlinear Finite Element Framework for Viscoelastic Beams Based on the High-Order Reddy Beam Theory

    DTIC Science & Technology

    2012-06-09

    employed theories are the Euler-Bernoulli beam theory (EBT) and the Timoshenko beam theory (TBT). The major deficiency associated with the EBT is failure to...account for deformations associated with shearing. The TBT relaxes the normality assumption of the EBT and admits a constant state of shear strain...on a given cross-section. As a result, the TBT necessitates the use of shear correction coefficients in order to accurately predict transverse

  10. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
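
    The pipeline the abstract describes (power-law transform, then Gaussian-based detection) can be illustrated compactly. The sketch below is a hedged stand-in, not ERIM's model: each band of synthetic, strictly positive background data is Box-Cox transformed with scipy, multivariate Gaussian statistics are estimated in the transformed space, and candidate pixels are scored with a Mahalanobis-distance detector. Data and the anomalous test vector are assumptions.

        # Hedged sketch of the general idea: transform positive, non-Gaussian band data
        # with a Box-Cox power law, estimate multivariate Gaussian statistics in the
        # transformed space, and score pixels with a Mahalanobis-distance detector.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        background = rng.lognormal(mean=0.0, sigma=0.5, size=(5000, 3))   # 3 "bands"

        # Box-Cox transform each band (requires positive data); keep the fitted lambdas.
        transformed, lambdas = [], []
        for b in range(background.shape[1]):
            z, lam = stats.boxcox(background[:, b])
            transformed.append(z)
            lambdas.append(lam)
        Z = np.column_stack(transformed)

        mu = Z.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))

        def mahalanobis_score(x):
            """Score a raw (untransformed) pixel vector under the fitted Gaussian model."""
            zx = np.array([stats.boxcox(np.atleast_1d(v), lmbda=l)[0]
                           for v, l in zip(x, lambdas)])
            d = zx - mu
            return float(d @ cov_inv @ d)

        print(mahalanobis_score(background[0]))                 # typical background pixel
        print(mahalanobis_score(np.array([8.0, 9.0, 7.5])))     # bright, anomalous pixel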

  11. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    PubMed

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
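
    A toy numeric sketch of the two steps described above helps fix ideas; it is an illustration under assumed values (emitter position, PSF width, photon count), not the authors' implementation: an image is formed by summing a Gaussian uncertainty kernel at each photon's estimated position of origin, and a single emitter is localised as the mean of the photon positions, which is the maximum-likelihood estimate for a symmetric Gaussian point-spread function.

        # Illustrative sketch: PEDS-style image formation by summing per-photon Gaussian
        # kernels, plus single-particle localisation as the mean of photon positions.
        import numpy as np

        rng = np.random.default_rng(2)
        true_xy = np.array([0.53, 0.47])          # emitter position (microns), assumed
        psf_sigma = 0.10                          # PSF width (microns), assumed
        photons = rng.normal(true_xy, psf_sigma, size=(200, 2))   # ~200 photons/frame

        # Localisation: sample mean of photon positions (ML for a symmetric Gaussian PSF).
        estimate = photons.mean(axis=0)
        print("localisation error (nm):", 1e3 * np.linalg.norm(estimate - true_xy))

        # Image formation: sum per-photon Gaussian kernels on a pixel grid.
        grid = np.linspace(0.0, 1.0, 128)
        gx, gy = np.meshgrid(grid, grid, indexing="ij")
        image = np.zeros_like(gx)
        for px, py in photons:
            image += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * psf_sigma ** 2))
        print("image max at:", np.unravel_index(image.argmax(), image.shape))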

  12. [Multivariate geostatistics and GIS-based approach to study the spatial distribution and sources of heavy metals in agricultural soil in the Pearl River Delta, China].

    PubMed

    Cai, Li-mei; Ma, Jin; Zhou, Yong-zhang; Huang, Lan-chun; Dou, Lei; Zhang, Cheng-bo; Fu, Shan-ming

    2008-12-01

    One hundred and eighteen surface soil samples were collected from the Dongguan City, and analyzed for concentration of Cu, Zn, Ni, Cr, Pb, Cd, As, Hg, pH and OM. The spatial distribution and sources of soil heavy metals were studied using multivariate geostatistical methods and GIS technique. The results indicated concentrations of Cu, Zn, Ni, Pb, Cd and Hg were beyond the soil background content in Guangdong province, and especially concentrations of Pb, Cd and Hg were greatly beyond the content. The results of factor analysis group Cu, Zn, Ni, Cr and As in Factor 1, Pb and Hg in Factor 2 and Cd in Factor 3. The spatial maps based on geostatistical analysis show definite association of Factor 1 with the soil parent material, Factor 2 was mainly affected by industries. The spatial distribution of Factor 3 was attributed to anthropogenic influence.

  13. Shape model of the maxillary dental arch using Fourier descriptors with an application in the rehabilitation for edentulous patient.

    PubMed

    Rijal, Omar M; Abdullah, Norli A; Isa, Zakiah M; Noor, Norliza M; Tawfiq, Omar F

    2013-01-01

    The knowledge of teeth positions on the maxillary arch is useful in the rehabilitation of the edentulous patient. A combination of angular (θ), and linear (l) variables representing position of four teeth were initially proposed as the shape descriptor of the maxillary dental arch. Three categories of shape were established, each having a multivariate normal distribution. It may be argued that 4 selected teeth on the standardized digital images of the dental casts could be considered as insufficient with respect to representing shape. However, increasing the number of points would create problems with dimensions and proof of existence of the multivariate normal distribution is extremely difficult. This study investigates the ability of Fourier descriptors (FD) using all maxillary teeth to find alternative shape models. Eight FD terms were sufficient to represent 21 points on the arch. Using these 8 FD terms as an alternative shape descriptor, three categories of shape were verified, each category having the complex normal distribution.

  14. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  15. Generating an Empirical Probability Distribution for the Andrews-Pregibon Statistic.

    ERIC Educational Resources Information Center

    Jarrell, Michele G.

    A probability distribution was developed for the Andrews-Pregibon (AP) statistic. The statistic, developed by D. F. Andrews and D. Pregibon (1978), identifies multivariate outliers. It is a ratio of the determinant of the data matrix with an observation deleted to the determinant of the entire data matrix. Although the AP statistic has been used…

  16. Alternatives for using multivariate regression to adjust prospective payment rates

    PubMed Central

    Sheingold, Steven H.

    1990-01-01

    Multivariate regression analysis has been used in structuring three of the adjustments to Medicare's prospective payment rates. Because the indirect-teaching adjustment, the disproportionate-share adjustment, and the adjustment for large cities are responsible for distributing approximately $3 billion in payments each year, the specification of regression models for these adjustments is of critical importance. In this article, the application of regression for adjusting Medicare's prospective rates is discussed, and the implications that differing specifications could have for these adjustments are demonstrated. PMID:10113271

  17. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Application of a multivariate normal distribution methodology to the dissociation of doubly ionized molecules: The DMDS (CH3 -SS-CH3 ) case.

    PubMed

    Varas, Lautaro R; Pontes, F C; Santos, A C F; Coutinho, L H; de Souza, G G B

    2015-09-15

    The ion-ion-coincidence mass spectroscopy technique brings useful information about the fragmentation dynamics of doubly and multiply charged ionic species. We advocate the use of a matrix-parameter methodology in order to represent and interpret the entire ion-ion spectra associated with the ionic dissociation of doubly charged molecules. This method makes it possible, among other things, to infer fragmentation processes and to extract information about overlapped ion-ion coincidences. This important piece of information is difficult to obtain from other previously described methodologies. A Wiley-McLaren time-of-flight mass spectrometer was used to discriminate the positively charged fragment ions resulting from the sample ionization by a pulsed 800 eV electron beam. We exemplify the application of this methodology by analyzing the fragmentation and ionic dissociation of the dimethyl disulfide (DMDS) molecule as induced by fast electrons. The doubly charged dissociation was analyzed using the Multivariate Normal Distribution. The ion-ion spectrum of the DMDS molecule was obtained at an incident electron energy of 800 eV and was matrix represented using the Multivariate Distribution theory. The proposed methodology allows us to distinguish information among [CHnSHn]+/[CH3]+ (n = 1-3) fragment ions in the ion-ion coincidence spectra using ion-ion coincidence data. Using the momenta balance methodology for the inferred parameters, a secondary decay mechanism is proposed for the [CHS]+ ion formation. As an additional check on the methodology, previously published data on the SiF4 molecule was re-analyzed with the present methodology and the results were shown to be statistically equivalent. The use of a Multivariate Normal Distribution allows for the representation of the whole ion-ion mass spectrum of doubly or multiply ionized molecules as a combination of parameters and the extraction of information among overlapped data. We have successfully applied this methodology to the analysis of the fragmentation of the DMDS molecule. Copyright © 2015 John Wiley & Sons, Ltd.

  19. A New Approach in Generating Meteorological Forecasts for Ensemble Streamflow Forecasting using Multivariate Functions

    NASA Astrophysics Data System (ADS)

    Khajehei, S.; Madadgar, S.; Moradkhani, H.

    2014-12-01

    The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using the statistical methods based on the conditional probability density functions (pdf). However, one of the requirements for current methods is to assume the Gaussian distribution for the marginal distribution of the observed and modeled meteorology. Here we propose a Bayesian approach based on Copula functions to develop the conditional distribution of precipitation forecast needed in deriving a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach in capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distribution of univariate marginal distributions, which are capable to model the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly forecast of CPC with 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with Ensemble Pre-Processor approach as a common procedure used by National Weather Service River forecast centers in reproducing observed climatology during a ten-year verification period (2000-2010).
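
    The core mechanism, deriving the conditional distribution of the observation given a forecast through a copula, can be sketched with a Gaussian copula (one convenient member of the copula family the abstract refers to) and empirical marginals. Everything below, including the synthetic forecast/observation pairs and the conditioning value, is an assumption for illustration; it is not the Bayesian procedure of the paper.

        # Minimal Gaussian-copula sketch: couple forecast and observation through their
        # rank-transformed (normal-score) values, then sample the conditional
        # distribution of the observation given a new forecast value.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n = 1000
        forecast = rng.gamma(shape=2.0, scale=5.0, size=n)                    # monthly forecast
        observed = 0.7 * forecast + rng.gamma(shape=2.0, scale=2.0, size=n)   # "truth"

        def to_normal_scores(x):
            ranks = stats.rankdata(x) / (len(x) + 1.0)        # empirical CDF in (0, 1)
            return stats.norm.ppf(ranks)

        zf, zo = to_normal_scores(forecast), to_normal_scores(observed)
        rho = np.corrcoef(zf, zo)[0, 1]                       # copula correlation

        # Condition on a new forecast value: conditional normal in score space,
        # then map back to the observation's empirical marginal by quantiles.
        new_forecast = 12.0
        zf_new = stats.norm.ppf(stats.percentileofscore(forecast, new_forecast) / 100.0)
        cond_draws = rng.normal(rho * zf_new, np.sqrt(1.0 - rho ** 2), size=2000)
        obs_quantiles = np.quantile(observed, stats.norm.cdf(cond_draws))
        print("conditional mean / 90% interval:",
              obs_quantiles.mean(), np.quantile(obs_quantiles, [0.05, 0.95]))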

  20. Self-tuning multivariable pole placement control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.

    1992-01-01

    This paper presents the design and implementation of a multivariable self-tuning temperature controller for the control of lead bromide crystal growth. The crystal grows inside a multizone transparent furnace. There are eight interacting heating zones shaping the axial temperature distribution inside the furnace. A multi-input, multi-output furnace model is identified on-line by a recursive least squares estimation algorithm. A multivariable pole placement controller based on this model is derived and implemented. Comparison between single-input, single-output and multi-input, multi-output self-tuning controllers demonstrates that the zone-to-zone interactions can be minimized better by a multi-input, multi-output controller design. This directly affects the quality of crystal grown.
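
    The on-line identification step the abstract mentions, a recursive least squares (RLS) estimate of a multi-input, multi-output ARX-type model, can be sketched in a few lines. The toy two-zone "furnace", model order, and forgetting factor below are assumptions for illustration only, not the paper's eight-zone identification.

        # Sketch of a recursive least-squares (RLS) update for on-line identification of
        # a small MIMO ARX model.  Dimensions, forgetting factor and data are assumed.
        import numpy as np

        rng = np.random.default_rng(4)
        n_out, n_in = 2, 2                     # toy system: 2 zones instead of 8
        lam = 0.98                             # forgetting factor

        # Regressor [y(t-1); u(t-1)] -> theta maps the regressor to y(t), one row per output.
        n_reg = n_out + n_in
        theta = np.zeros((n_out, n_reg))
        P = 1e3 * np.eye(n_reg)                # covariance of the parameter estimate

        A_true = np.array([[0.8, 0.1], [0.05, 0.85]])
        B_true = np.array([[0.5, 0.0], [0.1, 0.4]])

        y = np.zeros(n_out)
        for t in range(500):
            u = rng.normal(size=n_in)
            phi = np.concatenate([y, u])                                  # regressor at time t
            y = A_true @ y + B_true @ u + 0.01 * rng.normal(size=n_out)   # plant response

            # RLS update shared by all outputs (common regressor).
            k = P @ phi / (lam + phi @ P @ phi)         # gain vector
            err = y - theta @ phi                        # prediction error per output
            theta += np.outer(err, k)
            P = (P - np.outer(k, phi @ P)) / lam

        print("estimated [A | B]:\n", np.round(theta, 2))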

  1. Inference for the Bivariate and Multivariate Hidden Truncated Pareto(type II) and Pareto(type IV) Distribution and Some Measures of Divergence Related to Incompatibility of Probability Distribution

    ERIC Educational Resources Information Center

    Ghosh, Indranil

    2011-01-01

    Consider a discrete bivariate random variable (X, Y) with possible values x[subscript 1], x[subscript 2],..., x[subscript I] for X and y[subscript 1], y[subscript 2],..., y[subscript J] for Y. Further suppose that the corresponding families of conditional distributions, for X given values of Y and of Y for given values of X are available. We…

  2. [Rank distributions in community ecology from the statistical viewpoint].

    PubMed

    Maksimov, V N

    2004-01-01

    Traditional statistical methods for definition of empirical functions of abundance distribution (population, biomass, production, etc.) of species in a community are applicable for processing of multivariate data contained in the above quantitative indices of the communities. In particular, evaluation of moments of distribution suffices for convolution of the data contained in a list of species and their abundance. At the same time, the species should be ranked in the list in ascending rather than descending population and the distribution models should be analyzed on the basis of the data on abundant species only.

  3. Faà di Bruno's formula and the distributions of random partitions in population genetics and physics.

    PubMed

    Hoppe, Fred M

    2008-06-01

    We show that the formula of Faà di Bruno for the derivative of a composite function gives, in special cases, the sampling distributions in population genetics that are due to Ewens and to Pitman. The composite function is the same in each case. Other sampling distributions also arise in this way, such as those arising from Dirichlet, multivariate hypergeometric, and multinomial models, special cases of which correspond to Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann distributions in physics. Connections are made to compound sampling models.

  4. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating fast and with good accuracy cellular and molecular dynamics. The application of the method is illustrated by synthesising for the first time microelectronic CytoMimetic topologies which simulate successfully: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre, while consuming between 1 and 12 microwatts of power. Simulations of fabrication-related variability results are also presented. PMID:23393550

  5. Nonlocal theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-09-01

    New models for plane curved rods based on linear nonlocal theory of elasticity have been developed. The 2-D theory is developed from general 2-D equations of linear nonlocal elasticity using a special curvilinear system of coordinates related to the middle line of the rod along with special hypothesis based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity including nonlocal constitutive relations have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of local elasticity, a system of differential equations in terms of displacements for Fourier coefficients has been obtained. First and second order approximations have been considered in detail. Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and the 2-D equations of linear nonlocal theory of elasticity which are considered in a special curvilinear system of coordinates related to the middle line of the rod. The obtained equations can be used to calculate stress-strain and to model thin walled structures in micro- and nanoscales when taking into account size dependent and nonlocal effects.

  6. Parallel Planes Information Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian

    2015-12-26

    This software presents a user-provided multivariate dataset as an interactive three dimensional visualization so that the user can explore the correlation between variables in the observations and the distribution of observations among the variables.
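
    The record describes interactive 3-D exploration software; as a rough, hedged analogue (not the OSTI-hosted tool itself), a 2-D parallel-coordinates view of a multivariate dataset can be produced with pandas' built-in plotting. The iris measurements below are only a stand-in dataset.

        # Quick parallel-coordinates analogue for exploring correlations among variables
        # in a multivariate dataset; not the software described in the record.
        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates
        from sklearn.datasets import load_iris

        iris = load_iris(as_frame=True)
        df = iris.frame.rename(columns={"target": "species"})
        df["species"] = df["species"].map(dict(enumerate(iris.target_names)))

        parallel_coordinates(df, "species", colormap="viridis", alpha=0.4)
        plt.title("Parallel-coordinates view of a multivariate dataset")
        plt.tight_layout()
        plt.show()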

  7. Fast-NPS-A Markov Chain Monte Carlo-based analysis tool to obtain structural information from single-molecule FRET measurements

    NASA Astrophysics Data System (ADS)

    Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens

    2017-10-01

    The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For an efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided. Programme Files doi:http://dx.doi.org/10.17632/7ztzj63r68.1 Licencing provisions: Apache-2.0 Programming language: GUI in MATLAB (The MathWorks) and the core sampling engine in C++ Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data. Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.

  8. Predicting Potential Changes in Suitable Habitat and Distribution by 2100 for Tree Species of the Eastern United States

    Treesearch

    Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz; Mark W. Schwartz

    2005-01-01

    We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 x CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...

  9. Macro-Econophysics

    NASA Astrophysics Data System (ADS)

    Aoyama, Hideaki; Fujiwara, Yoshi; Ikeda, Yuichi; Iyetomi, Hiroshi; Souma, Wataru; Yoshikawa, Hiroshi

    2017-07-01

    Preface; Foreword, Acknowledgements, List of tables; List of figures, prologue, 1. Introduction: reconstructing macroeconomics; 2. Basic concepts in statistical physics and stochastic models; 3. Income and firm-size distributions; 4. Productivity distribution and related topics; 5. Multivariate time-series analysis; 6. Business cycles; 7. Price dynamics and inflation/deflation; 8. Complex network, community analysis, visualization; 9. Systemic risks; Appendix A: computer program for beginners; Epilogue; Bibliography; Index.

  10. Identification of unknown spatial load distributions in a vibrating Euler-Bernoulli beam from limited measured data

    NASA Astrophysics Data System (ADS)

    Hasanov, Alemdar; Kawano, Alexandre

    2016-05-01

    Two types of inverse source problems of identifying asynchronously distributed spatial loads governed by the Euler-Bernoulli beam equation ρ(x) w_tt + μ(x) w_t + (EI(x) w_xx)_xx - T_r w_xx = Σ_{m=1}^{M} g_m(t) f_m(x), (x,t) ∈ Ω_T := (0,l)×(0,T), with hinged-clamped ends (w(0,t) = w_xx(0,t) = 0, w(l,t) = w_x(l,t) = 0, t ∈ (0,T)), are studied. Here g_m(t) are linearly independent functions, describing an asynchronous temporal loading, and f_m(x) are the spatial load distributions. In the first identification problem the values ν_k(t), k = 1,…,K, of the deflection w(x,t) are assumed to be known, as measured output data, in a neighbourhood of the finite set of points P := {x_k ∈ (0,l), k = 1,…,K} ⊂ (0,l), corresponding to the internal points of a continuous beam, for all t ∈ (0,T). In the second identification problem the values θ_k(t), k = 1,…,K, of the slope w_x(x,t) are assumed to be known, as measured output data, in a neighbourhood of the same set of points P for all t ∈ (0,T). These inverse source problems will be defined subsequently as the problems ISP1 and ISP2. The general purpose of this study is to develop mathematical concepts and tools that are capable of providing effective numerical algorithms for the numerical solution of the considered class of inverse problems. Note that both measured output data ν_k(t) and θ_k(t) contain random noise. In the first part of the study we prove that each measured output data ν_k(t) and θ_k(t), k = 1,…,K, can uniquely determine the unknown functions f_m ∈ H^{-1}((0,l)), m = 1,…,M. In the second part of the study we introduce the input-output operators K_d : L²(0,T) → L²(0,T), (K_d f)(t) := w(x,t; f), x ∈ P, with f(x) := (f_1(x),…,f_M(x)), and K_s : L²(0,T) → L²(0,T), (K_s f)(t) := w_x(x,t; f), x ∈ P, corresponding to the problems ISP1 and ISP2, and then reformulate these problems as the operator equations K_d f = ν and K_s f = θ, where ν(t) := (ν_1(t),…,ν_K(t)) and θ(t) := (θ_1(t),…,θ_K(t)). Since both measured output data contain random noise, we use the most prominent regularisation method, Tikhonov regularisation, introducing the regularised cost functionals J_1^α(f) := (1/2)||K_d f - ν||²_{L²(0,T)} + (α/2)||f||²_{L²(0,T)} and J_2^α(f) := (1/2)||K_s f - θ||²_{L²(0,T)} + (α/2)||f||²_{L²(0,T)}. Using a priori estimates for the weak solution of the direct problem and the Tikhonov regularisation method combined with the adjoint problem approach, we prove that the Fréchet gradients J_1'(f) and J_2'(f) of both cost functionals can explicitly be derived via the corresponding weak solutions of adjoint problems and the known temporal loads g_m(t). Moreover, we show that these gradients are Lipschitz continuous, which allows the use of convergent gradient-type iteration algorithms. Two applications of the proposed theory are presented. It is shown that solvability results for inverse source problems related to the synchronous loading case, with a single interior measured datum, are special cases of the obtained results for asynchronously distributed spatial load cases.

  11. An alternative derivation of the stationary distribution of the multivariate neutral Wright-Fisher model for low mutation rates with a view to mutation rate estimation from site frequency data.

    PubMed

    Schrempf, Dominik; Hobolth, Asger

    2017-04-01

    Recently, Burden and Tang (2016) provided an analytical expression for the stationary distribution of the multivariate neutral Wright-Fisher model with low mutation rates. In this paper we present a simple, alternative derivation that illustrates the approximation. Our proof is based on the discrete multivariate boundary mutation model which has three key ingredients. First, the decoupled Moran model is used to describe genetic drift. Second, low mutation rates are assumed by limiting mutations to monomorphic states. Third, the mutation rate matrix is separated into a time-reversible part and a flux part, as suggested by Burden and Tang (2016). An application of our result to data from several great apes reveals that the assumption of stationarity may be inadequate or that other evolutionary forces like selection or biased gene conversion are acting. Furthermore we find that the model with a reversible mutation rate matrix provides a reasonably good fit to the data compared to the one with a non-reversible mutation rate matrix. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage against univariate BMA is an increase in reliability when the forecast system is changing due to model availability.

  13. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    NASA Astrophysics Data System (ADS)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of the use of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods of generating and analyzing such data, but these methods almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may even be used with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.

  14. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    NASA Astrophysics Data System (ADS)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    The MANOVA and MANCOVA are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional or vector-valued observations. The assumption of a Gaussian distribution is replaced with a multivariate Gaussian distribution for the observation vectors and residual terms in the statistical models of these techniques. The objective of MANCOVA is to determine whether there are statistically reliable mean differences between groups after adjusting for the covariates. When randomized assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting dependent variables as if all subjects scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and homogeneity of the regression coefficient vectors is also tested.
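
    As a hedged sketch of the kind of analysis described (not the authors' extension), a one-way MANCOVA with two dependent variables, one grouping factor and one covariate can be fitted with statsmodels' MANOVA by including the covariate in the model formula. The synthetic data, group effects and covariate are assumptions.

        # Hedged sketch: one-way MANCOVA-style analysis via statsmodels MANOVA with the
        # covariate x included in the formula.  Data are synthetic.
        import numpy as np
        import pandas as pd
        from statsmodels.multivariate.manova import MANOVA

        rng = np.random.default_rng(5)
        n_per_group = 40
        groups = np.repeat(["A", "B", "C"], n_per_group)
        covariate = rng.normal(size=groups.size)
        shift = pd.Series(groups).map({"A": 0.0, "B": 0.5, "C": 1.0}).to_numpy()

        df = pd.DataFrame({
            "group": groups,
            "x": covariate,
            "y1": 1.0 * covariate + shift + rng.normal(size=groups.size),
            "y2": 0.5 * covariate + 0.8 * shift + rng.normal(size=groups.size),
        })

        # Group effect on (y1, y2) adjusted for the covariate x.
        fit = MANOVA.from_formula("y1 + y2 ~ C(group) + x", data=df)
        print(fit.mv_test())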

  15. Geospatial clustering in sugar-sweetened beverage consumption among Boston youth.

    PubMed

    Tamura, Kosuke; Duncan, Dustin T; Athens, Jessica K; Bragg, Marie A; Rienti, Michael; Aldstadt, Jared; Scott, Marc A; Elbel, Brian

    2017-09-01

    The objective was to detect geospatial clustering of sugar-sweetened beverage (SSB) intake in Boston adolescents (age = 16.3 ± 1.3 years [range: 13-19]; female = 56.1%; White = 10.4%, Black = 42.6%, Hispanics = 32.4%, and others = 14.6%) using spatial scan statistics. We used data on self-reported SSB intake from the 2008 Boston Youth Survey Geospatial Dataset (n = 1292). Two binary variables were created: consumption of SSB (never versus any) on (1) soda and (2) other sugary drinks (e.g., lemonade). A Bernoulli spatial scan statistic was used to identify geospatial clusters of soda and other sugary drinks in unadjusted models and models adjusted for age, gender, and race/ethnicity. There was no statistically significant clustering of soda consumption in the unadjusted model. In contrast, a cluster of non-soda SSB consumption emerged in the middle of Boston (relative risk = 1.20, p = .005), indicating that adolescents within the cluster had a 20% higher probability of reporting non-soda SSB intake than outside the cluster. The cluster was no longer significant in the adjusted model, suggesting spatial variation in non-soda SSB drink intake correlates with the geographic distribution of students by race/ethnicity, age, and gender.
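
    The Bernoulli spatial scan idea used above can be illustrated with a small, self-contained sketch: circular candidate windows, a Kulldorff-style Bernoulli likelihood ratio, and a Monte Carlo (permutation) p-value. The coordinates, case labels, window radii and hotspot below are synthetic stand-ins, not the Boston Youth Survey data or the SaTScan-style software actually used.

        # Illustrative Bernoulli spatial scan: circular windows, likelihood ratio,
        # Monte Carlo p-value.  All data are synthetic.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 800
        xy = rng.uniform(0, 10, size=(n, 2))
        in_hotspot = np.linalg.norm(xy - [5.0, 5.0], axis=1) < 1.5
        cases = rng.random(n) < np.where(in_hotspot, 0.35, 0.20)   # binary SSB intake

        def xlogx(x):
            return np.where(x > 0, x * np.log(np.clip(x, 1e-12, None)), 0.0)

        def scan_llr(cases, inside):
            c, n_in = cases[inside].sum(), inside.sum()
            C, N = cases.sum(), cases.size
            if n_in in (0, N) or c / n_in <= (C - c) / (N - n_in):   # high-rate clusters only
                return 0.0
            ll_alt = (xlogx(c) + xlogx(n_in - c) - xlogx(n_in)
                      + xlogx(C - c) + xlogx(N - n_in - (C - c)) - xlogx(N - n_in))
            ll_null = xlogx(C) + xlogx(N - C) - xlogx(N)
            return float(ll_alt - ll_null)

        def best_cluster(cases):
            best_llr, best_win = 0.0, None
            for centre in xy[::10]:                          # coarse grid of candidate centres
                for radius in (0.5, 1.0, 1.5, 2.0):
                    inside = np.linalg.norm(xy - centre, axis=1) < radius
                    llr = scan_llr(cases, inside)
                    if llr > best_llr:
                        best_llr, best_win = llr, (tuple(np.round(centre, 2)), radius)
            return best_llr, best_win

        obs_llr, cluster = best_cluster(cases)
        null = [best_cluster(rng.permutation(cases))[0] for _ in range(99)]
        p_value = (1 + sum(v >= obs_llr for v in null)) / 100.0
        print("best cluster:", cluster, "LLR:", round(obs_llr, 2), "p:", p_value)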

  16. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.
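
    The packet-dropout model used above, a sequence of independent Bernoulli random variables deciding whether each transmitted packet arrives, is easy to simulate. The toy loop below is only an illustration of that dropout mechanism in a feedback setting, not the paper's coding-decoding protocol; the dropout probability, gain and plant are assumed values.

        # Minimal illustration of i.i.d. Bernoulli packet dropouts in a feedback loop:
        # the controller holds its last received measurement whenever a packet is lost.
        import numpy as np

        rng = np.random.default_rng(7)
        p_drop = 0.3          # packet dropout probability (assumed)
        gain = 0.5            # feedback gain (assumed)
        x, x_received = 5.0, 0.0
        trajectory = []

        for t in range(50):
            delivered = rng.random() >= p_drop          # Bernoulli packet arrival
            if delivered:
                x_received = x                          # controller sees a fresh measurement
            u = -gain * x_received                      # control based on last received value
            x = 0.9 * x + u + 0.05 * rng.normal()       # plant update with small noise
            trajectory.append(x)

        print("final state (should be near 0):", round(trajectory[-1], 3))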

  17. Toward Higher-Order Mass Detection: Influence of an Adsorbate's Rotational Inertia and Eccentricity on the Resonant Response of a Bernoulli-Euler Cantilever Beam.

    PubMed

    Heinrich, Stephen M; Dufour, Isabelle

    2015-11-19

    In this paper a new theoretical model is derived, the results of which permit a detailed examination of how the resonant characteristics of a cantilever are influenced by a particle (adsorbate) attached at an arbitrary position along the beam's length. Unlike most previous work, the particle need not be small in mass or dimension relative to the beam, and the adsorbate's geometric characteristics are incorporated into the model via its rotational inertia and eccentricity relative to the beam axis. For the special case in which the adsorbate's (translational) mass is indeed small, an analytical solution is obtained for the particle-induced resonant frequency shift of an arbitrary flexural mode, including the effects of rotational inertia and eccentricity. This solution is shown to possess the exact first-order behavior in the normalized particle mass and represents a generalization of analytical solutions derived by others in earlier studies. The results suggest the potential for "higher-order" nanobeam-based mass detection methods by which the multi-mode frequency response reflects not only the adsorbate's mass but also important geometric data related to its size, shape, or orientation (i.e., the mass distribution), thus resulting in more highly discriminatory techniques for discrete-mass sensing.

  18. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.
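
    A step-hold update pattern of the kind described can be mimicked with a very simple rule: hold the current probability estimate and revise it only when a recent window of Bernoulli outcomes becomes statistically inconsistent with it. The sketch below is an illustration of that update pattern under a slowly varying sinusoidal hidden parameter; the window length, test threshold and decision rule are assumptions, not the authors' published theory.

        # Illustration of a step-hold update pattern under a smoothly varying hidden
        # Bernoulli parameter; not the authors' model.
        import numpy as np
        from scipy.stats import binomtest

        rng = np.random.default_rng(8)

        # Hidden parameter varies slowly and smoothly (sinusoidal).
        t = np.arange(2000)
        p_hidden = 0.5 + 0.3 * np.sin(2 * np.pi * t / 500)
        samples = rng.random(t.size) < p_hidden

        estimate, window, percept = 0.5, [], []
        for outcome in samples:
            window.append(bool(outcome))
            if len(window) > 100:
                window.pop(0)
            # Re-estimate only when the window is inconsistent with the held estimate.
            if len(window) >= 20:
                test = binomtest(sum(window), len(window), estimate)
                if test.pvalue < 0.01:
                    estimate = sum(window) / len(window)    # step change of the percept
                    window = []                             # start a fresh run
            percept.append(estimate)

        print("number of distinct held values:", len(set(percept)))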

  19. Hyperaccretion during tidal disruption events: weakly bound debris envelopes and jets

    NASA Astrophysics Data System (ADS)

    Coughlin, Eric; Begelman, M. C.

    2014-01-01

    After the destruction of the star during a tidal disruption event (TDE), the cataclysmic encounter between a star and the supermassive black hole (SMBH) of a galaxy, approximately half of the original stellar debris falls back onto the hole at a rate that can initially exceed the Eddington limit by orders of magnitude. We argue that the angular momentum of this matter is too low to allow it to attain a disk-like configuration with accretion proceeding at a mildly super-Eddington rate, the excess energy being carried away by a combination of radiative losses and radially distributed winds. Instead, we propose that the in-falling gas traps accretion energy until it inflates into a weakly-bound, quasi-spherical structure with gas extending nearly to the poles. We study the structure and evolution of such “Zero-Bernoulli accretion” flows (ZEBRAs) as a model for the super- Eddington phase of TDEs. We argue that such flows cannot stop extremely super-Eddington accretion from occurring, and that once the envelope is maximally inflated, any excess accretion energy escapes through the poles in the form of powerful jets. Similar models, including self-gravity, could be applicable to gamma-ray bursts from collapsars and the growth of supermassive black hole seeds inside quasi-stars.

  20. Nonlinear finite amplitude vibrations of sharp-edged beams in viscous fluids

    NASA Astrophysics Data System (ADS)

    Aureli, M.; Basaran, M. E.; Porfiri, M.

    2012-03-01

    In this paper, we study flexural vibrations of a cantilever beam with thin rectangular cross section submerged in a quiescent viscous fluid and undergoing oscillations whose amplitude is comparable with its width. The structure is modeled using Euler-Bernoulli beam theory and the distributed hydrodynamic loading is described by a single complex-valued hydrodynamic function which accounts for added mass and fluid damping experienced by the structure. We perform a parametric 2D computational fluid dynamics analysis of an oscillating rigid lamina, representative of a generic beam cross section, to understand the dependence of the hydrodynamic function on the governing flow parameters. We find that increasing the frequency and amplitude of the vibration elicits vortex shedding and convection phenomena which are, in turn, responsible for nonlinear hydrodynamic damping. We establish a manageable nonlinear correction to the classical hydrodynamic function developed for small amplitude vibration and we derive a computationally efficient reduced order modal model for the beam nonlinear oscillations. Numerical and theoretical results are validated by comparison with ad hoc designed experiments on tapered beams and multimodal vibrations and with data available in the literature. Findings from this work are expected to find applications in the design of slender structures of interest in marine applications, such as biomimetic propulsion systems and energy harvesting devices.

  1. Design and modeling of magnetically driven electric-field sensor for non-contact DC voltage measurement in electric power systems.

    PubMed

    Wang, Decai; Li, Ping; Wen, Yumei

    2016-10-01

    In this paper, the design and modeling of a magnetically driven electric-field sensor for non-contact DC voltage measurement are presented. The magnetic drive structure of the sensor is composed of a small solenoid and a cantilever beam with a cylindrical magnet mounted on it. The interaction of the magnet and the solenoid provides the magnetic driving force for the sensor. Employing magnetic drive structure brings the benefits of low driving voltage and large vibrating displacement, which consequently results in less interference from the drive signal. In the theoretical analyses, the capacitance calculation model between the wire and the sensing electrode is built. The expression of the magnetic driving force is derived by the method of linear fitting. The dynamical model of the magnetic-driven cantilever beam actuator is built by using Euler-Bernoulli theory and distributed parameter method. Taking advantage of the theoretical model, the output voltage of proposed sensor can be predicted. The experimental results are in good agreement with the theoretical results. The proposed sensor shows a favorable linear response characteristic. The proposed sensor has a measuring sensitivity of 9.87 μV/(V/m) at an excitation current of 37.5 mA. The electric field intensity resolution can reach 10.13 V/m.

  2. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
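
    The conditional construction described here, that given the total N = A + B the count A is Binomial(N, f) under the null with f the exposure fraction, can be implemented in a few lines with scipy's exact binomial test in place of the paper's lookup tables. The example counts and exposures are illustrative.

        # Minimal implementation of the described test: under the null, A conditioned on
        # N = A + B is Binomial(N, f) with f = exposure_A / (exposure_A + exposure_B).
        from scipy.stats import binomtest

        def two_sample_poisson_test(count_a, exposure_a, count_b, exposure_b):
            """Exact conditional test that two Poisson processes share a common rate."""
            f = exposure_a / (exposure_a + exposure_b)
            return binomtest(count_a, count_a + count_b, f, alternative="two-sided")

        # Example: 18 events in 2.0 hours versus 6 events in 1.5 hours.
        result = two_sample_poisson_test(18, 2.0, 6, 1.5)
        print("p-value:", round(result.pvalue, 4))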

  3. Fabrication of a self-sensing electroactive polymer bimorph actuator based on polyvinylidene fluoride and its electrostrictive terpolymer

    NASA Astrophysics Data System (ADS)

    Engel, Leeya; Van Volkinburg, Kyle R.; Ben-David, Moti; Washington, Gregory N.; Krylov, Slava; Shacham-Diamand, Yosi

    2016-04-01

    In this paper, we report on the fabrication of a self-sensing electroactive polymer cantilevered bimorph beam actuator and its frequency response. Tip deflections of the beam, induced by applying an AC signal across ferroelectric relaxor polyvinylidene fluoride-trifluoroethylene chlorotrifluoroethylene (P(VDF-TrFE-CTFE)), reached a magnitude of 350 μm under a field of ~55 MV/m and were recorded externally using a laser Doppler vibrometer (LDV). Deflections were determined simultaneously by applying a sensing model to the voltage measured across the bimorph's integrated layer of piezoelectric polymer polyvinylidene fluoride (PVDF). The sensing model treats the structure as a simple Euler-Bernoulli cantilevered beam with two distributed active elements represented through the use of generalized functions and offers a method through which real time tip deflection can be measured without the need for external visualization. When not being used as a sensing element, the PVDF layer can provide an additional means for actuation of the beam via the converse piezoelectric effect, resulting in bidirectional control of the beam's deflections. Integration of flexible sensing elements together with modeling of the electroactive polymer beam can benefit the developing field of polymer microactuators which have applications in soft robotics as "smart" prosthetics/implants, haptic displays, tools for less invasive surgery, and sensing.

  4. Toward Higher-Order Mass Detection: Influence of an Adsorbate’s Rotational Inertia and Eccentricity on the Resonant Response of a Bernoulli-Euler Cantilever Beam

    PubMed Central

    Heinrich, Stephen M.; Dufour, Isabelle

    2015-01-01

    In this paper a new theoretical model is derived, the results of which permit a detailed examination of how the resonant characteristics of a cantilever are influenced by a particle (adsorbate) attached at an arbitrary position along the beam’s length. Unlike most previous work, the particle need not be small in mass or dimension relative to the beam, and the adsorbate’s geometric characteristics are incorporated into the model via its rotational inertia and eccentricity relative to the beam axis. For the special case in which the adsorbate’s (translational) mass is indeed small, an analytical solution is obtained for the particle-induced resonant frequency shift of an arbitrary flexural mode, including the effects of rotational inertia and eccentricity. This solution is shown to possess the exact first-order behavior in the normalized particle mass and represents a generalization of analytical solutions derived by others in earlier studies. The results suggest the potential for “higher-order” nanobeam-based mass detection methods by which the multi-mode frequency response reflects not only the adsorbate’s mass but also important geometric data related to its size, shape, or orientation (i.e., the mass distribution), thus resulting in more highly discriminatory techniques for discrete-mass sensing. PMID:26610493

  5. Multivariate two-part statistics for analysis of correlated mass spectrometry data from multiple biological specimens.

    PubMed

    Taylor, Sandra L; Ruhaak, L Renee; Weiss, Robert H; Kelly, Karen; Kim, Kyoungmi

    2017-01-01

    High-throughput mass spectrometry (MS) is now being used to profile small molecular compounds across multiple biological sample types from the same subjects with the goal of leveraging information across biospecimens. Multivariate statistical methods that combine information from all biospecimens could be more powerful than the usual univariate analyses. However, missing values are common in MS data and imputation can impact between-biospecimen correlation and multivariate analysis results. We propose two multivariate two-part statistics that accommodate missing values and combine data from all biospecimens to identify differentially regulated compounds. Statistical significance is determined using a multivariate permutation null distribution. Relative to univariate tests, the multivariate procedures detected more significant compounds in three biological datasets. In a simulation study, we showed that multi-biospecimen testing procedures were more powerful than single-biospecimen methods when compounds are differentially regulated in multiple biospecimens, but univariate methods can be more powerful if compounds are differentially regulated in only one biospecimen. We provide R functions to implement and illustrate our method as supplementary information. Contact: sltaylor@ucdavis.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
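
    A hedged Python sketch (the authors provide R functions; this is not their implementation) can illustrate the two main ingredients: a two-part statistic per compound, combining a detection-rate component with an abundance component among detected values, summed across biospecimens, with significance taken from a permutation null distribution. The synthetic serum/urine data, missingness rates and statistic combination below are assumptions.

        # Hedged sketch of a multivariate two-part permutation test across biospecimens.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        n_per_group = 30

        def two_part_stat(values, groups):
            """Two-part statistic: detection-rate chi-square plus squared Welch t."""
            detected = ~np.isnan(values)
            table = np.array([[np.sum(detected & (groups == g)),
                               np.sum(~detected & (groups == g))] for g in (0, 1)])
            chi2 = 0.0
            if table[:, 0].sum() > 0 and table[:, 1].sum() > 0:
                chi2 = stats.chi2_contingency(table, correction=False)[0]
            a = values[detected & (groups == 0)]
            b = values[detected & (groups == 1)]
            tval = stats.ttest_ind(a, b, equal_var=False)[0] if min(len(a), len(b)) > 1 else 0.0
            return chi2 + tval ** 2

        def combined_stat(specimens, groups):
            return sum(two_part_stat(v, groups) for v in specimens)

        groups = np.repeat([0, 1], n_per_group)
        serum = np.where(rng.random(2 * n_per_group) < 0.2, np.nan,
                         rng.normal(loc=groups * 0.8, scale=1.0))
        urine = np.where(rng.random(2 * n_per_group) < 0.3, np.nan,
                         rng.normal(loc=groups * 0.5, scale=1.0))

        observed = combined_stat([serum, urine], groups)
        null = [combined_stat([serum, urine], rng.permutation(groups)) for _ in range(999)]
        p_value = (1 + sum(v >= observed for v in null)) / 1000.0
        print("combined statistic:", round(observed, 2), "permutation p-value:", p_value)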

  6. Beckham as physicist?

    NASA Astrophysics Data System (ADS)

    Ireson, Gren

    2001-01-01

    It is hard to think of a medium that does not use football or soccer as a means of promotion. It is also hard to think of a student who has not heard of David Beckham. If football captures the interest of students it can be used to teach physics; in this case a Beckham free-kick can be used to introduce concepts such as drag, the Bernoulli principle, Reynolds number and the Magnus effect, by asking the simple question: How does he curve the ball so much? Much basic mechanics can also be introduced along the way.

  7. Experimental Verification and Revision of the Venting Rate Model of the Hazard Assessment Computer System and the Vulnerability Model.

    DTIC Science & Technology

    1980-11-01

    discharge of a nonvolatile liquid can be obtained by standard Bernoulli-type relations; it is: W_Lo = C_D A_Lo ρ_L (2[(P_T - P_∞)/ρ_L + g Z_Lh])^(1/2). In all...cargo outflow momentum is low (i.e., when the net positive pressure difference across the puncture is near zero). The tests showed that the water..."Benedict-Webb-Rubin Equation of State for Methane at Cryogenic Conditions," Advances in Cryogenic Engineering, 14, pp. 49-54, Plenum Press, 1969

  8. A Method for Predicting Three-Degree-of-Freedom Store Separation Trajectories at Speeds up to the Critical Speed

    DTIC Science & Technology

    1971-07-01

    the store length. If the potential is constructed on this basis and the body pressure coefficients determined from the unsteady Bernoulli equation...term has a clear momentum interpretation. The second term is a buoyant force, as will now be shown. For irrotational plane flow, we have (1-17)...II-4. EQUATIONS FOR VORTEX STRENGTHS In writing the equations for the vortex strengths, we start first with equation (II-5) for the
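
    For reference, the unsteady Bernoulli equation invoked for the body pressure coefficients has, in its incompressible irrotational form with gravity neglected (velocity potential phi, free-stream speed U_infinity), the standard statement

      \[
      \frac{\partial \phi}{\partial t} + \frac{p}{\rho} + \frac{1}{2}\lvert\nabla\phi\rvert^{2}
      = \frac{p_\infty}{\rho} + \frac{1}{2}U_\infty^{2},
      \qquad
      C_p = \frac{p - p_\infty}{\tfrac{1}{2}\rho U_\infty^{2}}
      = 1 - \frac{\lvert\nabla\phi\rvert^{2}}{U_\infty^{2}}
        - \frac{2}{U_\infty^{2}}\frac{\partial \phi}{\partial t}.
      \]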

  9. Design Manual for Microgravity Two-Phase Flow and Heat Transfer

    DTIC Science & Technology

    1989-10-01

    simultaneous solution of two equations. One equation is a dimensionless two-phase momentum equation for a separated flow and the other is a dimensionless...created by the flow of the gas over a wave (the Bernoulli effect) is sufficient to lift the waves in a stratified flow to the top of the pipe. A... momentum equation to determine a dimensionless parameter related to the liquid flow rate (equation 1-16).
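
    The lifting mechanism described (Bernoulli suction over a wave crest overcoming gravity) is commonly written, in the one-g Taitel-Dukler form, as

      \[
      U_G \;\gtrsim\; \left(1 - \frac{h_L}{D}\right)
      \left[\frac{(\rho_L - \rho_G)\, g\, A_G}{\rho_G\, \mathrm{d}A_L/\mathrm{d}h_L}\right]^{1/2},
      \]

    where h_L is the liquid level, D the pipe diameter, and A_L, A_G the liquid and gas cross-sections. This is a standard reference expression and not necessarily the exact dimensionless relation excerpted above.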

  10. Structure of the oligomers obtained by enzymatic hydrolysis of the glucomannan produced by the plant Amorphophallus konjac.

    PubMed

    Cescutti, Paola; Campa, Cristiana; Delben, Franco; Rizzo, Roberto

    2002-11-29

    Dimers and trimers obtained by enzymatic hydrolysis of the glucomannan produced by the plant Amorphophallus konjac were analysed in order to obtain information on the saccharidic sequences present in the polymer. The polysaccharide was digested with cellulase and beta-mannanase and the oligomers produced were isolated by means of size-exclusion chromatography. They were structurally characterised using electrospray mass spectrometry, capillary electrophoresis, and NMR. The investigation revealed that many possible sequences were present in the polymer backbone suggesting a Bernoulli-type chain.
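
    A Bernoulli-type chain here means that each backbone residue is glucose or mannose independently with fixed probabilities, so dimer frequencies simply factorize as products of residue probabilities. The sketch below checks that factorization by simulation; the mannose fraction used is an illustrative assumption, not a value from the study.

      # Bernoulli-type chain: residues drawn independently, so the frequency of a
      # dimer XY equals p(X)*p(Y). The Man fraction below is illustrative.
      import numpy as np

      rng = np.random.default_rng(1)
      p_man = 0.62                              # assumed mannose fraction
      chain = rng.random(200_000) < p_man       # True = Man, False = Glc

      left, right = chain[:-1], chain[1:]
      for x, xname in ((True, "Man"), (False, "Glc")):
          for y, yname in ((True, "Man"), (False, "Glc")):
              observed = np.mean((left == x) & (right == y))
              expected = (p_man if x else 1 - p_man) * (p_man if y else 1 - p_man)
              print(f"{xname}-{yname}: observed {observed:.4f}, "
                    f"Bernoulli prediction {expected:.4f}")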

  11. Melde's Experiment on a Vibrating Liquid Foam Microchannel

    NASA Astrophysics Data System (ADS)

    Cohen, Alexandre; Fraysse, Nathalie; Raufaste, Christophe

    2017-12-01

    We subject a single Plateau border channel to a transverse harmonic excitation, in an experiment reminiscent of the historical one by Melde on vibrating strings, to study foam stability and wave properties. At low driving amplitudes, the liquid string exhibits regular oscillations. At large ones, a nonlinear regime appears and the acoustic radiation splits the channel into two zones of different cross section area, vibration amplitude, and phase difference with the neighboring soap films. The channel experiences an inertial dilatancy that is accounted for by a new Bernoulli-like relation.
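
    For comparison, in Melde's original string experiment a transverse drive of frequency f excites a resonance whenever f matches a standing-wave mode of the string,

      \[
      f_n = \frac{n}{2L}\sqrt{\frac{T}{\mu}}, \qquad n = 1, 2, \ldots,
      \]

    with L the string length, T its tension and mu its linear mass density; in the foam experiment the Plateau border plays the role of the string, with the soap films' surface tension and the liquid channel's mass per unit length setting the analogous quantities.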

  12. Melde's Experiment on a Vibrating Liquid Foam Microchannel.

    PubMed

    Cohen, Alexandre; Fraysse, Nathalie; Raufaste, Christophe

    2017-12-08

    We subject a single Plateau border channel to a transverse harmonic excitation, in an experiment reminiscent of the historical one by Melde on vibrating strings, to study foam stability and wave properties. At low driving amplitudes, the liquid string exhibits regular oscillations. At large ones, a nonlinear regime appears and the acoustic radiation splits the channel into two zones of different cross section area, vibration amplitude, and phase difference with the neighboring soap films. The channel experiences an inertial dilatancy that is accounted for by a new Bernoulli-like relation.

  13. Summer Study Program in Geophysical Fluid Dynamics - The Influence of Convection on Large-Scale Circulations - 1988

    DTIC Science & Technology

    1989-07-01

    the vector of the body force." ... In the first lecture we define the buoyancy force, develop a simplified...force and l is a unit vector along the motion vector. Integrating Bernoulli's law over a closed loop one gets:...by integrating along the...convection. It is convenient to write these equations as evolution equations for a state vector U(x, z, t) where x is the horizontal coordinate vector

  14. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
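
    In such filters each hypothesised track is carried as a Bernoulli component: an existence probability r together with a state density. The sketch below shows only the simplest per-component bookkeeping (survival prediction and the missed-detection update of r); it is a toy illustration of that idea, not the labeled multi-Bernoulli filter of the paper, and measurement association is omitted.

      # Toy Bernoulli-component bookkeeping for multi-target tracking.
      from dataclasses import dataclass

      @dataclass
      class BernoulliComponent:
          label: tuple      # e.g. (birth_time, index)
          r: float          # probability that the target exists
          state: object     # state density (e.g. a Gaussian); unused here

      def predict_existence(r: float, p_survival: float) -> float:
          """Existence probability after one time step with a survival probability."""
          return p_survival * r

      def update_missed_detection(r: float, p_detect: float) -> float:
          """Bayes update of r when no measurement is associated with the component."""
          return r * (1.0 - p_detect) / (1.0 - r * p_detect)

      comp = BernoulliComponent(label=(0, 1), r=0.9, state=None)
      r_pred = predict_existence(comp.r, p_survival=0.99)
      print(update_missed_detection(r_pred, p_detect=0.95))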

  15. Multivariate frequency domain analysis of protein dynamics

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Fuchigami, Sotaro; Kidera, Akinori

    2009-03-01

    Multivariate frequency domain analysis (MFDA) is proposed to characterize collective vibrational dynamics of protein obtained by a molecular dynamics (MD) simulation. MFDA performs principal component analysis (PCA) for a bandpass filtered multivariate time series using the multitaper method of spectral estimation. By applying MFDA to MD trajectories of bovine pancreatic trypsin inhibitor, we determined the collective vibrational modes in the frequency domain, which were identified by their vibrational frequencies and eigenvectors. At near zero temperature, the vibrational modes determined by MFDA agreed well with those calculated by normal mode analysis. At 300 K, the vibrational modes exhibited characteristic features that were considerably different from the principal modes of the static distribution given by the standard PCA. The influences of aqueous environments were discussed based on two different sets of vibrational modes, one derived from a MD simulation in water and the other from a simulation in vacuum. Using the varimax rotation, an algorithm of the multivariate statistical analysis, the representative orthogonal set of eigenmodes was determined at each vibrational frequency.
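
    The core of the procedure, bandpass filtering of the multivariate trajectory followed by PCA of the filtered coordinates, can be sketched as below. The sketch substitutes a Butterworth bandpass for the multitaper spectral estimation actually used in MFDA, and the trajectory shapes and frequencies are illustrative.

      # Bandpass-filter a multivariate trajectory around a target frequency band,
      # then run PCA on the filtered coordinates to extract band-limited modes.
      import numpy as np
      from scipy.signal import butter, filtfilt

      def bandpass_pca(traj, fs, f_lo, f_hi, order=4):
          """traj: (n_frames, n_coords); fs: sampling rate. Returns eigenvalues
          and mode vectors of the band-limited, mean-free motion."""
          b, a = butter(order, [f_lo, f_hi], btype="band", fs=fs)
          filtered = filtfilt(b, a, traj - traj.mean(axis=0), axis=0)
          u, s, vt = np.linalg.svd(filtered, full_matrices=False)   # PCA via SVD
          eigvals = s ** 2 / (len(filtered) - 1)
          return eigvals, vt.T

      # Synthetic example: 3 coordinates sharing one oscillation plus noise.
      rng = np.random.default_rng(0)
      t = np.arange(10_000) * 1e-3                    # time in ps, 1 fs sampling
      x = np.sin(2 * np.pi * 3.0 * t)                 # 3 cycles/ps collective motion
      traj = np.outer(x, [1.0, -0.5, 0.2]) + 0.1 * rng.standard_normal((t.size, 3))
      eigvals, modes = bandpass_pca(traj, fs=1000.0, f_lo=2.0, f_hi=4.0)
      print(eigvals)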

  16. Imaging of polysaccharides in the tomato cell wall with Raman microspectroscopy

    PubMed Central

    2014-01-01

    Background The primary cell wall of fruits and vegetables is a structure mainly composed of polysaccharides (pectins, hemicelluloses, cellulose). Polysaccharides are assembled into a network and linked together. It is thought that the proportions of these components in the plant cell wall have an important influence on the mechanical properties of fruits and vegetables. Results In this study the Raman microspectroscopy technique was introduced for visualizing the distribution of polysaccharides in the cell wall of fruit. The methodology of the sample preparation, the measurement using a Raman microscope and the multivariate image analysis are discussed. Single band imaging (for preliminary analysis) and multivariate image analysis methods (principal component analysis and multivariate curve resolution) were used for the identification and localization of the components in the primary cell wall. Conclusions Raman microspectroscopy supported by multivariate image analysis methods is useful for distinguishing cellulose and pectins in the cell wall of tomatoes. It demonstrates how biopolymers can be localized in minimally prepared samples. PMID:24917885
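
    In practice the hyperspectral Raman image is unfolded into a pixels-by-wavenumbers matrix before decomposition. The sketch below uses non-negative matrix factorization as a simple stand-in for the curve-resolution step (the study itself used PCA and MCR), with illustrative array shapes and random data in place of spectra.

      # Unfold a Raman image cube (rows x cols x wavenumbers) into a matrix and
      # decompose it into non-negative "component spectra" and concentration maps.
      import numpy as np
      from sklearn.decomposition import NMF

      rows, cols, n_wn = 64, 64, 400                       # illustrative image size
      cube = np.abs(np.random.default_rng(0).normal(size=(rows, cols, n_wn)))

      X = cube.reshape(rows * cols, n_wn)                  # pixels x wavenumbers
      model = NMF(n_components=3, init="nndsvda", max_iter=500)
      concentrations = model.fit_transform(X)              # pixels x components
      spectra = model.components_                          # components x wavenumbers

      maps = concentrations.reshape(rows, cols, 3)         # per-component images
      print(maps.shape, spectra.shape)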

  17. Fisher information for two gamma frailty bivariate Weibull models.

    PubMed

    Bjarnason, H; Hougaard, P

    2000-03-01

    The asymptotic properties of frailty models for multivariate survival data are not well understood. To study this aspect, the Fisher information is derived in the standard bivariate gamma frailty model, where the survival distribution is of Weibull form conditional on the frailty. For comparison, the Fisher information is also derived in the bivariate gamma frailty model, where the marginal distribution is of Weibull form.
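
    In the conditional-Weibull version referred to first, the two lifetimes share a gamma frailty Z with unit mean and variance theta and have Weibull cumulative hazards given Z (a common shape alpha is assumed here for simplicity); integrating the frailty out gives the familiar joint survivor function

      \[
      S(t_1, t_2 \mid Z) = \exp\{-Z(\lambda_1 t_1^{\alpha} + \lambda_2 t_2^{\alpha})\},
      \qquad Z \sim \mathrm{Gamma}(1/\theta,\, 1/\theta),
      \]
      \[
      S(t_1, t_2) = \bigl[1 + \theta(\lambda_1 t_1^{\alpha} + \lambda_2 t_2^{\alpha})\bigr]^{-1/\theta}.
      \]

    The comparison model in the abstract instead takes the marginal (rather than conditional) distributions to be Weibull.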

  18. Multivariate flood risk assessment: reinsurance perspective

    NASA Astrophysics Data System (ADS)

    Ghizzoni, Tatiana; Ellenrieder, Tobias

    2013-04-01

    For insurance and re-insurance purposes the knowledge of the spatial characteristics of fluvial flooding is fundamental. The probability of simultaneous flooding at different locations during one event and the associated severity and losses have to be estimated in order to assess premiums and for accumulation control (Probable Maximum Losses calculation). Therefore, the identification of a statistical model able to describe the multivariate joint distribution of flood events in multiple locations is necessary. In this context, copulas can be viewed as alternative tools for dealing with multivariate simulations, as they make it possible to formalize the dependence structures of random vectors. An application of copula functions to flood scenario generation is presented for Australia (Queensland, New South Wales and Victoria), where 100,000 possible flood scenarios covering approximately 15,000 years were simulated.
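
    A minimal sketch of this kind of copula-based scenario generation, assuming a Gaussian copula and Gumbel (EV1) marginals for the flood peaks at each gauge; the correlation matrix and marginal parameters below are illustrative placeholders, not values fitted to any catchment or to the study above.

      # Joint flood-peak scenarios at several gauges: Gaussian copula for the
      # spatial dependence, Gumbel marginal distributions for the peaks.
      import numpy as np
      from scipy.stats import norm, gumbel_r

      rng = np.random.default_rng(42)
      corr = np.array([[1.0, 0.7, 0.4],
                       [0.7, 1.0, 0.5],
                       [0.4, 0.5, 1.0]])            # dependence between gauges
      loc = np.array([300.0, 150.0, 80.0])          # Gumbel location (m^3/s)
      scale = np.array([90.0, 40.0, 25.0])          # Gumbel scale (m^3/s)

      n_events = 100_000
      z = rng.multivariate_normal(np.zeros(3), corr, size=n_events)
      u = norm.cdf(z)                               # Gaussian copula: uniform margins
      peaks = gumbel_r.ppf(u, loc=loc, scale=scale) # back-transform to flood peaks

      # Probability that all three gauges exceed their local 1-in-100-year level:
      q100 = gumbel_r.ppf(0.99, loc=loc, scale=scale)
      print(np.mean((peaks > q100).all(axis=1)))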

  19. Surrogacy assessment using principal stratification when surrogate and outcome measures are multivariate normal.

    PubMed

    Conlon, Anna S C; Taylor, Jeremy M G; Elliott, Michael R

    2014-04-01

    In clinical trials, a surrogate outcome variable (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Using the principal surrogacy framework introduced by Frangakis and Rubin (2002. Principal stratification in causal inference. Biometrics 58, 21-29), we consider an approach that has a causal interpretation and develop a Bayesian estimation strategy for surrogate validation when the joint distribution of potential surrogate and outcome measures is multivariate normal. From the joint conditional distribution of the potential outcomes of T, given the potential outcomes of S, we propose surrogacy validation measures from this model. As the model is not fully identifiable from the data, we propose some reasonable prior distributions and assumptions that can be placed on weakly identified parameters to aid in estimation. We explore the relationship between our surrogacy measures and the surrogacy measures proposed by Prentice (1989. Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine 8, 431-440). The method is applied to data from a macular degeneration study and an ovarian cancer study.
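
    The conditional distribution used there is the standard multivariate-normal one: writing S = (S(0), S(1)) and T = (T(0), T(1)) with mean blocks mu_S, mu_T and covariance blocks Sigma_SS, Sigma_ST, Sigma_TT,

      \[
      \begin{pmatrix} S \\ T \end{pmatrix}
      \sim N\!\left(\begin{pmatrix}\mu_S \\ \mu_T\end{pmatrix},
      \begin{pmatrix}\Sigma_{SS} & \Sigma_{ST} \\ \Sigma_{TS} & \Sigma_{TT}\end{pmatrix}\right),
      \qquad
      T \mid S = s \sim N\!\bigl(\mu_T + \Sigma_{TS}\Sigma_{SS}^{-1}(s-\mu_S),\;
      \Sigma_{TT} - \Sigma_{TS}\Sigma_{SS}^{-1}\Sigma_{ST}\bigr),
      \]

    and the surrogacy validation measures are built from this conditional model; the partial non-identifiability mentioned in the abstract arises because the potential outcomes under different treatment assignments are never observed jointly, so the corresponding elements of the covariance matrix require the stated priors.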

  20. Surrogacy assessment using principal stratification when surrogate and outcome measures are multivariate normal

    PubMed Central

    Conlon, Anna S. C.; Taylor, Jeremy M. G.; Elliott, Michael R.

    2014-01-01

    In clinical trials, a surrogate outcome variable (S) can be measured before the outcome of interest (T) and may provide early information regarding the treatment (Z) effect on T. Using the principal surrogacy framework introduced by Frangakis and Rubin (2002. Principal stratification in causal inference. Biometrics 58, 21–29), we consider an approach that has a causal interpretation and develop a Bayesian estimation strategy for surrogate validation when the joint distribution of potential surrogate and outcome measures is multivariate normal. From the joint conditional distribution of the potential outcomes of T, given the potential outcomes of S, we propose surrogacy validation measures from this model. As the model is not fully identifiable from the data, we propose some reasonable prior distributions and assumptions that can be placed on weakly identified parameters to aid in estimation. We explore the relationship between our surrogacy measures and the surrogacy measures proposed by Prentice (1989. Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine 8, 431–440). The method is applied to data from a macular degeneration study and an ovarian cancer study. PMID:24285772
