Sample records for Bernoulli random variables

  1. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    PubMed

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem for networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. The controllers and adaptive updating laws injected at each vertex of the network depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of the distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov function method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, mathematical analysis shows that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of the nonlinearities strongly affect the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  2. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy- and computation-efficient engine: a mixed-mode processor onto which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. It is then shown that, from Bernoulli variables, exponentially distributed random variables and random variables of arbitrary distribution can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach extends to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
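The Bernoulli-to-exponential construction mentioned above can be sketched in software (a minimal illustration, not the FPAA circuit itself): fair Bernoulli draws supply the binary digits of a uniform variate, and the inverse CDF then turns the uniform into an exponential. The function names and the software Bernoulli source are illustrative stand-ins for the hardware comparator.

```python
import math
import random

def bernoulli(p, rng):
    """One Bernoulli(p) draw -- stands in for the FPAA comparator circuit."""
    return 1 if rng.random() < p else 0

def uniform_from_bits(rng, n_bits=32):
    """Build U ~ Uniform(0,1) from fair Bernoulli bits (binary expansion)."""
    u = 0.0
    for k in range(1, n_bits + 1):
        u += bernoulli(0.5, rng) * 2.0 ** -k
    return u

def exponential_from_bernoulli(lam, rng):
    """Exponential(lam) via the inverse CDF applied to the bit-built uniform."""
    u = uniform_from_bits(rng)
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
samples = [exponential_from_bernoulli(2.0, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # should be near 1/lam = 0.5
```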

  3. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    Conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, however, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, removing both the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, this filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. Application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
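The Bernoulli delay model in the first step can be sketched as follows (a minimal one-step illustration with hypothetical names; the paper's filter handles arbitrary delay steps):

```python
import random

def delayed_measurements(y, p_delay, rng):
    """One-step randomly delayed channel: with probability p_delay the
    Bernoulli indicator is 1 and the previous measurement is received
    instead of the current one (the first sample is never delayed)."""
    z = []
    for k, yk in enumerate(y):
        beta = 1 if (k > 0 and rng.random() < p_delay) else 0
        z.append(y[k - 1] if beta else yk)
    return z

rng = random.Random(1)
y = list(range(10))                 # true measurement sequence 0..9
z = delayed_measurements(y, 0.3, rng)
print(z)                            # each z[k] equals y[k] or y[k-1]
```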

  4. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
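The two sampling schemes compared in the abstract can be sketched directly (illustrative code only; the mixture representation over the random sampling probability Y is the paper's result and is not reproduced here):

```python
import random

def bernoulli_sample(tips, y, rng):
    """Keep each of the N tips independently with probability y."""
    return [t for t in tips if rng.random() < y]

def k_sample(tips, k, rng):
    """Select exactly k of the N tips uniformly at random."""
    return sorted(rng.sample(tips, k))

tips = list(range(20))                          # the N = 20 tips alive at time T
kept = bernoulli_sample(tips, 0.5, random.Random(42))
chosen = k_sample(tips, 5, random.Random(42))
print(len(kept), chosen)
```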

  5. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by a sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely Theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. As consequences of the inequality, we get: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s., and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and examine its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.

  6. Augmented l1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm. Revision 1

    DTIC Science & Technology

    2012-10-17

    nonzero and sampled from the standard Gaussian distribution (for Figure 2) or the Bernoulli distribution (for Figure 3). Both tests had the same sensing...dual variable y(k) Figure 3: Convergence of primal and dual variables of three algorithms on Bernoulli sparse x0 was the slowest. Besides the obvious...slower convergence than the final stage. Comparing the results of two tests, the convergence was faster on the Bernoulli sparse signal than the

  7. Open quantum random walk in terms of quantum Bernoulli noise

    NASA Astrophysics Data System (ADS)

    Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling

    2018-03-01

    In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.

  8. Optimal positions and parameters of translational and rotational mass dampers in beams subjected to random excitation

    NASA Astrophysics Data System (ADS)

    Łatas, Waldemar

    2018-01-01

    The problem of vibrations of a beam with an attached system of translational and rotational dynamic mass dampers, subjected to random excitations with peaked power spectral densities, is presented in this paper. The Euler-Bernoulli beam model is applied, and the Galerkin method and the Laplace time transform are used to solve the equation of motion. The obtained transfer functions allow the power spectral densities of the beam deflection and other dependent variables to be determined. Numerical examples present simple optimization problems for the mass damper parameters, with local and global objective functions.

  9. Finite-time stability of neutral-type neural networks with random time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

    This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable. The result can be extended to analysis and design for neutral-type neural networks with random time-varying delays. We construct a suitable Lyapunov-Krasovskii functional and establish a set of sufficient conditions, in the form of linear matrix inequalities, that guarantee the finite-time stability of the system concerned. The proposed conditions are derived by employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, and two numerical examples are presented to show the effectiveness of the developed techniques.

  10. Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.

    2008-11-06

    This paper presents an approximation to the nonlinear least-squares estimation problem for discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate whether the real observation arrives on time or is delayed, in which case the available measurement used to estimate the signal is not up to date. Assuming that the state-space model generating the signal is unknown and that only the covariance functions of the processes involved in the observation equation are available, a filtering algorithm based on linear approximations of the real observations is proposed.

  11. Modelling of Safety Instrumented Systems by using Bernoulli trials: towards the notion of odds on for SIS failures analysis

    NASA Astrophysics Data System (ADS)

    Cauffriez, Laurent

    2017-01-01

    This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Indeed, the fact that a SIS is tested periodically motivates applying Bernoulli trials to characterize its random failure process and thus to verify whether the experimentally obtained PFD (Probability of Failing Dangerously) agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to easily determine the ratio between outcomes with success (failure of the SIS) and outcomes without success (no failure of the SIS), and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared with Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by complementing the notion of PFD.

  12. Estimation in Linear Systems Featuring Correlated Uncertain Observations Coming from Multiple Sensors

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2009-08-01

    In this paper, the state least-squares linear estimation problem from correlated uncertain observations coming from multiple sensors is addressed. It is assumed that, at each sensor, the state is measured in the presence of additive white noise and that the uncertainty in the observations is characterized by a set of Bernoulli random variables which are only correlated at consecutive time instants. Assuming that the statistical properties of such variables are not necessarily the same for all the sensors, a recursive filtering algorithm is proposed, and the performance of the estimators is illustrated by a numerical simulation example wherein a signal is estimated from correlated uncertain observations coming from two sensors with different uncertainty characteristics.

  13. Poisson and negative binomial item count techniques for surveys with sensitive question.

    PubMed

    Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin

    2017-04-01

    Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely, the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.

  14. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

    Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  15. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in the literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Track-before-detect labeled multi-Bernoulli particle filter with label switching

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, Angel F.

    2016-10-01

    This paper presents a multitarget tracking particle filter (PF) for general track-before-detect measurement models. The PF is presented in the random finite set framework and uses a labelled multi-Bernoulli approximation. We also present a label switching improvement algorithm based on Markov chain Monte Carlo that is expected to increase filter performance if targets get in close proximity for a sufficiently long time. The PF is tested in two challenging numerical examples.

  17. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta‐analysis and group level studies

    PubMed Central

    Bakbergenuly, Ilyas; Morgenthaler, Stephan

    2016-01-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
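The bias discussed here stems from applying a nonlinear transform to a random estimate (Jensen's inequality): the mean of the transformed values differs from the transform of the mean. A toy sketch with illustrative numbers, not the paper's simulation setup or its bias-correction:

```python
import math

def arcsine_transform(p_hat):
    """Variance-stabilizing arcsine transform of an estimated proportion."""
    return math.asin(math.sqrt(p_hat))

def log_odds(p_hat):
    """Log-odds (logit) transform of an estimated proportion."""
    return math.log(p_hat / (1.0 - p_hat))

# Jensen-style bias illustration: a cluster-correlated p_hat that is
# 0.1 or 0.5 with equal probability (values chosen for illustration).
vals = [0.1, 0.5]
mean_of_transform = sum(arcsine_transform(v) for v in vals) / 2
transform_of_mean = arcsine_transform(sum(vals) / 2)
print(mean_of_transform - transform_of_mean)  # nonzero: the transform is biased
```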

  18. Pseudorandom number generation using chaotic true orbits of the Bernoulli map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, Asaki; Yamaguchi, Akihiro

    We devise a pseudorandom number generator that exactly computes chaotic true orbits of the Bernoulli map on quadratic algebraic integers. Moreover, we describe a way to select the initial points (seeds) for generating multiple pseudorandom binary sequences. This selection method distributes the initial points almost uniformly (equidistantly) in the unit interval, and latter parts of the generated sequences are guaranteed not to coincide. We also demonstrate through statistical testing that the generated sequences possess good randomness properties.
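The Bernoulli (doubling) map at the core of the generator can be iterated exactly in software. The sketch below uses exact rational arithmetic as a stand-in; note that rational seeds give eventually periodic orbits, which is precisely why the authors work over quadratic algebraic integers instead:

```python
from fractions import Fraction

def bernoulli_map_bits(x0, n):
    """Iterate the Bernoulli (doubling) map x -> 2x mod 1 exactly and emit
    one bit per step (1 iff x >= 1/2).  Exact rationals stand in for the
    paper's algebraic-integer true orbits; rational orbits are eventually
    periodic, so this is only an illustration of the map itself."""
    x, bits = Fraction(x0), []
    for _ in range(n):
        bits.append(1 if x >= Fraction(1, 2) else 0)
        x = (2 * x) % 1
    return bits

print(bernoulli_map_bits(Fraction(1, 3), 8))  # → [0, 1, 0, 1, 0, 1, 0, 1]
```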

  19. Beyond Bernoulli

    PubMed Central

    Donati, Fabrizio; Myerson, Saul; Bissell, Malenka M.; Smith, Nicolas P.; Neubauer, Stefan; Monaghan, Mark J.; Nordsletten, David A.

    2017-01-01

    Background: Transvalvular peak pressure drops are routinely assessed noninvasively by echocardiography using the Bernoulli principle. However, the Bernoulli principle relies on several approximations that may not be appropriate, including that the majority of the pressure drop is due to the spatial acceleration of the blood flow, and that the ejection jet is a single streamline (single peak velocity value). Methods and Results: We assessed the accuracy of the Bernoulli principle in estimating the peak pressure drop at the aortic valve using 3-dimensional cardiovascular magnetic resonance flow data in 32 subjects. Reference pressure drops were computed from the flow field, accounting for the principles of physics (i.e., the Navier-Stokes equations). Analysis of the pressure components confirmed that the spatial acceleration of the blood jet through the valve is most significant (accounting for 99% of the total drop in stenotic subjects). However, the Bernoulli formulation demonstrated a consistent overestimation of the transvalvular pressure (average of 54%, range 5%-136%) resulting from the use of a single peak velocity value, which neglects the velocity distribution across the aortic valve plane. This assumption was a source of uncontrolled variability. Conclusions: The application of the Bernoulli formulation results in a clinically significant overestimation of peak pressure drops because of the approximation of blood flow as a single streamline. A corrected formulation that accounts for the cross-sectional profile of the blood flow is proposed and adapted to both cardiovascular magnetic resonance and echocardiographic data. PMID:28093412
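The clinical formulation under scrutiny is the simplified Bernoulli relation, ΔP ≈ 4v² (v in m/s, ΔP in mmHg). A toy sketch of why a single peak velocity overestimates relative to the velocity distribution across the valve plane (the plain averaging below is illustrative, not the paper's corrected formulation, and the velocities are invented):

```python
def simplified_bernoulli_mmhg(v):
    """Clinical simplified Bernoulli formula: dP ~ 4 * v**2 (v in m/s, dP in mmHg)."""
    return 4.0 * v ** 2

# Toy jet cross-section: only part of the jet carries the peak velocity,
# so the single peak value overstates the effective pressure drop.
velocities = [4.0, 3.5, 3.0, 2.5]            # m/s across the valve plane (illustrative)
peak_estimate = simplified_bernoulli_mmhg(max(velocities))
profile_estimate = sum(simplified_bernoulli_mmhg(v) for v in velocities) / len(velocities)
print(peak_estimate, profile_estimate)        # the peak-based value is the larger
```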

  20. H∞ control for uncertain linear system over networks with Bernoulli data dropout and actuator saturation.

    PubMed

    Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping

    2018-03-01

    This paper investigates the H∞ control problem for uncertain linear systems over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli-distributed white sequence with a known conditional probability distribution, and the actuator saturation is confined to a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state-feedback-based H∞ controller and the observer-based H∞ controller are proposed in the form of non-convex matrix inequalities that take random data dropout and actuator saturation into consideration simultaneously, and the non-convex feasibility problem is solved by applying the cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed design techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  21. Open-closed-loop iterative learning control for a class of nonlinear systems with random data dropouts

    NASA Astrophysics Data System (ADS)

    Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, YH

    2018-05-01

    In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented over a networked control system (NCS), where only the off-line data is transmitted over the network while the real-time data is delivered point-to-point. Thus, there are two controllers rather than one in the control system, which makes better use of the stored and current information and thereby improves on the performance achieved by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropout occurs randomly and is modeled by a binary Bernoulli random variable. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, simulation results verify the effectiveness of the proposed method.

  22. Perception of Randomness: On the Time of Streaks

    ERIC Educational Resources Information Center

    Sun, Yanlong; Wang, Hongbin

    2010-01-01

    People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the…
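The waiting-time asymmetry this line of work builds on can be computed exactly for a fair coin with Conway's correlation (leading-numbers) formula: patterns that overlap themselves, such as streaks, take longer to first appear than non-overlapping patterns of the same length.

```python
def expected_wait(pattern):
    """Expected number of fair-coin flips until `pattern` (a string of
    '0'/'1') first appears, via the correlation (leading-numbers) formula:
    sum 2**k over every length-k suffix that equals the length-k prefix."""
    n = len(pattern)
    total = 0
    for k in range(1, n + 1):
        if pattern[-k:] == pattern[:k]:   # self-overlap of length k
            total += 2 ** k               # valid for a fair coin only
    return total

print(expected_wait("11"), expected_wait("10"))  # → 6 4: the streak arrives later
```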

  23. Impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via multiple integral approach.

    PubMed

    Chandrasekar, A; Rakkiyappan, R; Cao, Jinde

    2015-10-01

    This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria that are suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time-delay. The considered impulsive effects can be synchronized at partly unknown transition probabilities. In addition, a multiple integral approach is proposed to strengthen the results for Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then expressed as a solvable set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  24. Inference for binomial probability based on dependent Bernoulli random variables with applications to meta-analysis and group level studies.

    PubMed

    Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan

    2016-07-01

    We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  25. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  26. Dynamic probability control limits for risk-adjusted Bernoulli CUSUM charts.

    PubMed

    Zhang, Xiang; Woodall, William H

    2015-11-10

    The risk-adjusted Bernoulli cumulative sum (CUSUM) chart developed by Steiner et al. (2000) is an increasingly popular tool for monitoring clinical and surgical performance. In practice, however, the use of a fixed control limit for the chart leads to a quite variable in-control average run length performance for patient populations with different risk score distributions. To overcome this problem, we determine simulation-based dynamic probability control limits (DPCLs) patient-by-patient for the risk-adjusted Bernoulli CUSUM charts. By maintaining the probability of a false alarm at a constant level conditional on no false alarm for previous observations, our risk-adjusted CUSUM charts with DPCLs have consistent in-control performance at the desired level with approximately geometrically distributed run lengths. Our simulation results demonstrate that our method does not rely on any information or assumptions about the patients' risk distributions. The use of DPCLs for risk-adjusted Bernoulli CUSUM charts allows each chart to be designed for the corresponding particular sequence of patients for a surgeon or hospital. Copyright © 2015 John Wiley & Sons, Ltd.
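The underlying Bernoulli CUSUM recursion can be sketched as follows. The fixed control limit here is exactly what the paper replaces with dynamic, patient-specific limits, and the per-patient risk adjustment of the in-control and out-of-control rates is omitted (all names and numbers are illustrative):

```python
import math

def bernoulli_cusum(outcomes, p0, p1, limit):
    """Upper Bernoulli CUSUM: S_k = max(0, S_{k-1} + W_k), where W_k is the
    log-likelihood ratio of outcome k under p1 (out-of-control) versus p0
    (in-control).  Signals at the first k with S_k >= limit; returns that
    index, or None if the chart never signals."""
    w1 = math.log(p1 / p0)                  # weight for an adverse outcome (1)
    w0 = math.log((1 - p1) / (1 - p0))      # weight for a good outcome (0)
    s = 0.0
    for k, y in enumerate(outcomes):
        s = max(0.0, s + (w1 if y else w0))
        if s >= limit:
            return k
    return None

# Failure rate doubling from 10% to 20%: a run of failures trips the chart.
print(bernoulli_cusum([0, 1, 1, 1, 1, 0, 1], p0=0.1, p1=0.2, limit=2.0))  # → 3
```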

  7. Colonic transit time and pressure based on Bernoulli's principle.

    PubMed

    Uno, Yoshiharu

    2018-01-01

Variations in the caliber of the human large intestinal tract cause changes in the pressure and velocity of its contents, depending on flow volume, gravity, and density, which are all variables in Bernoulli's principle. Therefore, it was hypothesized that constipation and diarrhea can occur due to changes in the colonic transit time (CTT), according to Bernoulli's principle. In addition, it was hypothesized that high-amplitude peristaltic contractions (HAPC), which are considered to be involved in defecation in healthy subjects, occur because of cecum pressure based on Bernoulli's principle. A virtual healthy model (VHM), a virtual constipation model and a virtual diarrhea model were set up. For each model, the CTT was determined according to the length of each part of the colon, and then the velocity was calculated from the cecum inflow volume. In the VHM, the pressure change was calculated and its consistency with HAPC was verified. The CTT changed according to the difference between the cecum inflow volume and the caliber of the intestinal tract, and was inversely proportional to the cecum inflow volume. Compared with the VHM, the CTT was prolonged in the virtual constipation model and shortened in the virtual diarrhea model. The calculated pressure of the VHM and the gradient of the interlocked graph were similar to those of HAPC. The CTT and HAPC can be explained by Bernoulli's principle, and constipation and diarrhea may be fundamentally influenced by flow dynamics.
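The flow relations such a model rests on can be sketched from the continuity equation and Bernoulli's principle, assuming steady, inviscid, horizontal flow and a water-like density for the contents; the example values below are illustrative assumptions, not figures from the paper.

```python
def flow_velocity(q, area):
    """Mean velocity (m/s) of a volumetric flow q (m^3/s) through a
    cross-section of the given area (m^2), by continuity: q = A * v."""
    return q / area

def pressure_drop(q, a1, a2, rho=1000.0):
    """Pressure change p1 - p2 between two calibers, from Bernoulli's
    principle for steady, inviscid, horizontal flow:
    p1 + rho*v1^2/2 = p2 + rho*v2^2/2."""
    v1, v2 = flow_velocity(q, a1), flow_velocity(q, a2)
    return 0.5 * rho * (v2 ** 2 - v1 ** 2)

# A narrowing (a2 < a1) at fixed inflow raises velocity and lowers pressure,
# so p1 - p2 is positive.
drop = pressure_drop(1e-6, 4e-4, 1e-4)
```

The same two equations are all that is needed to see why, at a fixed caliber, transit velocity (and hence CTT) scales directly with the cecum inflow volume.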

  8. Dynamical Localization for Discrete Anderson Dirac Operators

    NASA Astrophysics Data System (ADS)

    Prado, Roberto A.; de Oliveira, César R.; Carvalho, Silas L.

    2017-04-01

We establish dynamical localization for random Dirac operators on the d-dimensional lattice, with d ∈ {1, 2, 3}, in the three usual regimes: large disorder, band edge and 1D. These operators are discrete versions of the continuous Dirac operators and consist of the sum of a discrete free Dirac operator and a random potential. The potential is a diagonal matrix formed by different scalar potentials, which are sequences of independent and identically distributed random variables according to an absolutely continuous probability measure with bounded density and of compact support. We prove the exponential decay of fractional moments of the Green function for such models in each of the above regimes, i.e., (i) throughout the spectrum at large disorder, (ii) for energies near the band edges at arbitrary disorder and (iii) in dimension one, for all energies in the spectrum and arbitrary disorder. Dynamical localization in these regimes follows from the fractional moments method. The result in the one-dimensional regime contrasts with one that was previously obtained for the 1D Dirac model with Bernoulli potential.

  9. Novel approaches to pin cluster synchronization on complex dynamical networks in Lur'e forms

    NASA Astrophysics Data System (ADS)

    Tang, Ze; Park, Ju H.; Feng, Jianwen

    2018-04-01

This paper investigates the cluster synchronization of complex dynamical networks consisting of identical or nonidentical Lur'e systems. Due to the special topology structure of the complex networks and the existence of stochastic perturbations, a kind of randomly occurring pinning controller is designed which not only synchronizes all Lur'e systems in the same cluster but also decreases the negative influence among different clusters. First, based on an extended integral inequality, the convex combination theorem and the S-procedure, conditions for cluster synchronization of identical Lur'e networks are derived in a convex domain. Second, randomly occurring adaptive pinning controllers with two independent Bernoulli stochastic variables are designed, and sufficient conditions are then obtained for cluster synchronization on complex networks consisting of nonidentical Lur'e systems. In addition, suitable control gains for successful cluster synchronization of nonidentical Lur'e networks are acquired by designing some adaptive updating laws. Finally, we present two numerical examples to demonstrate the validity of the control scheme and the theoretical analysis.

  10. On chemical distances and shape theorems in percolation models with long-range correlations

    NASA Astrophysics Data System (ADS)

    Drewitz, Alexander; Ráth, Balázs; Sapozhnikov, Artëm

    2014-08-01

    In this paper, we provide general conditions on a one parameter family of random infinite subsets of {{Z}}^d to contain a unique infinite connected component for which the chemical distances are comparable to the Euclidean distance. In addition, we show that these conditions also imply a shape theorem for the corresponding infinite connected component. By verifying these conditions for specific models, we obtain novel results about the structure of the infinite connected component of the vacant set of random interlacements and the level sets of the Gaussian free field. As a byproduct, we obtain alternative proofs to the corresponding results for random interlacements in the work of Černý and Popov ["On the internal distance in the interlacement set," Electron. J. Probab. 17(29), 1-25 (2012)], and while our main interest is in percolation models with long-range correlations, we also recover results in the spirit of the work of Antal and Pisztora ["On the chemical distance for supercritical Bernoulli percolation," Ann Probab. 24(2), 1036-1048 (1996)] for Bernoulli percolation. Finally, as a corollary, we derive new results about the (chemical) diameter of the largest connected component in the complement of the trace of the random walk on the torus.

  11. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the random input is also studied for the stability of the stochastic process solution.

  12. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by...distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography , we initiate an analysis of cryptographic...Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the

  13. The Modelling of Axially Translating Flexible Beams

    NASA Astrophysics Data System (ADS)

    Theodore, R. J.; Arakeri, J. H.; Ghosal, A.

    1996-04-01

The axially translating flexible beam with a prismatic joint can be modelled by using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, a non-dimensional form of the Euler-Bernoulli beam equation is presented, obtained by using the concept of group velocity, together with the conditions under which separation of variables and the assumed modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. A novel method is presented for solving this time-dependent frequency equation by using a differential form of the frequency equation. The assumed modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. It is shown by using Lyapunov's first method that the dynamic response of the flexural modal variables becomes unstable during retraction of the flexible beam, while the dynamic response during extension of the beam is stable. Numerical simulation results are presented for the transverse vibration induced by uniform axial motion for a typical flexible beam.

  14. Event-triggered resilient filtering with stochastic uncertainties and successive packet dropouts via variance-constrained approach

    NASA Astrophysics Data System (ADS)

    Jia, Chaoqing; Hu, Jun; Chen, Dongyan; Liu, Yurong; Alsaadi, Fuad E.

    2018-07-01

In this paper, we discuss the event-triggered resilient filtering problem for a class of time-varying systems subject to stochastic uncertainties and successive packet dropouts. The event-triggered mechanism is employed with the hope of reducing the communication burden and saving network resources. The stochastic uncertainties are considered in order to describe the modelling errors, and the phenomenon of successive packet dropouts is characterized by a random variable obeying the Bernoulli distribution. The aim of the paper is to provide a resilient event-based filtering approach for the addressed time-varying systems such that, for all stochastic uncertainties, successive packet dropouts and filter gain perturbations, an optimized upper bound of the filtering error covariance is obtained by designing the filter gain. Finally, simulations are provided to demonstrate the effectiveness of the proposed robust optimal filtering strategy.
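A Bernoulli packet-dropout model of the kind described here is commonly written as y_k = γ_k (C x_k + v_k) with γ_k ~ Bernoulli(p): the measurement is simply lost when γ_k = 0. A schematic sketch of that measurement channel only (the gain c, noise level, and arrival probability are assumptions for illustration, and the filter itself is not reproduced):

```python
import random

rng = random.Random(3)

def measure(x, c=1.0, noise_std=0.1, p_arrival=0.9):
    """One sensor reading over a lossy channel: the arrival indicator
    gamma is a Bernoulli(p_arrival) variable, and the reading is zeroed
    out (lost) whenever gamma = 0."""
    gamma = 1 if rng.random() < p_arrival else 0
    v = rng.gauss(0.0, noise_std)
    return gamma * (c * x + v), gamma

readings = [measure(1.0) for _ in range(2000)]
arrival_rate = sum(g for _, g in readings) / len(readings)  # near p_arrival
```

Successive dropouts correspond to runs of γ_k = 0, whose lengths under this model are geometrically distributed.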

  15. Bernoulli's Principle

    ERIC Educational Resources Information Center

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  16. Exploring the Sums of Powers of Consecutive q-Integers

    ERIC Educational Resources Information Center

    Kim, T.; Ryoo, C. S.; Jang, L. C.; Rim, S. H.

    2005-01-01

    The Bernoulli numbers are among the most interesting and important number sequences in mathematics. They first appeared in the posthumous work "Ars Conjectandi" (1713) by Jacob Bernoulli (1654-1705) in connection with sums of powers of consecutive integers (Bernoulli, 1713; or Smith, 1959). Bernoulli numbers are particularly important in number…

  17. Incorporating User Input in Template-Based Segmentation

    PubMed Central

    Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno

    2015-01-01

We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive, by likelihood maximization, a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of squared differences between the binary template and the user labels, while preventing the template from shrinking and penalizing the inclusion of background elements in the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532

  18. Interpretation of Bernoulli's Equation.

    ERIC Educational Resources Information Center

    Bauman, Robert P.; Schwaneberg, Rolf

    1994-01-01

    Discusses Bernoulli's equation with regards to: horizontal flow of incompressible fluids, change of height of incompressible fluids, gases, liquids and gases, and viscous fluids. Provides an interpretation, properties, terminology, and applications of Bernoulli's equation. (MVL)

19. A Survival Model for Shortleaf Pine Trees Growing in Uneven-Aged Stands

    Treesearch

    Thomas B. Lynch; Lawrence R. Gering; Michael M. Huebschmann; Paul A. Murphy

    1999-01-01

    A survival model for shortleaf pine (Pinus echinata Mill.) trees growing in uneven-aged stands was developed using data from permanently established plots maintained by an industrial forestry company in western Arkansas. Parameters were fitted to a logistic regression model with a Bernoulli dependent variable in which "0" represented...

  20. Dynamical Localization for Discrete and Continuous Random Schrödinger Operators

    NASA Astrophysics Data System (ADS)

    Germinet, F.; De Bièvre, S.

We show, for a large class of random Schrödinger operators H₀ on ℓ²(ℤ^ν) and on L²(ℝ^ν), that dynamical localization holds, i.e. that, with probability one, for a suitable energy interval I and for q a positive real, the q-th moment of the position operator remains uniformly bounded in time. Here ψ is a function of sufficiently rapid decrease, and P_I(H₀) is the spectral projector of H₀ corresponding to the interval I. The result is obtained through control of the decay of the eigenfunctions of H₀ and covers, in the discrete case, the Anderson tight-binding model with Bernoulli potential (dimension ν = 1) or singular potential (ν > 1), and, in the continuous case, Anderson as well as random Landau Hamiltonians.

  1. Numerical solutions for Helmholtz equations using Bernoulli polynomials

    NASA Astrophysics Data System (ADS)

    Bicer, Kubra Erdem; Yalcinbas, Salih

    2017-07-01

This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.

  2. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
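Bernoulli sampling, in contrast to simple random sampling of a fixed size, admits each item into the sample independently with probability p, so the sample size itself is binomially distributed. A minimal sketch of that sampling primitive only (the finite-key security analysis is not reproduced, and the seed and sizes are illustrative assumptions):

```python
import random

def bernoulli_sample(data, p, seed=42):
    """Each element enters the sample independently with probability p;
    the resulting sample size is Binomial(len(data), p) rather than fixed,
    which is what ties the estimation to binomial (not hypergeometric) tails."""
    rng = random.Random(seed)
    return [x for x in data if rng.random() < p]

population = list(range(10000))
sample = bernoulli_sample(population, 0.1)   # expected size ~ 1000
```

The practical consequence exploited in the paper is that binomial tail bounds are simpler to combine across parameters than hypergeometric ones.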

  3. Bernoulli's Principle: Science as a Human Endeavor

    ERIC Educational Resources Information Center

    McCarthy, Deborah

    2008-01-01

What do the ideas of Daniel Bernoulli--an 18th-century Swiss mathematician, physicist, natural scientist, and professor--and your students' next landing of the space shuttle via computer simulation have in common? Because of his contribution, referred to in physical science as Bernoulli's principle, modern flight is possible. The mini learning-cycle…

  4. Risk-adjusted monitoring of survival times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.

    2009-02-26

We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.

  5. Stability analysis for discrete-time stochastic memristive neural networks with both leakage and probabilistic delays.

    PubMed

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E

    2018-06-01

This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays and probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Self-affirmation model for football goal distributions

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2007-06-01

Analyzing football score data with statistical techniques, we investigate how the highly co-operative nature of the game is reflected in averaged properties such as the distributions of goals scored by the home and away teams. It turns out that the tails of the distributions in particular are not well described by independent Bernoulli trials, but are rather well modeled by negative binomial or generalized extreme value distributions. To understand this behavior from first principles, we suggest modifying the Bernoulli random process to include a simple component of self-affirmation, which seems to describe the data surprisingly well and allows us to interpret the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments and found the proposed models to be applicable rather universally. In particular, here we compare men's and women's leagues and the separate German leagues during the cold war times and find some remarkable differences.
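A toy version of such a self-affirmation component can be sketched by letting the per-minute scoring probability grow with the number of goals already scored, which fattens the tail of the goal distribution relative to independent Bernoulli trials. The linear feedback form and all parameter values below are assumptions for illustration, not the paper's fitted model:

```python
import random

def goals_with_self_affirmation(n_attempts, p0, kappa, rng):
    """Bernoulli process whose success probability grows with each goal
    already scored: p = min(1, p0 + kappa * goals). kappa = 0 recovers
    independent Bernoulli trials."""
    goals = 0
    for _ in range(n_attempts):
        if rng.random() < min(1.0, p0 + kappa * goals):
            goals += 1
    return goals

rng = random.Random(0)
plain = [goals_with_self_affirmation(90, 0.02, 0.00, rng) for _ in range(5000)]
boosted = [goals_with_self_affirmation(90, 0.02, 0.02, rng) for _ in range(5000)]
```

With kappa > 0, high-scoring games become markedly more likely than the binomial baseline predicts, mimicking the heavy tails the paper attributes to self-affirmation.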

  7. Dissipative advective accretion disc solutions with variable adiabatic index around black holes

    NASA Astrophysics Data System (ADS)

    Kumar, Rajiv; Chattopadhyay, Indranil

    2014-10-01

We investigated accretion onto black holes in the presence of viscosity and cooling, by employing an equation of state with a variable adiabatic index and a multispecies fluid. We obtained the expression of the generalized Bernoulli parameter, which is a constant of motion for an accretion flow in the presence of viscosity and cooling. We obtained all possible transonic solutions for a variety of boundary conditions, viscosity parameters and accretion rates. We identified the solutions with their positions in the parameter space of the generalized Bernoulli parameter and the angular momentum on the horizon. We showed that a shocked solution is more luminous than a shock-free one. For particular energies and viscosity parameters, we obtained accretion disc luminosities in the range of 10⁻⁴ to 1.2 times the Eddington luminosity, and the radiative efficiency seemed to increase with the mass accretion rate too. We found steady-state shock solutions even for high viscosity parameters, high accretion rates and a wide range of compositions of the flow, ranging from purely electron-proton to lepton-dominated accretion flow. However, similar to earlier studies of inviscid flow, an accretion shock was not obtained for electron-positron pair plasma.

  8. A Short History of Probability Theory and Its Applications

    ERIC Educational Resources Information Center

    Debnath, Lokenath; Basu, Kanadpriya

    2015-01-01

    This paper deals with a brief history of probability theory and its applications to Jacob Bernoulli's famous law of large numbers and theory of errors in observations or measurements. Included are the major contributions of Jacob Bernoulli and Laplace. It is written to pay the tricentennial tribute to Jacob Bernoulli, since the year 2013…

  9. Linear stochastic Schrödinger equations in terms of quantum Bernoulli noises

    NASA Astrophysics Data System (ADS)

    Chen, Jinshu; Wang, Caishi

    2017-05-01

Quantum Bernoulli noises (QBN) are the family of annihilation and creation operators acting on Bernoulli functionals, which satisfy a canonical anti-commutation relation. In this paper, we study linear stochastic Schrödinger equations (LSSEs) associated with QBN in the space of square integrable complex-valued Bernoulli functionals. We first rigorously prove a formula concerning the number operator N on Bernoulli functionals. Then, by using this formula as well as Mora and Rebolledo's results on a general LSSE [C. M. Mora and R. Rebolledo, Infinite. Dimens. Anal. Quantum Probab. Relat. Top. 10, 237-259 (2007)], we obtain an easily checked condition for an LSSE associated with QBN to have a unique N_r-strong solution of mean-square norm conservation for given r ≥ 0. Finally, as an application of this condition, we examine a special class of LSSEs associated with QBN and prove some further results.

  10. Who Solved the Bernoulli Differential Equation and How Did They Do It?

    ERIC Educational Resources Information Center

    Parker, Adam E.

    2013-01-01

    The Bernoulli brothers, Jacob and Johann, and Leibniz: Any of these might have been first to solve what is called the Bernoulli differential equation. We explore their ideas and the chronology of their work, finding out, among other things, that variation of parameters was used in 1697, 78 years before 1775, when Lagrange introduced it in general.

  11. Bernoulli in the operating room: from the perspective of a cardiac surgeon.

    PubMed

    Matt, Peter

    2014-12-01

    The Bernoullis were one of the most distinguished families in the history of science. It was Daniel Bernoulli who applied mathematical physics to medicine to further his understanding of physiological mechanisms that have an impact even in today's high-end medicine. His masterwork was the analysis of fluid dynamics, which resulted in Bernoulli's law. Most important for cardiac surgery, it describes how a centrifugal pump works within an extracorporeal circulation, lays the basis for measuring a gradient over a stenotic heart valve, and explains how to measure the transit time flow within a bypass graft. Georg Thieme Verlag KG Stuttgart · New York.
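The valve-gradient application mentioned here is usually carried out with the "simplified Bernoulli equation" of echocardiography, ΔP ≈ 4v², which follows from the convective term ½ρ(v₂² − v₁²) after taking blood density ρ ≈ 1060 kg/m³, converting pascals to mmHg (1 mmHg ≈ 133.32 Pa), and neglecting the proximal velocity. A sketch under those standard clinical assumptions (the abstract itself does not give the formula):

```python
def simplified_gradient_mmhg(v_jet):
    """Peak pressure gradient across a stenotic valve in mmHg, from the
    jet velocity v_jet in m/s: the clinical rule dP ~ 4 * v^2."""
    return 4.0 * v_jet ** 2

def bernoulli_gradient_mmhg(v_jet, v_prox=0.0, rho=1060.0):
    """Convective Bernoulli term 0.5 * rho * (v2^2 - v1^2) in pascals,
    converted to mmHg, showing where the factor ~4 comes from."""
    return 0.5 * rho * (v_jet ** 2 - v_prox ** 2) / 133.322

# A 4 m/s jet corresponds to a gradient of about 64 mmHg under either form.
dp_rule = simplified_gradient_mmhg(4.0)
dp_full = bernoulli_gradient_mmhg(4.0)
```

The coefficient ½ · 1060 / 133.322 ≈ 3.98 is what rounds to the familiar 4.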

  12. Evaluation of aerodynamic characteristics of a coupled fluid-structure system using generalized Bernoulli's principle: An application to vocal folds vibration.

    PubMed

    Zhang, Lucy T; Yang, Jubiao

    2016-12-01

In this work we explore the aerodynamic flow characteristics of a coupled fluid-structure interaction system using a generalized Bernoulli equation derived directly from the Cauchy momentum equations. Unlike the conventional Bernoulli equation, where incompressible, inviscid, and steady flow conditions are assumed, this generalized Bernoulli equation includes the contributions from compressibility, viscosity, and unsteadiness, which could be essential in defining aerodynamic characteristics. The derived Bernoulli's principle is applied to a fully-coupled fluid-structure interaction simulation of vocal fold vibration. The coupled system is simulated using the immersed finite element method, where the compressible Navier-Stokes equations describe the air and an elastic pliable structure describes the vocal fold. The vibration of the vocal fold works to open and close the glottal flow. The aerodynamic flow characteristics are evaluated using the derived Bernoulli's principle over a vibration cycle in a carefully partitioned control volume based on the moving structure. The results agree very well with experimental observations, which validates the strategy and its use in other types of flow characteristics that involve coupled fluid-structure interactions.

  13. Bifurcation of rupture path by linear and cubic damping force

    NASA Astrophysics Data System (ADS)

    Dennis L. C., C.; Chew X., Y.; Lee Y., C.

    2014-06-01

Bifurcation of the rupture path is studied for the effect of linear and cubic damping. The momentum equation with a Rayleigh factor was transformed into ordinary differential form. A Bernoulli differential equation was obtained and solved by separation of variables. The analytical (exact) solutions showed that the bifurcation was visible in the imaginary part when the wave was non-dispersive. For a dispersive wave, the bifurcation of the rupture path was invisible.
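The solution route for a Bernoulli differential equation can be shown in generic form (the specific coefficients of the rupture-path equation are not given in the abstract, so constant coefficients a, b are assumed here for illustration): the standard substitution reduces it to a linear equation whose constant-coefficient case separates.

```latex
y' + P(x)\,y = Q(x)\,y^{n}, \qquad n \neq 0, 1
\quad\xrightarrow{\;v = y^{1-n}\;}\quad
v' + (1-n)\,P(x)\,v = (1-n)\,Q(x).
\]
For constant coefficients $P(x) = a$, $Q(x) = b$ the variables separate,
\[
\frac{dv}{(1-n)\,(b - a v)} = dx
\quad\Longrightarrow\quad
v(x) = \frac{b}{a} + C\,e^{-(1-n)\,a x},
\qquad
y(x) = v(x)^{1/(1-n)},
```

and substituting v back confirms v′ + (1−n)a v = (1−n)b term by term.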

14. Event-Based Variance-Constrained H∞ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the H∞ performance requirements and the variance constraints are guaranteed over a given finite horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By resorting to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters, whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.

  15. Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.

    PubMed

    Imamverdiyev, Yadigar; Abdullayeva, Fargana

    2018-06-01

In this article, the application of a deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase the DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of the RBM that allows application to continuous data is used: in this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method against Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, and deep belief network type deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy is obtained from the proposed multilayer deep Gaussian-Bernoulli type RBM.

  16. Calculation of upper confidence bounds on proportion of area containing not-sampled vegetation types: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2011-01-01

    This paper explores the information forest inventory data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977)....
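When the problem is reduced to a Bernoulli variable and a vegetation type never appears in any of n sampled plots, the exact one-sided upper confidence bound on its areal proportion solves (1 − p)ⁿ = α. A sketch of that closed form (whether this is identical to the Cochran (1977) formula the paper uses is an assumption):

```python
def upper_bound_zero_successes(n, alpha=0.05):
    """One-sided 100*(1-alpha)% upper confidence bound on a Bernoulli
    proportion when 0 successes are observed in n independent trials:
    the largest p with P(0 successes) = (1-p)^n >= alpha."""
    if n < 1:
        raise ValueError("need at least one trial")
    return 1.0 - alpha ** (1.0 / n)

# e.g. 59 plots with no occurrences bound the proportion at about 5%,
# close to the familiar 'rule of three' approximation 3/n.
bound_59 = upper_bound_zero_successes(59)
```

The bound shrinks as the number of sampled plots grows, which is the quantitative content of "more plots say more about not-sampled types."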

  17. A Depth-Averaged 2-D Simulation for Coastal Barrier Breaching Processes

    DTIC Science & Technology

    2011-05-01

including bed change and variable flow density in the flow continuity and momentum equations. The model adopts the HLL approximate Riemann solver to handle the mixed-regime flows near... Keulegan equation or the Bernoulli equation, and the breach morphological change is determined using simplified sediment transport models

  18. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765

  19. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76), selected through 7:3 randomization using Bernoulli trials. The validity of the developed regression model was then examined with the remaining 30% of the data (men: 33, women: 32). Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
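The 7:3 Bernoulli randomization used above is straightforward to reproduce. Below is a minimal sketch (the function name and seed are illustrative assumptions, not the authors' code) of splitting 220 records into development and validation sets via independent Bernoulli(0.7) trials:

```python
import random

def bernoulli_split(records, p=0.7, seed=42):
    """Split records into development/validation sets by one independent
    Bernoulli(p) trial per record, as in the 7:3 randomization above."""
    rng = random.Random(seed)
    dev, val = [], []
    for r in records:
        (dev if rng.random() < p else val).append(r)
    return dev, val

dev, val = bernoulli_split(list(range(220)), p=0.7)
print(len(dev), len(val))  # close to 154/66 in expectation
```

Note that a Bernoulli split only hits 70/30 in expectation; the exact counts vary from seed to seed, which is why the reported subgroup sizes (155 and 65) differ slightly from 154/66.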

  20. Multi stage unreliable retrial Queueing system with Bernoulli vacation

    NASA Astrophysics Data System (ADS)

    Radha, J.; Indhira, K.; Chandrasekaran, V. M.

    2017-11-01

    In this work, we consider Bernoulli vacations in group-arrival retrial queues with an unreliable server. Here, the server provides service in k stages. If an arriving group of units finds the server free, one unit from the group enters the first stage of service and the rest join the orbit. After completing the ith (i = 1,2,…,k) stage of service, the customer may proceed to the (i+1)th stage with probability θi, or leave the system with probability qi = 1 - θi for i = 1,2,…,k-1, with qi = 1 for i = k. After completing a service, the server may take a vacation (whether the orbit is empty or not) with probability v, or continue serving with probability 1-v. After finishing the vacation, the server searches for a customer in the orbit with probability θ or remains idle until a new arrival with probability 1-θ. We analyze the system using the method of supplementary variables.
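The stage-continuation mechanism described above (continue to stage i+1 with probability θi, depart otherwise) can be illustrated with a short Monte Carlo routine. This is only a sketch of that one mechanism, not the authors' supplementary-variable analysis; the function name and θ values are assumptions:

```python
import random

def stages_completed(theta, rng):
    """Number of service stages completed by one customer: after stage i
    (i = 1..k-1) it continues with probability theta[i-1], otherwise it
    leaves; after the last stage it always leaves."""
    for i, t in enumerate(theta):
        if rng.random() >= t:
            return i + 1
    return len(theta) + 1

rng = random.Random(11)
mean = sum(stages_completed([0.5, 0.5], rng) for _ in range(50000)) / 50000
print(mean)  # should be near the exact value 1.75
```

For θ = (0.5, 0.5), i.e. k = 3 stages, the stage count is 1, 2, or 3 with probabilities 0.5, 0.25, 0.25, so the simulated mean should sit near 1.75.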

  1. Human-Swarm Interactions Based on Managing Attractors

    DTIC Science & Technology

    2014-03-01

    means that agent j is visible to agent i at time t. Each aij(t) is determined at time t according to a Bernoulli random variable with parameter pij(t...angular momentum, mgroup, and group polarization, pgroup [9, 17]. The mgroup is a measure of the degree of rotation of the group about its centroid...0.1 seconds. Figure 2: The group momentum and polarization as the radius of orientation is increased and decreased.

  2. Theoretical study on a Miniature Joule-Thomson & Bernoulli Cryocooler

    NASA Astrophysics Data System (ADS)

    Xiong, L. Y.; Kaiser, G.; Binneberg, A.

    2004-11-01

    In this paper, a microchannel-based cryocooler consisting of a compressor, a recuperator and a cold heat exchanger has been developed to study the feasibility of cryogenic cooling by use of the Joule-Thomson effect and the Bernoulli effect. A set of governing equations, including Bernoulli equations and energy equations, is introduced and the performance of the cooler is calculated. The influences of some working conditions and structural parameters on the performance of the cooler are discussed in detail.

  3. Beltrami–Bernoulli equilibria in plasmas with degenerate electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berezhiani, V. I., E-mail: vazhab@yahoo.com; Shatashvili, N. L., E-mail: shatash@ictp.it; Mahajan, S. M., E-mail: mahajan@mail.utexas.edu

    2015-02-15

    A new class of Double Beltrami–Bernoulli equilibria, sustained by electron degeneracy pressure, is investigated. It is shown that due to electron degeneracy, a nontrivial Beltrami–Bernoulli equilibrium state is possible even for a zero temperature plasma. These states are, conceptually, studied to show the existence of new energy transformation pathways converting, for instance, the degeneracy energy into fluid kinetic energy. Such states may be of relevance to compact astrophysical objects like white dwarfs, neutron stars, etc.

  4. A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weerakkody, Sean; Ozel, Omur; Sinopoli, Bruno

    We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.

  5. A generalized form of the Bernoulli Trial collision scheme in DSMC: Derivation and evaluation

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan; Shoja-Sani, Ahmad; Ejraei, Hossein

    2018-02-01

    The impetus of this research is to present a generalized Bernoulli Trial collision scheme in the context of the direct simulation Monte Carlo (DSMC) method. Previously, several collision schemes were put forward that are mathematically based on the Kac stochastic model. These include the Bernoulli Trial (BT), Ballot Box (BB), Simplified Bernoulli Trial (SBT) and Intelligent Simplified Bernoulli Trial (ISBT) schemes. The number of considered pairs for a possible collision in the above-mentioned schemes varies between N(l)(N(l) - 1)/2 in BT, 1 in BB, and (N(l) - 1) in SBT or ISBT, where N(l) is the instantaneous number of particles in the lth cell. Here, we derive a generalized form of the Bernoulli Trial collision scheme (GBT) where the number of selected pairs is any desired value smaller than (N(l) - 1), i.e., Nsel < (N(l) - 1), keeping the collision frequency and the accuracy of the solution the same as in the original SBT and BT models. We derive two distinct formulas for the GBT scheme, both of which recover the BB and SBT limits when Nsel is set to 1 and N(l) - 1, respectively, and provide accurate solutions for a wide set of test cases. The present generalization further improves the computational efficiency of the BT-based collision models compared to the standard no time counter (NTC) and nearest neighbor (NN) collision models.

  6. Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.

    PubMed

    Campos, Daniel G

    2018-02-01

    This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing-the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective into the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique to make experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Quantum Markov semigroups constructed from quantum Bernoulli noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Caishi; Chen, Jinshu

    2016-02-15

    Quantum Bernoulli noises (QBNs) are the family of annihilation and creation operators acting on Bernoulli functionals, which can describe a two-level quantum system with infinitely many sites. In this paper, we consider the problem of constructing quantum Markov semigroups (QMSs) directly from QBNs. We first establish several new theorems concerning QBNs. In particular, we define the number operator acting on Bernoulli functionals by using the canonical orthonormal basis, prove its self-adjoint property, and describe precisely its connections with QBNs in a mathematically rigorous way. We then show the possibility of constructing a QMS directly from QBNs. This is done by combining the general results on QMSs with our new results on QBNs obtained here. Finally, we examine some properties of QMSs constructed from QBNs.

  8. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

    Treesearch

    Paul L. Patterson; Mark Finco

    2009-01-01

    This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
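The Bernoulli simplification mentioned above admits a closed-form bound: if a vegetation type is absent from all n systematic plots, the exact upper (1 - α) confidence bound on its occurrence probability is the largest p that still makes zero successes plausible, i.e. the solution of (1 - p)^n = α. A minimal sketch (the function name and the n = 300 example are illustrative; this is the standard exact zero-success bound, not Cochran's full development):

```python
def upper_bound_zero_successes(n, alpha=0.05):
    """Exact upper (1 - alpha) confidence bound on a Bernoulli success
    probability when 0 successes are observed in n trials: solve
    (1 - p)**n = alpha for p."""
    return 1.0 - alpha ** (1.0 / n)

# 300 plots, none containing the vegetation type:
print(round(upper_bound_zero_successes(300), 4))  # ~0.0099, near the 3/n rule of thumb
```

For α = 0.05 this bound is well approximated by the familiar "rule of three," 3/n.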

  9. Fragility Analysis of a Concrete Gravity Dam Embedded in Rock and Its System Response Curve Computed by the Analytical Program GDLAD_Foundation

    DTIC Science & Technology

    2012-06-01

    According to the Bernoulli equation for ideal flows, i.e. steady, frictionless, incompressible flows, the total head, H, at any point can be determined...centerline and using the Bernoulli equation for ideal flow with an assumption that the velocity is small, the total head equals the pressure head...

  10. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  11. Perception of randomness: On the time of streaks.

    PubMed

    Sun, Yanlong; Wang, Hongbin

    2010-12-01

    People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the most delayed pattern for its first occurrence. It is argued that when time is of essence, how often a pattern is to occur (mean time, or frequency) and when a pattern is to first occur (waiting time) are different questions and bear different psychological relevance. The waiting time statistics may provide a quantitative measure of the psychological distance when people are expecting a probabilistic event, and such a measure is consistent with both the representativeness and availability heuristics in people's perception of randomness. We discuss some of the recent empirical findings and suggest that people's judgment and generation of random sequences may be guided by their actual experiences of the waiting time statistics. Published by Elsevier Inc.
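The waiting-time asymmetry discussed above is easy to demonstrate by simulation: for a fair coin, the streak HHH has mean first-occurrence time 14 trials, while HHT has mean 8, even though both patterns occur with the same long-run frequency. A small illustrative sketch (not the paper's code):

```python
import random

def mean_wait(pattern, trials=20000, seed=1):
    """Estimate the mean number of fair Bernoulli trials until `pattern`
    (a string over 'H'/'T') first appears."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        window, n = "", 0
        while window != pattern:
            # slide a window of the last len(pattern) outcomes
            window = (window + rng.choice("HT"))[-len(pattern):]
            n += 1
        total += n
    return total / trials

# Exact means for a fair coin: 14 for the streak HHH, 8 for HHT.
print(mean_wait("HHH"), mean_wait("HHT"))
```

The streak is "most delayed" because every failed attempt at HHH destroys all partial progress, whereas a failed HHT attempt can reuse its prefix.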

  12. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.

    PubMed

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J

    2017-03-03

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  13. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods

    PubMed Central

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J.

    2017-01-01

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter. PMID:28273796

  14. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    PubMed

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To obtain a fault-detection residual that is sensitive only to faults while remaining robust to disturbances, a coordinate transformation approach is exploited. This approach can transform the considered system into two subsystems, and the unknown disturbances are removed from one of the subsystems. The gain of the fault-detection filter is derived by minimizing an upper bound of the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented for improving the working life of the wireless sensor node when measurement information is transmitted aperiodically. Finally, a scaled version of an industrial system consisting of a local PC, remote estimator and wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault-detection is guaranteed when the event condition is triggered.

  15. Proceedings of the Annual Symposium on Frequency Control (33rd) Held in Atlantic City, New Jersey on 30 May-1 June 1979

    DTIC Science & Technology

    1979-01-01

    from the Bernoullis was Daniel Bernoulli's addition of the acceleration term to the beam equation..."n'est pas la meme dans tous les sens", Exercices de Math...improved during 1811-1816 by Germain and Lagrange and, finally, the correct derivation was produced...1852 G. Lame, "Leqons sur la...de la resistance des solides et des solides d'egale...membranes and plates (low frequencies) by Euler, Jacques Bernoulli, Germain, Lagrange

  16. Effect of elastic boundaries in hydrostatic problems

    NASA Astrophysics Data System (ADS)

    Volobuev, A. N.; Tolstonogov, A. P.

    2010-03-01

    The possibility and conditions of use of the Bernoulli equation for description of an elastic pipeline were considered. It is shown that this equation is identical in form to the Bernoulli equation used for description of a rigid pipeline. It has been established that the static pressure entering into the Bernoulli equation is not identical to the pressure entering into the impulse-momentum equation. The hydrostatic problem on the pressure distribution over the height of a beaker with a rigid bottom and elastic walls, filled with a liquid, was solved.

  17. An Illustration of the Bernoulli Effect With a Rubber Tube

    ERIC Educational Resources Information Center

    Hanson, M. J.

    1973-01-01

    Describes a simple method of demonstrating the Bernoulli effect, by spinning a length of rubber tubing around one's head. A manometer attached to the stationary end of the tube indicates a reduction in pressure. (JR)

  18. THE BERNOULLI EQUATION AND COMPRESSIBLE FLOW THEORIES

    EPA Science Inventory

    The incompressible Bernoulli equation is an analytical relationship between pressure, kinetic energy, and potential energy. As perhaps the simplest and most useful statement for describing laminar flow, it buttresses numerous incompressible flow models that have been developed ...
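The incompressible Bernoulli relation summarized above states that p + ½ρv² + ρgz is constant along a streamline. A minimal worked sketch (the fluid, constants, and values are illustrative choices, not from the EPA record):

```python
RHO = 1000.0  # water density, kg/m^3 (illustrative)
G = 9.81      # gravitational acceleration, m/s^2

def bernoulli_total(p, v, z):
    """Total mechanical energy per unit volume along a streamline:
    static + dynamic + hydrostatic pressure, in Pa."""
    return p + 0.5 * RHO * v**2 + RHO * G * z

# A constriction that doubles the velocity at the same height must
# lower the static pressure so that the total stays constant:
total = bernoulli_total(p=101325.0, v=1.0, z=0.0)
p2 = total - 0.5 * RHO * 2.0**2
print(p2)  # 101325 + 500 - 2000 = 99825.0 Pa
```

This is the laminar, incompressible idealization; compressible or viscous flows need the corrections the record alludes to.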

  19. Geometrical study of phyllotactic patterns by Bernoulli spiral lattices.

    PubMed

    Sushida, Takamichi; Yamagishi, Yoshikazu

    2017-06-01

    Geometrical studies of phyllotactic patterns deal with the centric or cylindrical models produced by ideal lattices. van Iterson (Mathematische und mikroskopisch-anatomische Studien über Blattstellungen nebst Betrachtungen über den Schalenbau der Miliolinen, Verlag von Gustav Fischer, Jena, 1907) suggested a centric model representing ideal phyllotactic patterns as disk packings of Bernoulli spiral lattices and presented a phase diagram, now called Van Iterson's diagram, explaining the bifurcation processes of their combinatorial structures. Geometrical properties of disk packings were shown by Rothen & Koch (J. Phys. France, 50(13), 1603-1621, 1989). In contrast, as another centric model, we organized a mathematical framework of Voronoi tilings of Bernoulli spiral lattices and showed mathematically that the phase diagram of a Voronoi tiling is graph-theoretically dual to Van Iterson's diagram. This paper gives a review of two centric models for disk packings and Voronoi tilings of Bernoulli spiral lattices. © 2017 Japanese Society of Developmental Biologists.

  20. Hydraulic jump and Bernoulli equation in nonlinear shallow water model

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Yih

    2018-06-01

    A shallow water model was applied to study the hydraulic jump and the Bernoulli equation across the jump. On flat terrain, when a supercritical flow plunges into a subcritical flow, discontinuity develops in velocity and in the Bernoulli function across the jump. The shock generated by the obstacle may propagate downstream and upstream. The latter, reflected from the inflow boundary, moves downstream and leaves the domain. Before the reflected wave reaches the obstacle, the short-term integration (i.e., quasi-steady) simulations agree with Houghton and Kasahara's results, which may have unphysical complex solutions. The quasi-steady flow is quickly disturbed by the reflected wave; finally, the flow reaches steady state and becomes critical without complex solutions. The results also indicate that the Bernoulli function is discontinuous but the potential of mass flux remains constant across the jump. The latter can be used to predict velocity/height in a steady flow.

  1. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  2. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
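The Bernoulli formation/dissolution case above has simple closed-form behavior for a single dyad: with per-step formation probability f (when the tie is absent) and dissolution probability q (when present), tie durations are geometric with mean 1/q and the stationary tie prevalence is f/(f+q). A simulation sketch (illustrative, not the authors' code; parameter values are assumptions):

```python
import random

def simulate_dyad(f, q, steps=200000, seed=7):
    """Simulate one dyad with Bernoulli tie formation (prob f per step
    when absent) and Bernoulli dissolution (prob q per step when present).
    Returns (fraction of steps the tie exists, mean completed duration)."""
    rng = random.Random(seed)
    tie, on_steps, current, durations = False, 0, 0, []
    for _ in range(steps):
        if tie:
            on_steps += 1
            current += 1
            if rng.random() < q:          # tie dissolves
                durations.append(current)
                tie, current = False, 0
        elif rng.random() < f:            # tie forms
            tie = True
    return on_steps / steps, sum(durations) / len(durations)

prev, dur = simulate_dyad(f=0.01, q=0.05)
# Theory: prevalence f/(f+q) = 1/6, mean duration 1/q = 20 steps.
print(prev, dur)
```

This is the intuition behind the paper's approximation: observed prevalence plus mean duration pin down f and q when change per step is small.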

  3. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
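The over-representation test described above can be approximated by Monte Carlo: place the successes uniformly at random and compare an observed fraction of short distances against this null fraction. The sketch below is illustrative only (the R package itself provides the exact distributions); function names and parameters are assumptions:

```python
import random

def gaps(seq):
    """Distances between consecutive successes (1s) in a 0/1 sequence."""
    pos = [i for i, x in enumerate(seq) if x]
    return [b - a for a, b in zip(pos, pos[1:])]

def null_short_gap_fraction(n, k, threshold, sims=5000, seed=3):
    """Monte Carlo null: scatter k successes uniformly among n trials
    and return the overall fraction of gaps below `threshold`."""
    rng = random.Random(seed)
    short = total = 0
    for _ in range(sims):
        seq = [0] * n
        for i in rng.sample(range(n), k):
            seq[i] = 1
        g = gaps(seq)
        short += sum(1 for d in g if d < threshold)
        total += len(g)
    return short / total

# An observed fraction well above this null value suggests grouping:
print(null_short_gap_fraction(100, 10, threshold=5))
```

As the abstract notes, such approximations degrade for small samples, which is exactly where the package's exact distribution is needed.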

  4. Vertical dynamics of a single-span beam subjected to moving mass-suspended payload system with variable speeds

    NASA Astrophysics Data System (ADS)

    He, Wei

    2018-03-01

    This paper presents the vertical dynamics of a simply supported Euler-Bernoulli beam subjected to a moving mass-suspended payload system of variable velocities. A planar theoretical model of the moving mass-suspended payload system of variable speeds is developed based on several assumptions: the rope is massless and rigid, and its length remains constant; the stiffness of the gantry beam is much greater than that of the supporting beam, and the gantry beam can be treated as a mass particle traveling along the supporting beam; the supporting beam is assumed to be a simply supported Bernoulli-Euler beam. The model can be degenerated to consider two classical cases: the moving mass case and the moving payload case. The proposed model is verified using both numerical and experimental methods. To further investigate the effect of possible influential factors, numerical examples are conducted covering a range of parameters, such as variable speeds (acceleration or deceleration), mass ratios of the payload to the total moving load, and the pendulum lengths. The effect of beam flexibility on swing response of the payload is also investigated. It is shown that the effect of a variable speed is significant for the deflections of the beam. The accelerating movement tends to induce larger beam deflections, while the decelerating movement smaller ones. For accelerating or decelerating movements, the moving mass model may underestimate the deflections of the beam compared with the presented model, while for uniform motion, both the moving mass model and the moving mass-payload model lead to the same beam responses. Furthermore, it is observed that the swing response of the payload is not sensitive to the stiffness of the beam for operational cases of a moving crane; thus a simple moving payload model can be employed in the swing control of the payload.

  5. Bernoulli's Principle Applied to Brain Fluids: Intracranial Pressure Does Not Drive Cerebral Perfusion or CSF Flow.

    PubMed

    Schmidt, Eric; Ros, Maxime; Moyse, Emmanuel; Lorthois, Sylvie; Swider, Pascal

    2016-01-01

    In line with the first law of thermodynamics, Bernoulli's principle states that the total energy in a fluid is the same at all points. We applied Bernoulli's principle to understand the relationship between intracranial pressure (ICP) and intracranial fluids. We analyzed simple fluid physics along a tube to describe the interplay between pressure and velocity. Bernoulli's equation demonstrates that a fluid does not flow along a gradient of pressure or velocity; a fluid flows along a gradient of energy from a high-energy region to a low-energy region. A fluid can even flow against a pressure gradient or a velocity gradient. Pressure and velocity represent part of the total energy. Cerebral blood perfusion is not driven by pressure but by energy: the blood flows from high-energy to lower-energy regions. Hydrocephalus is related to increased cerebrospinal fluid (CSF) resistance (i.e., energy transfer) at various points. Identification of the energy transfer within the CSF circuit is important in understanding and treating CSF-related disorders. Bernoulli's principle is not an abstract concept far from clinical practice. We should be aware that pressure is easy to measure, but it does not induce resumption of fluid flow. Even at the bedside, energy is the key to understanding ICP and fluid dynamics.

  6. Elementary Hemodynamic Principles Based on Modified Bernoulli's Equation.

    ERIC Educational Resources Information Center

    Badeer, Henry S.

    1985-01-01

    Develops and expands basic concepts of Bernoulli's equation as it applies to vascular hemodynamics. Simple models are used to illustrate gravitational potential energy, steady nonturbulent flow, pump-driven streamline flow, and other areas. Relationships to the circulatory system are also discussed. (DH)

  7. Bernoulli? Perhaps, but What about Viscosity?

    ERIC Educational Resources Information Center

    Eastwell, Peter

    2007-01-01

    Bernoulli's principle is being misunderstood and consequently misused. This paper clarifies the issues involved, hypothesises as to how this unfortunate situation has arisen, provides sound explanations for many everyday phenomena involving moving air, and makes associated recommendations for teaching the effects of moving fluids.

  8. A Revelation: Quantum-Statistics and Classical-Statistics are Analytic-Geometry Conic-Sections and Numbers/Functions: Euler, Riemann, Bernoulli Generating-Functions: Conics to Numbers/Functions Deep Subtle Connections

    NASA Astrophysics Data System (ADS)

    Descartes, R.; Rota, G.-C.; Euler, L.; Bernoulli, J. D.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Quantum-statistics Dichotomy: Fermi-Dirac(FDQS) Versus Bose-Einstein(BEQS), respectively with contact-repulsion/non-condensation(FDCR) versus attraction/condensation(BEC), are manifestly-demonstrated by Taylor-expansion ONLY of their denominator exponential, identified BOTH as Descartes analytic-geometry conic-sections, FDQS as Ellipse (homotopy to rectangle FDQS distribution-function), VIA Maxwell-Boltzmann classical-statistics(MBCS) to Parabola MORPHISM, VS. BEQS to Hyperbola, Archimedes' HYPERBOLICITY INEVITABILITY, and as well generating-functions [Abramowitz-Stegun, Handbook Math.-Functions--p. 804!!!], respectively of Euler-numbers/functions (via Riemann zeta-function, domination of quantum-statistics: [Pathria, Statistical-Mechanics; Huang, Statistical-Mechanics]) VS. Bernoulli-numbers/functions. Much can be learned about statistical-physics from Euler-numbers/functions via Riemann zeta-function(s) VS. Bernoulli-numbers/functions [Conway-Guy, Book of Numbers] and about Euler-numbers/functions, via Riemann zeta-function(s) MORPHISM, VS. Bernoulli-numbers/functions, vice versa!!! Ex.: Riemann-hypothesis PHYSICS proof PARTLY as BEQS BEC/BEA!!!

  9. Improved implementation of the risk-adjusted Bernoulli CUSUM chart to monitor surgical outcome quality.

    PubMed

    Keefe, Matthew J; Loda, Justin B; Elhabashy, Ahmad E; Woodall, William H

    2017-06-01

    The traditional implementation of the risk-adjusted Bernoulli cumulative sum (CUSUM) chart for monitoring surgical outcome quality requires waiting a pre-specified period of time after surgery before incorporating patient outcome information. We propose a simple but powerful implementation of the risk-adjusted Bernoulli CUSUM chart that incorporates outcome information as soon as it is available, rather than waiting a pre-specified period of time after surgery. A simulation study is presented that compares the performance of the traditional implementation of the risk-adjusted Bernoulli CUSUM chart to our improved implementation. We show that incorporating patient outcome information as soon as it is available leads to quicker detection of process deterioration. Deterioration of surgical performance could be detected much sooner using our proposed implementation, which could lead to the earlier identification of problems. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
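A risk-adjusted Bernoulli CUSUM of the kind discussed above accumulates log-likelihood-ratio scores that depend on each patient's pre-operative risk. A minimal sketch in the style of the Steiner-type formulation (the odds ratio R = 2 and threshold h = 4.5 are illustrative assumptions, not values from the paper):

```python
import math

def steiner_weight(y, p, R=2.0):
    """Log-likelihood-ratio score for one surgery: outcome y (1 = adverse)
    with risk-adjusted pre-operative probability p, testing an odds ratio
    of R against an in-control odds ratio of 1."""
    denom = 1.0 - p + R * p
    return math.log(R / denom) if y else math.log(1.0 / denom)

def bernoulli_cusum(outcomes, risks, R=2.0, h=4.5):
    """Upper risk-adjusted Bernoulli CUSUM; returns the index of the
    first signal (statistic exceeds h), or None if it never signals."""
    s = 0.0
    for t, (y, p) in enumerate(zip(outcomes, risks)):
        s = max(0.0, s + steiner_weight(y, p, R))
        if s > h:
            return t
    return None

# A run of adverse outcomes on low-risk patients signals quickly:
print(bernoulli_cusum([1] * 20, [0.1] * 20))
```

The paper's improvement concerns *when* each (y, p) pair enters this update: feeding outcomes in as soon as they are known, rather than after a fixed post-surgery window, shortens the time to signal.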

  10. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, following the arrival of a pulse on the ability of a stochastic Bernoulli processor to record subsequent pulse events. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
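The dead-time effect can be sketched with a short simulation (function name and parameters are illustrative, not from the letter): a counter that ignores a fixed number of time slots after each recorded pulse undercounts a Bernoulli(p) pulse stream:

```python
import random

def dead_time_count(n, p, dead, seed=0):
    """Simulate a Bernoulli(p) pulse stream of length n observed by a
    counter that, after each recorded pulse, ignores the next `dead`
    time slots (nonparalyzable dead time). Returns (arrived, recorded)."""
    rng = random.Random(seed)
    arrived = recorded = 0
    refractory = 0
    for _ in range(n):
        pulse = rng.random() < p
        arrived += pulse
        if refractory > 0:
            refractory -= 1      # counter is dead: pulse (if any) is missed
        elif pulse:
            recorded += 1
            refractory = dead
    return arrived, recorded
```

For this simple model the recorded rate approaches p / (1 + p * dead), so the reduction is negligible only when p is small compared to 1/dead, consistent with the abstract's conclusion.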

  11. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between the normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
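The ensemble-averaging property for the linear case can be checked numerically: for a linear unit with Bernoulli gates on its inputs, the average over stochastic forward passes approaches the deterministic unit with weights scaled by the keep probability. A small illustrative sketch (names are ours, not the paper's):

```python
import random

def dropout_forward(x, w, p_keep, rng):
    """One stochastic forward pass of a linear unit with dropout on the
    inputs: each input is kept with probability p_keep (Bernoulli gate)."""
    return sum(wi * xi * (rng.random() < p_keep) for wi, xi in zip(w, x))

def ensemble_mean(x, w, p_keep, n=20000, seed=0):
    """Monte-Carlo estimate of the ensemble average over dropout masks."""
    rng = random.Random(seed)
    return sum(dropout_forward(x, w, p_keep, rng) for _ in range(n)) / n
```

For x = [1, 2, 3], w = [0.5, -1, 2] and p_keep = 0.8, the ensemble mean is close to 0.8 * (w . x) = 3.6, which is the familiar weight-scaling rule used at test time.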

  12. Curve Balls, Airplane Wings, and Prairie Dog Holes.

    ERIC Educational Resources Information Center

    Barnes, George B.

    1984-01-01

    Describes activities involving Bernoulli's principle that allow students to experience the difference between knowledge and scientific understanding. Explanations for each of the activities (using such materials as wooden spools, straws, soda bottles, and table tennis balls) and explanations of the phenomena in terms of Bernoulli's principle are provided. (BC)

  13. Evaluating the Risks: A Bernoulli Process Model of HIV Infection and Risk Reduction.

    ERIC Educational Resources Information Center

    Pinkerton, Steven D.; Abramson, Paul R.

    1993-01-01

    A Bernoulli process model of human immunodeficiency virus (HIV) infection is used to evaluate infection risks associated with various sexual behaviors (condom use, abstinence, or monogamy). Results suggest that infection risk is best mitigated through measures that decrease infectivity, such as condom use. (SLD)
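In a Bernoulli process model each contact is an independent trial with per-contact infectivity alpha, so cumulative risk over n contacts is 1 - (1 - alpha)^n. A minimal sketch (parameter values below are illustrative, not the paper's estimates):

```python
def cumulative_risk(alpha, n):
    """Probability of infection after n independent contacts, each an
    independent Bernoulli trial with per-contact infectivity alpha."""
    return 1.0 - (1.0 - alpha) ** n
```

This makes the abstract's point concrete: a tenfold reduction in infectivity (e.g. via condom use) lowers risk far more than merely halving the number of contacts, since risk is roughly linear in alpha but saturates slowly in n.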

  14. A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.

    PubMed

    Read, S; Bath, P A; Willett, P; Maheswaran, R

    2013-08-30

    The Bernoulli version of the spatial scan statistic is a well established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
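The Gumbel technique replaces the usual Monte-Carlo rank-based p-value with one read from a Gumbel distribution fitted to the replicate maxima, which extrapolates cheaply to very small p-values. A sketch using a method-of-moments fit (the fitting method and function names here are our assumptions, not necessarily SaTScan's internals):

```python
import math
import random
import statistics

def gumbel_pvalue(observed, replicates):
    """Fit a Gumbel distribution to Monte-Carlo maxima by the method of
    moments and return P(max >= observed) from its upper tail."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)
    beta = sd * math.sqrt(6) / math.pi          # Gumbel scale parameter
    mu = mean - 0.5772156649 * beta             # location (Euler-Mascheroni)
    return 1.0 - math.exp(-math.exp(-(observed - mu) / beta))
```

Unlike the rank-based p-value, which cannot fall below 1/(R+1) for R replicates, the fitted tail gives a continuous p-value; the article's point is that this works well for the Bernoulli scan statistic despite its discreteness, at the cost of slightly inflated false alarm rates at some thresholds.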

  15. Testing Bernoulli's Law

    ERIC Educational Resources Information Center

    Ivanov, Dragia; Nikolov, Stefan; Petrova, Hristina

    2014-01-01

    In this paper we present three different methods for testing Bernoulli's law that are different from the standard "tube with varying cross-section." They are all applicable to high-school level physics education, with varying levels of theoretical and experimental complexity, depending on students' skills, and may even be…

  16. Generalization of the Bernoulli ODE

    ERIC Educational Resources Information Center

    Azevedo, Douglas; Valentino, Michele C.

    2017-01-01

    In this note, we propose a generalization of the famous Bernoulli differential equation by introducing a class of nonlinear first-order ordinary differential equations (ODEs). We provide a family of solutions for this introduced class of ODEs and also we present some examples in order to illustrate the applications of our result.

  17. Energy efficiency analysis of the manipulation process by the industrial objects with the use of Bernoulli gripping devices

    NASA Astrophysics Data System (ADS)

    Savkiv, Volodymyr; Mykhailyshyn, Roman; Duchon, Frantisek; Mikhalishin, Mykhailo

    2017-11-01

    The article deals with the topical issue of reducing energy consumption for transportation of industrial objects. The energy efficiency of the process of objects manipulation with the use of the orientation optimization method while gripping with the help of different methods has been studied. The analysis of the influence of the constituent parts of inertial forces, that affect the object of manipulation, on the necessary force characteristics and energy consumption of Bernoulli gripping device has been proposed. The economic efficiency of the use of the optimal orientation of Bernoulli gripping device while transporting the object of manipulation in comparison to the transportation without re-orientation has been proved.

  18. Thinking About Bernoulli

    NASA Astrophysics Data System (ADS)

    Kamela, Martin

    2007-09-01

    One of the most fun demonstrations in a freshman mechanics class is the levitation of a ball in a steady air stream even when the jet is directed at an angle. This and other demonstrations are often used to argue for the validity of Bernoulli's principle. As cautioned by some authors [2-4], however, it is important to avoid making sweeping statements such as "high speed implies lower pressure" with respect to interpreting the popular demonstrations. In this paper I present a demonstration that can be used in conjunction with the discussion of Bernoulli's principle to encourage students to consider assumptions carefully. Specifically, it shows that a correlation of high speed with lower fluid pressure is not true in general.

  19. Vibrations of an Euler-Bernoulli beam with hysteretic damping arising from dispersed frictional microcracks

    NASA Astrophysics Data System (ADS)

    Maiti, Soumyabrata; Bandyopadhyay, Ritwik; Chatterjee, Anindya

    2018-01-01

    We study free and harmonically forced vibrations of an Euler-Bernoulli beam with rate-independent hysteretic dissipation. The dissipation follows a model proposed elsewhere for materials with randomly dispersed frictional microcracks. The virtual work of distributed dissipative moments is approximated using Gaussian quadrature, yielding a few discrete internal hysteretic states. Lagrange's equations are obtained for the modal coordinates. Differential equations for the modal coordinates and internal states are integrated together. Free vibrations decay exponentially when a single mode dominates. With multiple modes active, higher modes initially decay rapidly while lower modes decay relatively slowly. Subsequently, lower modes show their own characteristic modal damping, while small amplitude higher modes show more erratic decay. Large dissipation, for the adopted model, leads mathematically to fast and damped oscillations in the limit, unlike viscously overdamped systems. Next, harmonically forced, lightly damped responses of the beam are studied using both a slow frequency sweep and a shooting-method based search for periodic solutions along with numerical continuation. Shooting method and frequency sweep results match for large ranges of frequency. The shooting method struggles near resonances, where internal states collapse into lower dimensional behavior and Newton-Raphson iterations fail. Near the primary resonances, simple numerically-aided harmonic balance gives excellent results. Insights are also obtained into the harmonic content of secondary resonances.

  20. Classic Bernoulli's Principle Derivation and Its Working Hypotheses

    ERIC Educational Resources Information Center

    Marciotto, Edson R.

    2016-01-01

    Bernoulli's principle states that the quantity p + ρgz + ρv²/2 must be conserved in a streamtube if some conditions are matched, namely: steady and irrotational flow of an inviscid and incompressible fluid. In most physics textbooks this result is demonstrated invoking the energy conservation of a fluid material volume at two…
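Conservation of p + ρgz + ρv²/2 along a streamline can be turned into a one-line calculation (a minimal sketch, assuming the stated conditions of steady, irrotational, inviscid, incompressible flow; function name and values are illustrative):

```python
def bernoulli_pressure(p1, v1, z1, v2, z2, rho=1000.0, g=9.81):
    """Downstream pressure from p + rho*g*z + rho*v**2/2 = const
    along a streamline (steady, irrotational, inviscid, incompressible)."""
    return p1 + rho * g * (z1 - z2) + 0.5 * rho * (v1**2 - v2**2)
```

For water in a horizontal pipe whose speed rises from 1 m/s to 2 m/s, the pressure drops by 0.5 * 1000 * (4 - 1) = 1500 Pa.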

  1. Alternate Solution to Generalized Bernoulli Equations via an Integrating Factor: An Exact Differential Equation Approach

    ERIC Educational Resources Information Center

    Tisdell, C. C.

    2017-01-01

    Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem…

  2. Two Identities for the Bernoulli-Euler Numbers

    ERIC Educational Resources Information Center

    Gauthier, N.

    2008-01-01

    Two identities for the Bernoulli and for the Euler numbers are derived. These identities involve two special cases of central combinatorial numbers. The approach is based on a set of differential identities for the powers of the secant. Generalizations of the Mittag-Leffler series for the secant are introduced and used to obtain closed-form…

  3. Thinking about Bernoulli

    ERIC Educational Resources Information Center

    Kamela, Martin

    2007-01-01

    One of the most fun demonstrations in a freshman mechanics class is the levitation of a ball in a steady air stream even when the jet is directed at an angle. This and other demonstrations are often used to argue for the validity of Bernoulli's principle. As cautioned by some authors, however, it is important to avoid making sweeping statements…

  4. The Bernoulli Equation in a Moving Reference Frame

    ERIC Educational Resources Information Center

    Mungan, Carl E.

    2011-01-01

    Unlike other standard equations in introductory classical mechanics, the Bernoulli equation is not Galilean invariant. The explanation is that, in a reference frame moving with respect to constrictions or obstacles, those surfaces do work on the fluid, constituting an extra term that needs to be included in the work-energy calculation. A…

  5. [Work, momentum and fatigue in the work of Daniel Bernoulli: toward the optimization of biological fact].

    PubMed

    Fonteneau, Yannick; Viard, Jérôme

    The concept of mechanical work is inherited from the concepts of potentia absoluta and men's work, both implemented in the section IX of Daniel Bernoulli's Hydrodynamica in 1738. Nonetheless, Bernoulli did not confuse these two entities: he defined a link from genus to species between the former, which is general, and the latter, which is organic. In addition, Bernoulli clearly distinguished between vis viva and potentia absoluta (or work). Their reciprocal conversions are rarely mentioned explicitly in this book, except once, in the section X of his work, from vis viva to work, and subordinated to the mediation of a machine, in a driving forces substitution problem. His attitude evolved significantly in a text in 1753, in which work and vis viva were unambiguously connected, while the concept of potentia absoluta was reduced to that of human work, and the expression itself was abandoned. It was then accepted that work can be converted into vis viva, but the opposite is true in only one case, the intra-organic one. It is the concept of fatigue, seen as an expenditure of animal spirits, themselves conceived of as little tensed springs releasing vis viva, that allowed the conversion (never quantified and presented simply as a model) from vis viva to work. Thus, work may have ultimately appeared as a transitional state between two kinds of vis viva, of which the first is non-quantifiable. At the same time, the natural elements were discredited from any hint of profitable production. Only men and animals were able to work in the strict sense of the word. Nature, left to itself, does not work, according to Bernoulli. In spite of his wish to bring together rational mechanics and practical mechanics, one perceives in Bernoulli's work the persistence of a rarely crossed disjunction between the practical and theoretical fields.

  6. Experimental Quantum Randomness Processing Using Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Liu, Ke; Xu, Yuan; Wang, Weiting; Ma, Yuwei; Zhang, Fang; Yan, Zhaopeng; Vijay, R.; Sun, Luyan; Ma, Xiongfeng

    2016-07-01

    Coherently manipulating multipartite quantum correlations leads to remarkable advantages in quantum information processing. A fundamental question is whether such quantum advantages persist only by exploiting multipartite correlations, such as entanglement. Recently, Dale, Jennings, and Rudolph answered the question in the negative by showing that a randomness-processing task, the quantum Bernoulli factory, is strictly more powerful with quantum coherence than with classical mechanics. In this Letter, focusing on the same scenario, we propose a theoretical protocol that is classically impossible but can be implemented solely using quantum coherence without entanglement. We demonstrate the protocol by exploiting the high-fidelity quantum state preparation and measurement with a superconducting qubit in the circuit quantum electrodynamics architecture and a nearly quantum-limited parametric amplifier. Our experiment shows the advantage of using quantum coherence of a single qubit for information processing even when multipartite correlation is not present.
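For readers unfamiliar with Bernoulli factories: the task is to process flips of a coin with unknown bias p into a new coin with bias f(p). The classical flavor can be illustrated with von Neumann's trick, which produces an exactly fair coin (f(p) = 1/2) from any biased coin; this sketch is purely illustrative background, not the quantum protocol of the Letter:

```python
import random

def pcoin(p, rng):
    """One flip of a Bernoulli(p) coin."""
    return rng.random() < p

def von_neumann(p, rng):
    """Build an exactly fair coin from a biased p-coin: flip twice until
    the two flips differ, then output the first flip. P(True) = 1/2 for
    any 0 < p < 1, since (heads, tails) and (tails, heads) are equally
    likely."""
    while True:
        a, b = pcoin(p, rng), pcoin(p, rng)
        if a != b:
            return a
```

The quantum Bernoulli factory result is that certain target functions f(p) are constructible with quantum coherence but provably impossible with any such classical scheme.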

  7. Analysis of axial compressive loaded beam under random support excitations

    NASA Astrophysics Data System (ADS)

    Xiao, Wensheng; Wang, Fengde; Liu, Jian

    2017-12-01

    An analytical procedure to investigate the response spectrum of a uniform Bernoulli-Euler beam with axial compressive load subjected to random support excitations is implemented based on the Mindlin-Goodman method and the mode superposition method in the frequency domain. The random response spectrum of the simply supported beam subjected to white noise excitation and to Pierson-Moskowitz spectrum excitation is investigated, and the characteristics of the response spectrum are further explored. Moreover, the effect of axial compressive load is studied and a method to determine the axial load is proposed. The research results show that the response spectrum mainly consists of the beam's additional displacement response spectrum when the excitation is white noise; however, the quasi-static displacement response spectrum is the main component when the excitation is the Pierson-Moskowitz spectrum. Under white noise excitation, the amplitude of the power spectral density function decreased as the axial compressive load increased, while the frequency band of the vibration response spectrum increased with the increase of axial compressive load.

  8. Educational network comparative analysis of small groups: Short- and long-term communications

    NASA Astrophysics Data System (ADS)

    Berg, D. B.; Zvereva, O. M.; Nazarova, Yu. Yu.; Chepurov, E. G.; Kokovin, A. V.; Ranyuk, S. V.

    2017-11-01

    The present study is devoted to the discussion of small-group communication network structures. These communications were observed in student groups, where actors were united by a regular educational activity. The comparative analysis of networks of short-term (1 hour) and long-term (4 weeks) communications was based on seven structural parameters and consisted of two stages. At the first stage, differences between the network graphs were examined, and the corresponding random Bernoulli graphs were built. At the second stage, the revealed differences were compared. Calculations were performed using the UCINET software framework. It was found that the networks of long-term and short-term communications are quite different: the structure of a short-term communication network is close to a random one, whereas most of the long-term communication network parameters differ from the corresponding random ones by more than 30%. This difference can be explained by the strong "noisiness" of a short-term communication network and the lack of stable social structure in it.
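The random baseline used at the first stage is a Bernoulli graph: every possible tie is an independent Bernoulli trial, typically with p matched to the observed network's density. A minimal sketch (function names are ours; the study used UCINET):

```python
import random

def bernoulli_graph(n, p, seed=0):
    """Bernoulli (Erdos-Renyi G(n, p)) random graph: each of the
    n*(n-1)/2 possible undirected edges is an independent Bernoulli(p)
    trial. Returns the edge set as (i, j) pairs with i < j."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def density(edges, n):
    """Fraction of possible undirected edges that are present."""
    return 2 * len(edges) / (n * (n - 1))
```

Comparing structural parameters of the observed network against such a density-matched random graph is what lets the authors say a short-term network is "close to random" while the long-term one is not.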

  9. Euler and His Contribution Number Theory

    ERIC Educational Resources Information Center

    Len, Amy; Scott, Paul

    2004-01-01

    Born in 1707, Leonhard Euler was the son of a Protestant minister from the vicinity of Basel, Switzerland. With the aim of pursuing a career in theology, Euler entered the University of Basel at the age of thirteen, where he was tutored in mathematics by Johann Bernoulli (of the famous Bernoulli family of mathematicians). He developed an interest…

  10. Flawed Applications of Bernoulli's Principle

    ERIC Educational Resources Information Center

    Koumaras, Panagiotis; Primerakis, Georgios

    2018-01-01

    One of the most popular demonstration experiments pertaining to Bernoulli's principle is the production of a water spray by using a vertical plastic straw immersed in a glass of water and a horizontal straw to blow air towards the top edge of the vertical one. A more general version of this phenomenon, appearing also in school physics problems, is…

  11. The solution of transcendental equations

    NASA Technical Reports Server (NTRS)

    Agrawal, K. M.; Outlaw, R.

    1973-01-01

    Some of the existing methods for globally approximating the roots of transcendental equations, namely Graeffe's method, are studied. Summation of the reciprocated roots, the Whittaker-Bernoulli method, and the extension of Bernoulli's method via Koenig's theorem are presented. Aitken's delta-squared process is used to accelerate the convergence. Finally, the suitability of these methods is discussed in various cases.
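Bernoulli's method itself is short enough to sketch: for a monic polynomial, run the linear recurrence whose characteristic polynomial is the given one; ratios of successive terms converge to the root of largest magnitude (assuming a single dominant root and generic starting values). This is a textbook version, not the report's implementation:

```python
def bernoulli_method(coeffs, iters=100):
    """Approximate the dominant root of the monic polynomial
    x**k + c1*x**(k-1) + ... + ck, given coeffs = [c1, ..., ck],
    via Bernoulli's method: iterate the recurrence
    s_n = -(c1*s_(n-1) + ... + ck*s_(n-k)); the ratio s_n / s_(n-1)
    converges to the root of largest modulus."""
    k = len(coeffs)
    s = [0.0] * (k - 1) + [1.0]          # standard starting values
    for _ in range(iters):
        s.append(-sum(c * s[-i - 1] for i, c in enumerate(coeffs)))
    return s[-1] / s[-2]
```

For x**2 - 5x + 6 = (x - 2)(x - 3) this converges to 3; Aitken's delta-squared process, mentioned in the abstract, accelerates exactly this kind of linearly convergent ratio sequence.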

  12. Monitoring surgical and medical outcomes: the Bernoulli cumulative SUM chart. A novel application to assess clinical interventions

    PubMed Central

    Leandro, G; Rolando, N; Gallus, G; Rolles, K; Burroughs, A

    2005-01-01

    Background: Monitoring clinical interventions is an increasing requirement in current clinical practice. The standard CUSUM (cumulative sum) charts are used for this purpose. However, they are difficult to use in terms of identifying the point at which outcomes begin to fall outside recommended limits. Objective: To assess the Bernoulli CUSUM chart that permits not only a 100% inspection rate, but also the setting of average expected outcomes, maximum deviations from these, and false positive rates for the alarm signal to trigger. Methods: As a working example this study used 674 consecutive first liver transplant recipients. The expected one-year mortality was set at 24%, the European Liver Transplant Registry average. A standard CUSUM was compared with the Bernoulli CUSUM: the control value for mortality was therefore 24%, the maximum accepted mortality 30%, and the average number of observations to signal was 500, that is, the likelihood of a false positive alarm was 1:500. Results: The standard CUSUM showed an initial descending curve (nadir at patient 215) then progressively ascended, indicating better performance. The Bernoulli CUSUM gave three alarm signals initially, with easily recognised breaks in the curve. There were no alarm signals after patient 143, indicating satisfactory performance within the criteria set. Conclusions: The Bernoulli CUSUM is more easily interpretable graphically and is more suitable for monitoring outcomes than the standard CUSUM chart. It only requires three parameters to be set to monitor any clinical intervention: the average expected outcome, the maximum deviation from this, and the rate of false positive alarm triggers. PMID:16210461
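The third parameter, the 1:500 false-alarm rate, corresponds to an in-control average run length (ARL0) of 500 and is what fixes the chart's decision limit. One way to see the link is to estimate ARL0 by Monte Carlo for the paper's rates (24% control, 30% maximum accepted); the function and the trial limits below are illustrative, not from the study:

```python
import math
import random

def arl0(p0, p1, h, n_runs=200, max_n=20000, seed=0):
    """Monte-Carlo estimate of the in-control average run length of a
    Bernoulli CUSUM: mean number of patients until a false alarm when
    the true event rate equals the acceptable rate p0.

    p0: acceptable (control) event rate; p1: maximum accepted rate;
    h:  decision limit to be tuned until arl0 hits the target (e.g. 500).
    """
    rng = random.Random(seed)
    w1 = math.log(p1 / p0)                  # weight for an event
    w0 = math.log((1 - p1) / (1 - p0))      # weight for a non-event
    total = 0
    for _ in range(n_runs):
        s, t = 0.0, 0
        while s < h and t < max_n:          # max_n caps very long runs
            t += 1
            s = max(0.0, s + (w1 if rng.random() < p0 else w0))
        total += t
    return total / n_runs
```

Raising h makes false alarms rarer (larger ARL0) but detection slower, so in practice h is tuned until the estimated ARL0 matches the chosen false-alarm rate.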

  13. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
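The "Bernoulli-distributed white sequence" model of random packet losses amounts to multiplying each transmitted sample by an i.i.d. 0/1 gate. A minimal sketch (the zero-on-loss convention and names are our assumptions for illustration):

```python
import random

def transmit(signal, loss_prob, seed=0):
    """Model random packet dropouts with an i.i.d. Bernoulli sequence:
    gamma_k = 0 (packet lost) with probability loss_prob, else 1.
    Lost samples are replaced by zero (a common modeling convention).
    Returns (received_signal, gamma_sequence)."""
    rng = random.Random(seed)
    gamma = [0 if rng.random() < loss_prob else 1 for _ in signal]
    return [g * u for g, u in zip(gamma, signal)], gamma
```

Consensus criteria of the kind derived in the paper then bound how large loss_prob may be, jointly with the sampling interval, before stability is lost.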

  14. Complementary Curves of Descent

    DTIC Science & Technology

    2012-11-16

    A common mechanics demonstration consists of racing cars or balls down tracks of various shapes and qualitatively or quantitatively measuring the… problem), which is self complementary. A striking example is a straight wire whose complement is a lemniscate of Bernoulli. Alternatively, the wires can be tracks down which round objects undergo a rolling race. The level of presentation is…

  15. A Ritz approach for the static analysis of planar pantographic structures modeled with nonlinear Euler-Bernoulli beams

    NASA Astrophysics Data System (ADS)

    Andreaus, Ugo; Spagnuolo, Mario; Lekszycki, Tomasz; Eugster, Simon R.

    2018-04-01

    We present a finite element discrete model for pantographic lattices, based on a continuous Euler-Bernoulli beam for modeling the fibers composing the pantographic sheet. This model takes into account large displacements, rotations and deformations; the Euler-Bernoulli beam is described by using nonlinear interpolation functions, a Green-Lagrange strain for elongation and a curvature depending on elongation. On the basis of the introduced discrete model of a pantographic lattice, we perform some numerical simulations. We then compare the obtained results to an experimental BIAS extension test on a pantograph printed with polyamide PA2200. The pantographic structures involved in the numerical as well as in the experimental investigations are not proper fabrics: they are composed of just a few fibers, so as to theoretically allow the use of the Euler-Bernoulli beam theory in the description of the fibers. We compare the experiments to numerical simulations in which we allow the fibers to slide elastically with respect to one another at the interconnecting pivots. The result is very good agreement between the numerical simulation, based on the introduced model, and the experimental measures.

  16. A randomised approach for NARX model identification based on a multivariate Bernoulli distribution

    NASA Astrophysics Data System (ADS)

    Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.

    2017-04-01

    The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated to the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned by the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.
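The probabilistic formulation can be sketched in miniature: each candidate regressor carries an inclusion probability, model structures are sampled from those Bernoulli variables, and the probabilities are nudged toward well-performing structures. The toy update rule below is a deliberate simplification of ours, not the actual RaMSS update, and shows only the independent-Bernoulli baseline that the multivariate extension improves on:

```python
import random

def sample_model(probs, rng):
    """Draw a model structure: regressor j is included as an independent
    Bernoulli(probs[j]) trial (the basic RaMSS-style distribution; the
    paper's extension conditions each draw on the others)."""
    return [rng.random() < p for p in probs]

def update_probs(probs, samples, scores, step=0.1):
    """Toy update: move each inclusion probability toward the structure
    of the best-scoring sampled model, clipped to [0, 1]."""
    best = samples[max(range(len(samples)), key=lambda i: scores[i])]
    return [min(1.0, max(0.0, p + step * (b - p)))
            for p, b in zip(probs, best)]
```

Iterating sample/evaluate/update drives the distribution toward a single structure; the paper's multivariate Bernoulli distribution additionally captures the correlations between terms that the independent sampler above ignores.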

  17. Bernoulli Suction Effect on Soap Bubble Blowing?

    NASA Astrophysics Data System (ADS)

    Davidson, John; Ryu, Sangjin

    2015-11-01

    As a model system for a thin-film bubble with two gas-liquid interfaces, we experimentally investigated the pinch-off of soap bubble blowing. Using the lab-built bubble blower and high-speed videography, we have found that the scaling law exponent of soap bubble pinch-off is 2/3, which is similar to that of a soap film bridge. Because air flowed through the decreasing neck of the soap film tube, we studied a possible Bernoulli suction effect on soap bubble pinch-off by evaluating the Reynolds number of the airflow. Image processing was utilized to calculate the approximate volume of the growing soap film tube and the volume flow rate of the airflow, and the Reynolds number was estimated to be 800-3200. This result suggests that soap bubbling may involve the Bernoulli suction effect.

  18. On the Lamb vector divergence as a momentum field diagnostic employed in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Hamman, Curtis W.; Kirby, Robert M.; Klewicki, Joseph C.

    2006-11-01

    Vorticity, enstrophy, helicity, and other derived field variables provide invaluable information about the kinematics and dynamics of fluids. However, whether or not derived field variables exist that intrinsically identify spatially localized motions having a distinct capacity to affect a time rate of change of linear momentum is seldom addressed in the literature. The purpose of the present study is to illustrate the unique attributes of the divergence of the Lamb vector in order to qualify its potential for characterizing such spatially localized motions. Toward this aim, we describe the mathematical properties, near-wall behavior, and scaling characteristics of the divergence of the Lamb vector for turbulent channel flow. When scaled by inner variables, the mean divergence of the Lamb vector merges to a single curve in the inner layer, and the fluctuating quantities exhibit a strong correlation with the Bernoulli function throughout much of the inner layer.

  19. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

    The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
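The Bayesian Learning Automaton resolves this exploration-exploitation balance by keeping a Beta posterior over each arm's reward probability, sampling once from each posterior, and pulling the arm with the larger draw. A minimal Beta-Bernoulli sketch in that spirit (our simplified rendering, with illustrative parameters, rather than Granmo's exact formulation):

```python
import random

def pick_arm(state, rng):
    """Sample each arm's Beta(alpha, beta) posterior once and choose the
    arm with the largest draw (Thompson-style selection)."""
    draws = [rng.betavariate(a, b) for a, b in state]
    return max(range(len(draws)), key=lambda i: draws[i])

def play(true_probs, n=5000, seed=0):
    """Run the automaton on a Bernoulli bandit; returns pull counts."""
    rng = random.Random(seed)
    state = [[1, 1] for _ in true_probs]   # Beta(1, 1) uniform priors
    pulls = [0] * len(true_probs)
    for _ in range(n):
        arm = pick_arm(state, rng)
        pulls[arm] += 1
        if rng.random() < true_probs[arm]:
            state[arm][0] += 1             # reward: increment alpha
        else:
            state[arm][1] += 1             # penalty: increment beta
    return pulls
```

As the posteriors sharpen, the inferior arm's draws rarely win, so pulls concentrate on the better arm without any explicit exploration schedule.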

  20. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.

  1. Shape optimisation of an underwater Bernoulli gripper

    NASA Astrophysics Data System (ADS)

    Flint, Tim; Sellier, Mathieu

    2015-11-01

    In this work, we are interested in maximising the suction produced by an underwater Bernoulli gripper. Bernoulli grippers work by exploiting low pressure regions caused by the acceleration of a working fluid through a narrow channel, between the gripper and a surface, to provide a suction force. This mechanism allows for non-contact adhesion to various surfaces and may be used to hold a robot to the hull of a ship while it inspects welds, for example. A Bernoulli type pressure analysis was used to model the system with a Darcy friction factor approximation to include the effects of frictional losses. The analysis involved a constrained optimisation in order to avoid cavitation within the mechanism, which would result in decreased performance and damage to surfaces. A sensitivity based method and gradient descent approach was used to find the optimum shape of a discretised surface. The model's accuracy has been quantified against finite volume computational fluid dynamics simulation (ANSYS CFX) using the k-ω SST turbulence model. Preliminary results indicate significant improvement in suction force when compared to a simple geometry by retaining a pressure just above that at which cavitation would occur over as much surface area as possible. Doctoral candidate in the Mechanical Engineering Department of the University of Canterbury, New Zealand.

  2. Bending, longitudinal and torsional wave transmission on Euler-Bernoulli and Timoshenko beams with high propagation losses.

    PubMed

    Wang, X; Hopkins, C

    2016-10-01

    Advanced Statistical Energy Analysis (ASEA) is used to predict vibration transmission across coupled beams which support multiple wave types up to high frequencies where Timoshenko theory is valid. Bending-longitudinal and bending-torsional models are considered for an L-junction and rectangular beam frame. Comparisons are made with measurements, Finite Element Methods (FEM) and Statistical Energy Analysis (SEA). When beams support at least two local modes for each wave type in a frequency band and the modal overlap factor is at least 0.1, measurements and FEM have relatively smooth curves. Agreement between measurements, FEM, and ASEA demonstrates that ASEA is able to predict high propagation losses which are not accounted for with SEA. These propagation losses tend to become more important at high frequencies with relatively high internal loss factors and can occur when there is more than one wave type. At such high frequencies, Timoshenko theory, rather than Euler-Bernoulli theory, is often required. Timoshenko theory is incorporated in ASEA and SEA using wave theory transmission coefficients derived assuming Euler-Bernoulli theory, but using Timoshenko group velocity when calculating coupling loss factors. The changeover between theories is appropriate above the frequency where there is a 26% difference between Euler-Bernoulli and Timoshenko group velocities.

  3. Input reconstruction for networked control systems subject to deception attacks and data losses on control signals

    NASA Astrophysics Data System (ADS)

    Keller, J. Y.; Chabir, K.; Sauter, D.

    2016-03-01

    State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.
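    The Bernoulli-arrival data-loss model used in this record can be illustrated with a scalar intermittent-observation Kalman filter (a minimal sketch with invented system parameters, not the authors' augmented-state filter for switching disturbances): the prediction step runs every sample, while the measurement update runs only when the Bernoulli arrival variable equals one.

```python
import random

def kalman_mse(a=0.9, q=0.01, r=0.04, lam=0.8, n=500, seed=1):
    """Scalar Kalman filtering with Bernoulli-distributed packet arrivals.

    gamma_k ~ Bernoulli(lam) models the arrival of the measurement
    packet: the update step runs only when gamma_k = 1, otherwise the
    filter merely propagates its prediction. Returns the empirical
    mean squared estimation error.
    """
    rng = random.Random(seed)
    x, xhat, p = 0.0, 0.0, 1.0
    sq_err = 0.0
    for _ in range(n):
        # true state and noisy measurement
        x = a * x + rng.gauss(0.0, q ** 0.5)
        y = x + rng.gauss(0.0, r ** 0.5)
        gamma = 1 if rng.random() < lam else 0   # Bernoulli arrival
        # time update (always performed)
        xhat = a * xhat
        p = a * a * p + q
        # measurement update (only when the packet arrives)
        if gamma:
            k = p / (p + r)
            xhat += k * (y - xhat)
            p *= 1.0 - k
        sq_err += (x - xhat) ** 2
    return sq_err / n

mse = kalman_mse()
```

    Lower arrival rates lam leave the filter propagating blind more often, so the error covariance, and with it the empirical error, grows.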

  4. Sampled-data H∞ filtering for Markovian jump singularly perturbed systems with time-varying delay and missing measurements

    NASA Astrophysics Data System (ADS)

    Yan, Yifang; Yang, Chunyu; Ma, Xiaoping; Zhou, Linna

    2018-02-01

    In this paper, the sampled-data H∞ filtering problem is considered for Markovian jump singularly perturbed systems with time-varying delay and missing measurements. The sampled-data system is represented by a time-delay system, and the missing measurement phenomenon is described by an independent Bernoulli random process. By constructing an ɛ-dependent stochastic Lyapunov-Krasovskii functional, delay-dependent sufficient conditions are derived such that the filter error system satisfies the prescribed H∞ performance for all possible missing measurements. Then, an H∞ filter design method is proposed in terms of linear matrix inequalities. Finally, numerical examples are given to illustrate the feasibility and advantages of the obtained results.

  5. Robust reliable sampled-data control for switched systems with application to flight control

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.

    2016-11-01

    This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm bounded uncertainties. The main aim of this paper is to obtain the reliable robust sampled-data control design which involves random time delay with an appropriate gain control matrix for achieving the robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying which obeys certain mutually uncorrelated Bernoulli distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average-dwell time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in derivation of stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of reliable robust sampled-data control in terms of solution to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.

  6. Noise, chaos, and (ɛ, τ)-entropy per unit time

    NASA Astrophysics Data System (ADS)

    Gaspard, Pierre; Wang, Xiao-Jing

    1993-12-01

    The degree of dynamical randomness of different time processes is characterized in terms of the (ε, τ)-entropy per unit time. The (ε, τ)-entropy is the amount of information generated per unit time, at different scales τ of time and ε of the observables. This quantity generalizes the Kolmogorov-Sinai entropy per unit time from deterministic chaotic processes, to stochastic processes such as fluctuations in mesoscopic physico-chemical phenomena or strong turbulence in macroscopic spacetime dynamics. The random processes that are characterized include chaotic systems, Bernoulli and Markov chains, Poisson and birth-and-death processes, Ornstein-Uhlenbeck and Yaglom noises, fractional Brownian motions, different regimes of hydrodynamical turbulence, and the Lorentz-Boltzmann process of nonequilibrium statistical mechanics. We also extend the (ε, τ)-entropy to spacetime processes like cellular automata, Conway's game of life, lattice gas automata, coupled maps, spacetime chaos in partial differential equations, as well as the ideal, the Lorentz, and the hard sphere gases. Through these examples it is demonstrated that the (ε, τ)-entropy provides a unified quantitative measure of dynamical randomness to both chaos and noises, and a method to detect transitions between dynamical states of different degrees of randomness as a parameter of the system is varied.
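    For the simplest of the processes listed in this record, a Bernoulli chain, the entropy per unit time reduces to the Shannon entropy of a single symbol divided by the sampling time τ; a one-line check (in nats):

```python
from math import log

def bernoulli_entropy_rate(p, tau=1.0):
    """Entropy per unit time of a Bernoulli(p) chain (in nats).

    One symbol is emitted every tau time units and carries the Shannon
    entropy of a Bernoulli trial, so the Kolmogorov-Sinai entropy per
    unit time is h = (-p*ln(p) - (1-p)*ln(1-p)) / tau.
    """
    if p in (0.0, 1.0):
        return 0.0
    return (-p * log(p) - (1.0 - p) * log(1.0 - p)) / tau

h_fair = bernoulli_entropy_rate(0.5)   # ln(2) nats per unit time
```

    Deterministic chaotic systems have finite entropy per unit time of this kind, whereas continuous-state noises have an (ε, τ)-entropy that diverges as the observation scale ε shrinks, which is what lets the quantity discriminate between the two.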

  7. Modeling and Control of Intelligent Flexible Structures

    DTIC Science & Technology

    1994-03-26

    can be approximated as a simply supported beam in transverse vibration. Assuming that the Euler-Bernoulli beam assumptions hold, linear equations of...The assumptions made during the derivation are that the element can be modeled as an Euler-Bernoulli beam, that the cross-section is symmetric, and...parameters and input matrices. The closed loop system, equation (7), is stable when the...and output gain matrices...for

  8. Flawed Applications of Bernoulli's Principle

    NASA Astrophysics Data System (ADS)

    Koumaras, Panagiotis; Primerakis, Georgios

    2018-04-01

    One of the most popular demonstration experiments pertaining to Bernoulli's principle is the production of a water spray by using a vertical plastic straw immersed in a glass of water and a horizontal straw to blow air towards the top edge of the vertical one. A more general version of this phenomenon, appearing also in school physics problems, is the determination of the rise of the water level h in the straw (see Fig. 1).
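    The textbook calculation such problems expect (the very application whose validity the article examines) equates the Bernoulli pressure drop above the straw with the hydrostatic head of the raised water column; a quick numeric sketch with assumed air and water densities:

```python
def water_rise(v_air, rho_air=1.2, rho_water=1000.0, g=9.81):
    """Estimated rise h of the water column in the vertical straw.

    The textbook argument (whose validity the article examines): air
    blown at speed v over the top of the straw lowers the pressure
    there by dp = 0.5 * rho_air * v**2, which supports a water column
    of height h = dp / (rho_water * g).
    """
    dp = 0.5 * rho_air * v_air ** 2
    return dp / (rho_water * g)

# blowing at 10 m/s raises the water by a few millimetres
h = water_rise(10.0)
```

    The quadratic dependence on air speed is what the naive Bernoulli argument predicts; the article's point is that applying Bernoulli's equation across the free air jet in this way is questionable.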

  9. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.

  10. Symplectic analysis of vertical random vibration for coupled vehicle track systems

    NASA Astrophysics Data System (ADS)

    Lu, F.; Kennedy, D.; Williams, F. W.; Lin, J. H.

    2008-10-01

    A computational model for random vibration analysis of vehicle-track systems is proposed and solutions use the pseudo excitation method (PEM) and the symplectic method. The vehicle is modelled as a mass, spring and damping system with 10 degrees of freedom (dofs) which consist of vertical and pitching motion for the vehicle body and its two bogies and vertical motion for the four wheelsets. The track is treated as an infinite Bernoulli-Euler beam connected to sleepers and hence to ballast and is regarded as a periodic structure. Linear springs couple the vehicle and the track. Hence, the coupled vehicle-track system has only 26 dofs. A fixed excitation model is used, i.e. the vehicle does not move along the track but instead the track irregularity profile moves backwards at the vehicle velocity. This irregularity is assumed to be a stationary random process. Random vibration theory is used to obtain the response power spectral densities (PSDs), by using PEM to transform this random multiexcitation problem into a deterministic harmonic excitation one and then applying symplectic solution methodology. Numerical results for an example include verification of the proposed method by comparing with finite element method (FEM) results; comparison between the present model and the traditional rigid track model; and discussion of the influences of track damping and vehicle velocity.

  11. Local Stretching Theories

    DTIC Science & Technology

    2010-06-24

    diffusivity of the scalar. (If the scalar is heat, then the Schmidt number becomes the Prandtl number.) Momentum diffuses significantly faster than the...derive the Cramér function explicitly in the simple case where the xi have a Bernoulli distribution, though the general formula for S may be derived by...an analogous procedure. 5 Large deviation CLT for the Bernoulli distribution Let xi have the PDF of a fair coin, p(xi) = (1/2)δ(xi + 1) + (1/2)δ(xi − 1)

  12. The general solution to the classical problem of finite Euler Bernoulli beam

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Amba-Rao, C. L.

    1977-01-01

    An analytical solution is obtained for the problem of free and forced vibrations of a finite Euler Bernoulli beam with arbitrary (partially fixed) boundary conditions. The effects of linear viscous damping, Winkler foundation, constant axial tension, a concentrated mass, and an arbitrary forcing function are included in the analysis. No restriction is placed on the values of the parameters involved, and the solution presented here contains all cited previous solutions as special cases.
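    As a point of comparison, the special case with no damping, foundation, axial tension, or concentrated mass has classical closed-form modes; a sketch with hypothetical steel-beam numbers (not a calculation from the paper):

```python
from math import pi, sqrt

def natural_frequencies(E, I, rho, A, L, n_modes=3):
    """Undamped natural frequencies (rad/s) of a simply supported
    Euler-Bernoulli beam: omega_n = (n*pi/L)**2 * sqrt(E*I/(rho*A)).
    This is the classical special case of the paper's general problem
    (no damping, foundation, axial tension or concentrated mass).
    """
    c = sqrt(E * I / (rho * A))
    return [(n * pi / L) ** 2 * c for n in range(1, n_modes + 1)]

# hypothetical steel beam (SI units): E = 210 GPa, I = 1e-6 m^4,
# density 7850 kg/m^3, cross-section 0.01 m^2, length 2 m
freqs = natural_frequencies(210e9, 1e-6, 7850.0, 0.01, 2.0)
```

    The frequencies scale as n², so the second and third modes sit at 4 and 9 times the fundamental; the paper's added effects (tension, foundation, attached mass, partial fixity) shift these values.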

  13. Analytical Modeling for the Bending Resonant Frequency of Multilayered Microresonators with Variable Cross-Section

    PubMed Central

    Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; Plascencia-Mora, Hector; Rodríguez-Morales, Ángel L.; Lu, Jian

    2011-01-01

    Multilayered microresonators commonly use sensitive coating or piezoelectric layers for detection of mass and gas. Most of these microresonators have a variable cross-section that complicates the prediction of their fundamental resonant frequency (generally of the bending mode) through conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature. Both microresonators are composed by layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator’s layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application. PMID:22164071

  14. State dependent arrival in bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations and threshold

    NASA Astrophysics Data System (ADS)

    Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.

    2017-11-01

    The objective of this paper is to analyse state-dependent arrival in a bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations, a threshold and a constant retrial policy. Primary customers arrive at the system in bulk with different arrival rates λa and λb. If arriving customers find the server busy, the entire batch joins the orbit. Customers from the orbit request service one by one with constant retrial rate γ. If, on the other hand, arriving customers find the server idle, they are served in batches according to the general bulk service rule. After service completion, customers may request service again with probability δ (feedback) or leave the system with probability 1 - δ. If the orbit size is zero at a service completion epoch, the server leaves for multiple vacations. The server continues the vacations until the orbit size reaches the value 'N' (N > b). If the orbit size is 'N' at a vacation completion, the server becomes ready to serve customers from the main pool or from the orbit. For the designed queueing model, the probability generating function of the queue size at an arbitrary time is obtained by using the supplementary variable technique. Various performance measures are derived, with suitable numerical illustrations.

  15. Recent Selected Papers of Northwestern Polytechnical University in Two Parts. Part I. 1979.

    DTIC Science & Technology

    1981-08-20

    pressure coefficient is calculated by the exact Bernoulli equation. Two numerical examples are included, and the results agree fairly well with known...Bernoulli equation is applied to calculate the pressure coefficient:...In the above expression, all the derivatives are calculated by...(6) The real part on the unit circle in equation (2) is used. Making use of equations (5) and (6), both sides of equation (2) are expanded

  16. The sampled-data consensus of multi-agent systems with probabilistic time-varying delays and packet losses

    NASA Astrophysics Data System (ADS)

    Sui, Xin; Yang, Yongqing; Xu, Xianyun; Zhang, Shuai; Zhang, Lingzhong

    2018-02-01

    This paper investigates the consensus of multi-agent systems with probabilistic time-varying delays and packet losses via sampled-data control. On the one hand, a Bernoulli-distributed white sequence is employed to model random packet losses among agents. On the other hand, a switched system is used to describe packet dropouts in a deterministic way. Based on the special property of the Laplacian matrix, the consensus problem can be converted into a stabilization problem of a switched system with lower dimensions. Some mean square consensus criteria are derived in terms of constructing an appropriate Lyapunov function and using linear matrix inequalities (LMIs). Finally, two numerical examples are given to show the effectiveness of the proposed method.
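    The Bernoulli packet-loss model of this record can be mimicked in a toy averaging-consensus simulation (a sketch with an invented four-agent ring topology and hypothetical gain and loss probability, not the paper's sampled-data controller):

```python
import random

def consensus_step(x, neighbors, gain, p_loss, rng):
    """One consensus iteration; each neighbor's state packet is lost
    independently with probability p_loss (Bernoulli loss model)."""
    new_x = list(x)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            if rng.random() >= p_loss:            # packet arrived
                new_x[i] += gain * (x[j] - x[i])
    return new_x

def run_consensus(x0, neighbors, gain=0.2, p_loss=0.3, steps=300, seed=7):
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        x = consensus_step(x, neighbors, gain, p_loss, rng)
    return x

# hypothetical four-agent ring topology and initial states
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
x_final = run_consensus([0.0, 1.0, 2.0, 3.0], ring)
```

    Each update is a convex combination of neighboring states (the gain times the degree stays below one), so the states remain in the hull of the initial values and the disagreement contracts despite the random losses.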

  17. Social capital calculations in economic systems: Experimental study

    NASA Astrophysics Data System (ADS)

    Chepurov, E. G.; Berg, D. B.; Zvereva, O. M.; Nazarova, Yu. Yu.; Chekmarev, I. V.

    2017-11-01

    The paper describes a study of social capital in a system whose actors are engaged in economic activity. The focus is on the structural parameters of the communications (transactions) between the actors. Comparing the structure of the transaction network graph with that of a random Bernoulli graph of the same dimension and density reveals specific structural features of the economic system under study. The structural analysis is based on the SNA (Social Network Analysis) methodology. It is shown that the structural parameter values of the graph formed by agent relationship links can characterize different aspects of the social capital structure. The research argues that it is useful to distinguish between each agent's social capital and the social capital of the whole system.
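    The baseline used in such SNA comparisons, a Bernoulli (Erdős–Rényi) graph matched in size and density to the observed network, is easy to generate; a sketch with hypothetical network numbers:

```python
import random
from itertools import combinations

def bernoulli_graph(n, p, seed=42):
    """Bernoulli (Erdos-Renyi G(n, p)) random graph: each of the
    n*(n-1)/2 possible undirected edges is present independently
    with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i, j in combinations(range(n), 2) if rng.random() < p}

def density(edges, n):
    """Graph density: realized edges over possible edges."""
    return 2 * len(edges) / (n * (n - 1))

# hypothetical observed transaction network: 50 actors, 180 links;
# build a Bernoulli graph matched to its density
n, observed_edges = 50, 180
p = 2 * observed_edges / (n * (n - 1))
g = bernoulli_graph(n, p)
```

    Structural statistics of the observed network (clustering, centralization, component sizes) can then be contrasted with those of the matched Bernoulli graph: deviations from the random baseline are the "specific structural features" the paper refers to.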

  18. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

    PubMed Central

    Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421

  19. Normal-Gamma-Bernoulli Peak Detection for Analysis of Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry Data.

    PubMed

    Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang

    2017-01-01

    Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much increased separation power for analysis of complex samples and thus is increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide applications of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized by gamma distribution and a new peak detection algorithm using the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches, which are fast Fourier transform (FFT), the first-order and the second-order delta methods (D1 and D2), are introduced. The applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs the best in terms of both computational expense and peak detection performance.

  20. A new fractional nonlocal model and its application in free vibration of Timoshenko and Euler-Bernoulli beams

    NASA Astrophysics Data System (ADS)

    Rahimi, Zaher; Sumelka, Wojciech; Yang, Xiao-Jun

    2017-11-01

    The application of fractional calculus makes fractional models (FMs) more flexible than integer models, inasmuch as they encompass both integer and non-integer operators. In other words, FMs exploit more of the potential of mathematics for modeling physical phenomena, since both integer and fractional operators are available. In the present work, a new fractional nonlocal model is proposed; it has a simple form and, owing to the simple form of its numerical solutions, can be applied to a variety of problems. The model is then used to derive the governing equations of motion for the Timoshenko beam theory (TBT) and the Euler-Bernoulli beam theory (EBT). Next, free vibration of simply supported (S-S) Timoshenko and Euler-Bernoulli beams is investigated. The Galerkin weighted residual method is used to solve the nonlinear governing equations.

  1. Forecasting of foreign exchange rates of Taiwan’s major trading partners by novel nonlinear Grey Bernoulli model NGBM(1, 1)

    NASA Astrophysics Data System (ADS)

    Chen, Chun-I.; Chen, Hong Long; Chen, Shuo-Pei

    2008-08-01

    The traditional Grey Model is easy to understand and simple to calculate, with satisfactory accuracy, but it lacks the flexibility to be adjusted for higher forecasting precision. This research studies the feasibility and effectiveness of a novel Grey model that incorporates the Bernoulli differential equation from the theory of ordinary differential equations. The authors name this newly proposed model the Nonlinear Grey Bernoulli Model (NGBM). The NGBM is a nonlinear differential equation with power index n. By controlling n, the curvature of the solution curve can be adjusted to fit the result of the one-time accumulated generating operation (1-AGO) of the raw data. One extreme case from a Grey system textbook is studied with the NGBM, and two published articles are chosen for practical tests. The results prove that the novel NGBM is feasible and efficient. Finally, the NGBM is used to forecast the 2005 foreign exchange rates of twelve major trading partners of Taiwan, including Taiwan.
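    The NGBM(1,1) construction described here, a Bernoulli-type whitening equation dx1/dt + a·x1 = b·x1^n fitted to the 1-AGO series, can be sketched in a few lines (a minimal implementation following the standard grey-model recipe; the example series, and the choice n = 0 that reduces the NGBM to the classical GM(1,1), are illustrative assumptions):

```python
from math import exp

def ngbm_fit(x0, n):
    """Fit the nonlinear grey Bernoulli model NGBM(1, 1) (requires n != 1).

    Whitening equation: dx1/dt + a*x1 = b*x1**n, where x1 is the 1-AGO
    (one-time accumulated generating operation) of the raw series x0.
    a and b are estimated by least squares on the discretized form
    x0(k) + a*z1(k) = b*z1(k)**n, with background values
    z1(k) = (x1(k) + x1(k-1)) / 2.
    """
    m = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(m)]              # 1-AGO series
    z = [(x1[k] + x1[k - 1]) / 2.0 for k in range(1, m)]  # background values
    y = x0[1:]
    # 2x2 normal equations for y = u*z + b*z**n, with u = -a
    s_zz = sum(zi * zi for zi in z)
    s_zn = sum(zi * zi ** n for zi in z)
    s_nn = sum(zi ** n * zi ** n for zi in z)
    s_zy = sum(zi * yi for zi, yi in zip(z, y))
    s_ny = sum(zi ** n * yi for zi, yi in zip(z, y))
    det = s_zz * s_nn - s_zn * s_zn
    u = (s_zy * s_nn - s_zn * s_ny) / det
    b = (s_zz * s_ny - s_zn * s_zy) / det
    a = -u
    # time-response function of the whitening equation
    def x1_hat(k):
        core = (x0[0] ** (1 - n) - b / a) * exp(-a * (1 - n) * k) + b / a
        return core ** (1.0 / (1 - n))
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, m)]
    return a, b, fitted

# hypothetical smooth growth series; n = 0 reduces the NGBM to the
# classical GM(1, 1), shown here as the simplest sanity check
series = [2 * 1.1 ** k for k in range(8)]
a, b, fitted = ngbm_fit(series, n=0.0)
```

    The paper's contribution is tuning n itself for forecasting precision; in this sketch n is simply given, and points beyond the sample are forecast by evaluating x1_hat at larger k.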

  2. "Astronomica" in the Correspondence between Leonhard Euler and Daniel Bernoull (German Title: "Astronomica" im Briefwechsel zwischen Leonhard Euler und Daniel Bernoulli)

    NASA Astrophysics Data System (ADS)

    Verdun, Andreas

    2010-12-01

    The Euler Commission of the Swiss Academy of Sciences intends to complete the edition of Leonhard Euler's works in 2011, nearly one hundred years after the editorial work began. These works include, e.g., Volume 3 of Series quarta A, which will contain the correspondence between Leonhard Euler (1707-1783) and Daniel Bernoulli (1700-1783) and which is currently being edited by Dr. Emil A. Fellmann (Basel) and Prof. Dr. Gleb K. Mikhailov (Moscow). This correspondence comprises more than a hundred letters, principally from Daniel Bernoulli to Euler. Parts of it were published, without commentary, as early as 1843. It is astonishing that, apart from mathematics and physics (mainly mechanics and hydrodynamics), many of the topics addressed concern astronomy. The major part of the preserved correspondence between Euler and Daniel Bernoulli in which astronomical themes are discussed concerns celestial mechanics, the dominant discipline of theoretical astronomy of the eighteenth century. It was triggered and shaped mainly by the prize questions of the Paris Academy of Sciences. More than two thirds of the letters treat current problems and questions of the celestial mechanics of that time, focusing on lunar theory and the great inequality in the motions of Jupiter and Saturn as special applications of the three-body problem. In the remaining letters, problems of spherical astronomy are solved and attempts are made to explain certain phenomena in the field of "cosmic physics" concerning astronomical observations.

  3. Numerical simulations of katabatic jumps in Coats Land, Antarctica

    NASA Astrophysics Data System (ADS)

    Yu, Ye; Cai, Xiaoming; King, John C.; Renfrew, Ian A.

    A non-hydrostatic numerical model, the Regional Atmospheric Modeling System (RAMS), has been used to investigate the development of katabatic jumps in Coats Land, Antarctica. In the control run with a 5 m s-1 downslope-directed initial wind, a katabatic jump develops near the foot of the idealized slope. The jump is manifested as a rapid deceleration of the downslope flow and a change from supercritical to subcritical flow in a hydraulic sense, i.e., the Froude number (Fr) of the flow changes from Fr > 1 to Fr < 1. Results from sensitivity experiments show that an increase in the upstream flow rate strengthens the jump, while an increase in the downstream inversion-layer depth results in a retreat of the jump. Hydraulic theory and Bernoulli's theorem have been used to explain the surface pressure change across the jump. It is found that hydraulic theory always underestimates the surface pressure change, while Bernoulli's theorem provides a satisfactory estimation. An analysis of the momentum balance for the katabatic jump indicates that the important forces are those related to the pressure gradient, advection and, to a lesser extent, the turbulent momentum divergence. The development of katabatic jumps can be divided into two phases. In phase I, the pressure gradient force is nearly balanced by advection, while in phase II, the pressure gradient force is counterbalanced by turbulent momentum divergence. The upslope pressure gradient force associated with a pool of cold air over the ice shelf facilitates the formation of the katabatic jump.
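    The supercritical-to-subcritical transition at such a jump can be illustrated with single-layer hydraulic theory (a sketch with hypothetical katabatic-layer numbers, not the RAMS configuration): the layer Froude number Fr = U/√(g′h) and the classical conjugate-depth relation across the jump.

```python
from math import sqrt

def froude(u, g_reduced, h):
    """Froude number of a shallow dense layer: Fr = U / sqrt(g' h),
    with g' the reduced gravity of the katabatic layer."""
    return u / sqrt(g_reduced * h)

def conjugate_depth(h1, fr1):
    """Downstream depth across a hydraulic jump from the classical
    conjugate-depth relation h2/h1 = (sqrt(1 + 8*Fr1**2) - 1) / 2."""
    return h1 * (sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0) / 2.0

# hypothetical layer: 5 m/s flow, 50 m depth, reduced gravity 0.2 m/s^2
u1, h1, g_red = 5.0, 50.0, 0.2
fr1 = froude(u1, g_red, h1)          # supercritical upstream
h2 = conjugate_depth(h1, fr1)        # deeper subcritical layer
u2 = u1 * h1 / h2                    # mass continuity across the jump
fr2 = froude(u2, g_red, h2)          # subcritical downstream
```

    The flow decelerates and thickens across the jump, exactly the Fr > 1 to Fr < 1 transition the simulations diagnose; the surface pressure change then follows from the deepened cold layer.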

  4. Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes

    NASA Astrophysics Data System (ADS)

    Farzan Sabahi, Mohammad; Dehghanfard, Ali

    2014-12-01

    The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of information by unintended listeners. Low probability of detection and low probability of intercept are two important measures of system performance. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the data by spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. The use of a one-dimensional Bernoulli chaotic sequence as a spreading code has been proposed previously in the literature. The main feature of this sequence is its negative auto-correlation at lag 1, which, with proper design, increases the efficiency of communication systems based on these codes. Employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences as spreading codes is proposed. The performance of a multi-user synchronous and asynchronous DS-CDMA system is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate improved performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Like the one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. Besides, the proposed method makes it possible to construct complex sequences with lower average cross-correlation.
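    The spread/despread mechanics underlying any DS-CDMA code family can be sketched as follows (for simplicity the chips here are i.i.d. random ±1 values rather than the chaotic Bernoulli-map sequences the paper constructs; the user count, code length and seed are illustrative):

```python
import random

def make_code(length, rng):
    """Antipodal spreading code of i.i.d. +/-1 chips. (The paper derives
    its chips from two-dimensional Bernoulli chaotic maps; i.i.d. chips
    stand in here purely to show the spread/despread mechanics.)"""
    return [rng.choice((-1, 1)) for _ in range(length)]

def spread(bits, code):
    """Multiply each data bit by the whole spreading code."""
    return [b * c for b in bits for c in code]

def despread(signal, code, n_bits):
    """Correlate the received signal with a user's code, bit by bit."""
    L = len(code)
    decisions = []
    for k in range(n_bits):
        corr = sum(signal[k * L + i] * code[i] for i in range(L))
        decisions.append(1 if corr >= 0 else -1)
    return decisions

# two synchronous users with hypothetical codes of length 63
rng = random.Random(3)
code_a, code_b = make_code(63, rng), make_code(63, rng)
bits_a = [rng.choice((-1, 1)) for _ in range(20)]
bits_b = [rng.choice((-1, 1)) for _ in range(20)]
channel = [u + v for u, v in zip(spread(bits_a, code_a),
                                 spread(bits_b, code_b))]
recovered = despread(channel, code_a, 20)
```

    The correlator recovers user A's bits because the two codes are nearly orthogonal; lowering the average cross-correlation between codes, which is what the proposed chaotic construction targets, directly reduces this multi-user interference term.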

  5. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  7. RSAT: regulatory sequence analysis tools.

    PubMed

    Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques

    2008-07-01

    Regulatory Sequence Analysis Tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundred researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.
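    A Bernoulli background model, as mentioned above, treats every sequence position as an independent draw from fixed residue probabilities (a Markov background model would instead condition each draw on the preceding residues). A minimal sketch, with our own function name and defaults rather than RSAT's actual interface:

```python
import random

def random_sequence(length, probs=None, seed=0):
    """Generate a random DNA sequence under a Bernoulli background model:
    every position is an independent draw from the residue probabilities."""
    probs = probs or {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
    rng = random.Random(seed)
    residues = list(probs)
    weights = [probs[r] for r in residues]
    return "".join(rng.choices(residues, weights=weights, k=length))

print(random_sequence(30))
```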

  8. Divergence instability of pipes conveying fluid with uncertain flow velocity

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi; Mirdamadi, Hamid Reza; Goli, Sareh

    2018-02-01

    This article investigates, in the time domain, the probabilistic stability of pipes conveying fluid with stochastic flow velocity. The study focuses on the effects of randomness in the flow velocity on the stability of pipes conveying fluid, whereas most research efforts have only considered the influence of deterministic parameters on system stability. The Euler-Bernoulli beam and plug-flow theories are employed to model the pipe structure and the internal flow, respectively. In addition, the flow velocity is considered a stationary random process with Gaussian distribution. The stochastic averaging method and Routh's stability criterion are then used to investigate the stability conditions of the system. Consequently, the effects of boundary conditions, viscoelastic damping, mass ratio, and elastic foundation on the stability regions are discussed. Results show that the critical mean flow velocity decreases with increasing power spectral density (PSD) of the random velocity. Moreover, as the PSD increases from zero, the effects of boundary-condition type and of the presence of an elastic foundation are diminished, while the influences of viscoelastic damping and mass ratio increase. Finally, to make the study more applicable, regression analysis is used to develop design equations and facilitate further analyses for design purposes.

  9. Solution of the nonlinear mixed Volterra-Fredholm integral equations by hybrid of block-pulse functions and Bernoulli polynomials.

    PubMed

    Mashayekhi, S; Razzaghi, M; Tripak, O

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique.
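    The Bernoulli polynomials underlying the hybrid basis can be generated exactly from the Bernoulli numbers via B_n(x) = Σ_k C(n,k) B_k x^(n-k). A small exact-arithmetic sketch (helper names are ours; convention B_1 = -1/2):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Bernoulli numbers B_0..B_n via the classical recurrence
    sum_{k=0}^{m-1} C(m+1, k) B_k = -(m+1) B_m (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m)))
    return B

def bernoulli_poly(n, x):
    """Evaluate the Bernoulli polynomial B_n(x) = sum_k C(n,k) B_k x^(n-k)."""
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[k] * Fraction(x) ** (n - k) for k in range(n + 1))

print(bernoulli_poly(2, Fraction(1, 2)))  # B_2(x) = x^2 - x + 1/6, so this prints -1/12
```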

  10. Solution of the Nonlinear Mixed Volterra-Fredholm Integral Equations by Hybrid of Block-Pulse Functions and Bernoulli Polynomials

    PubMed Central

    Mashayekhi, S.; Razzaghi, M.; Tripak, O.

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638

  11. Regarding on the prototype solutions for the nonlinear fractional-order biological population model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan

    2016-06-08

    In this study, we present a newly extended method, the Improved Bernoulli sub-equation function method, based on the Bernoulli Sub-ODE method. The steps of the proposed analytical scheme are stated explicitly. Using this technique, we obtain some new analytical solutions to the nonlinear fractional-order biological population model. Two- and three-dimensional surfaces of the analytical solutions have been drawn with Wolfram Mathematica 9. Finally, a conclusion summarizing the main findings of this study is presented.

  12. Bernoulli-Langevin Wind Speed Model for Simulation of Storm Events

    NASA Astrophysics Data System (ADS)

    Fürstenau, Norbert; Mittendorf, Monika

    2016-12-01

    We present a simple nonlinear-dynamics Langevin model for predicting the instationary wind speed profile during storm events typically accompanying extreme low-pressure situations. It is based on a second-degree Bernoulli equation with δ-correlated Gaussian noise and may complement stationary stochastic wind models. Transitions between increasing and decreasing wind speed, and between the (quasi-)stationary normal wind and storm states, are induced by the sign change of the controlling time-dependent rate parameter k(t). This approach corresponds to the simplified nonlinear laser dynamics for the incoherent-to-coherent transition of light emission, which can be understood through a phase-transition analogy within equilibrium thermodynamics [H. Haken, Synergetics, 3rd ed., Springer, Berlin, Heidelberg, New York 1983/2004]. Evidence for the nonlinear-dynamics two-state approach is generated by fitting two historical wind speed profiles (low-pressure situations "Xaver" and "Christian", 2013), taken from Meteorological Terminal Air Report weather data, with a logistic approximation (i.e. constant rate coefficients k) to the solution of our dynamical model using a sum of sigmoid functions. The analytical solution of our dynamical two-state Bernoulli equation, as obtained with a sinusoidal rate ansatz k(t) of period T (= storm duration), exhibits reasonable agreement with the logistic fit to the empirical data. Noise parameter estimates of speed fluctuations are derived from empirical fit residuals and by means of a stationary solution of the corresponding Fokker-Planck equation. Numerical simulations with the Bernoulli-Langevin equation demonstrate the potential for stochastic wind speed profile modeling and predictive filtering under extreme storm events, suggested for applications in anticipative air traffic management.
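    A minimal Euler-Maruyama sketch of such a second-degree Bernoulli (logistic-type) Langevin equation with a sinusoidal rate k(t); all parameter values below are illustrative, not fitted to the "Xaver"/"Christian" data:

```python
import math
import random

def simulate_wind(v0=5.0, T=24.0, dt=0.01, k_max=0.5, g=0.02, sigma=0.5, seed=1):
    """Euler-Maruyama integration of a second-degree Bernoulli (logistic-type)
    Langevin equation dv = (k(t) v - g v^2) dt + sigma dW, with sinusoidal
    rate k(t) = k_max sin(2 pi t / T): the sign change of k(t) switches the
    dynamics between wind build-up and decay."""
    rng = random.Random(seed)
    n = round(T / dt)
    v, path = v0, [v0]
    for i in range(n):
        t = i * dt
        k = k_max * math.sin(2 * math.pi * t / T)
        v += (k * v - g * v * v) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        v = max(v, 0.0)  # wind speed cannot be negative
        path.append(v)
    return path

path = simulate_wind()
print(max(path))
```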

  13. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

    This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The proposed method is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high-dynamic-range, logarithmic filters. Our approach identifies and exploits the striking similarities between the NBCF and the coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits simulate cellular and molecular dynamics rapidly and with good accuracy. The application of the method is illustrated by synthesising, for the first time, microelectronic CytoMimetic topologies which successfully simulate: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square-millimetre while consuming between 1 and 12 microwatts of power. Simulation results for fabrication-related variability are also presented. PMID:23393550

  14. The stochastic model for ternary and quaternary alloys: Application of the Bernoulli relation to the phonon spectra of mixed crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchewka, M., E-mail: marmi@ur.edu.pl; Woźny, M.; Polit, J.

    2014-03-21

    To understand and interpret the experimental data on the phonon spectra of solid solutions, it is necessary to describe mathematically the non-regular distribution of atoms in their lattices. Such a description is possible in the case of a strongly stochastically homogeneous distribution, which requires a great number of atoms and very carefully mixed alloys. These conditions are generally fulfilled in high-quality homogeneous semiconductor solid solutions of the III–V and II–VI semiconductor compounds. In this case, we can use the Bernoulli relation describing the probability of the occurrence of one of n equivalent events, which can be applied to the probability of finding one of the n configurations in the solid-solution lattice. The results described in this paper for ternary HgCdTe and GaAsP as well as quaternary ZnCdHgTe provide an affirmative answer to the question of whether stochastic geometry, e.g., the Bernoulli relation, is enough to describe the observed phonon spectra.

  15. The stochastic model for ternary and quaternary alloys: Application of the Bernoulli relation to the phonon spectra of mixed crystals

    NASA Astrophysics Data System (ADS)

    Marchewka, M.; Woźny, M.; Polit, J.; Kisiel, A.; Robouch, B. V.; Marcelli, A.; Sheregii, E. M.

    2014-03-01

    To understand and interpret the experimental data on the phonon spectra of solid solutions, it is necessary to describe mathematically the non-regular distribution of atoms in their lattices. Such a description is possible in the case of a strongly stochastically homogeneous distribution, which requires a great number of atoms and very carefully mixed alloys. These conditions are generally fulfilled in high-quality homogeneous semiconductor solid solutions of the III-V and II-VI semiconductor compounds. In this case, we can use the Bernoulli relation describing the probability of the occurrence of one of n equivalent events, which can be applied to the probability of finding one of the n configurations in the solid-solution lattice. The results described in this paper for ternary HgCdTe and GaAsP as well as quaternary ZnCdHgTe provide an affirmative answer to the question of whether stochastic geometry, e.g., the Bernoulli relation, is enough to describe the observed phonon spectra.
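    The Bernoulli relation invoked above is the binomial probability of a given local configuration. For a zinc-blende ternary alloy with n = 4 equivalent nearest-neighbour sites, a short sketch (composition value and names are illustrative):

```python
from math import comb

def config_probability(n, k, x):
    """Bernoulli (binomial) probability that k of n equivalent
    nearest-neighbour sites are occupied by the substituting atom in an
    A(1-x)B(x)C alloy with composition x, assuming a strongly
    stochastically homogeneous distribution."""
    return comb(n, k) * x**k * (1 - x) ** (n - k)

# Hg(1-x)Cd(x)Te-like example: probabilities of the five possible cation
# configurations around an anion (n = 4) at x = 0.2; they sum to 1.
x = 0.2
probs = [config_probability(4, k, x) for k in range(5)]
print([round(p, 4) for p in probs])
```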

  16. A note on implementation of decaying product correlation structures for quasi-least squares.

    PubMed

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Sonic-boom minimization.

    NASA Technical Reports Server (NTRS)

    Seebass, R.; George, A. R.

    1972-01-01

    There have been many attempts to reduce or eliminate the sonic boom. Such attempts fall into two categories: (1) aerodynamic minimization and (2) exotic configurations. In the first category, changes in the entropy and the Bernoulli constant are neglected, and the equivalent body shapes required to minimize the overpressure, the shock pressure rise and the impulse are deduced. These results include the beneficial effects of atmospheric stratification. In the second category, the effective length of the aircraft is increased or its base area decreased by modifying the Bernoulli constant of a significant fraction of the flow past the aircraft. A figure of merit is introduced which makes it possible to judge the effectiveness of the latter schemes.

  18. Evaluation of aerodynamic characteristics of a coupled fluid-structure system using generalized Bernoulli’s principle: An application to vocal folds vibration

    PubMed Central

    Zhang, Lucy T.; Yang, Jubiao

    2017-01-01

    In this work we explore the aerodynamic flow characteristics of a coupled fluid-structure interaction system using a generalized Bernoulli equation derived directly from the Cauchy momentum equations. Unlike the conventional Bernoulli equation, where incompressible, inviscid, and steady flow conditions are assumed, this generalized Bernoulli equation includes the contributions from compressibility, viscosity, and unsteadiness, which can be essential in defining aerodynamic characteristics. The derived Bernoulli's principle is applied to a fully coupled fluid-structure interaction simulation of vocal fold vibration. The coupled system is simulated using the immersed finite element method, where the compressible Navier-Stokes equations describe the air and an elastic pliable structure describes the vocal fold. The vibration of the vocal fold opens and closes the glottal flow. The aerodynamic flow characteristics are evaluated using the derived Bernoulli's principle for a vibration cycle in a carefully partitioned control volume based on the moving structure. The results agree very well with experimental observations, which validates the strategy and its use in other types of flow characteristics involving coupled fluid-structure interactions. PMID:29527541

  19. Neuromorphic log-domain silicon synapse circuits obey Bernoulli dynamics: a unifying tutorial analysis

    PubMed Central

    Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579

  20. DichotomY IdentitY: Euler-Bernoulli Numbers, Sets-Multisets, FD-BE Quantum-Statistics, 1 /f0 - 1 /f1 Power-Spectra, Ellipse-Hyperbola Conic-Sections, Local-Global Extent: ``Category-Semantics''

    NASA Astrophysics Data System (ADS)

    Rota, G.-C.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Seminal Apostol[Math.Mag.81,3,178(08);Am.Math.Month.115,9,795(08)]-Rota[Intro.Prob.Thy.(95)-p.50-55] DichotomY equivalence-class: set-theory: sets V multisets; closed V open; to Abromowitz-Stegun[Hdbk.Math.Fns.(64)-ch.23,p.803!]: numbers/polynomials generating-functions: Euler V Bernoulli; to Siegel[Schrodinger Cent.Symp.(87); Symp.Fractals, MRS Fall Mtg.,(1989)-5-papers!] power-spectrum: 1/f^0-White V 1/f^1-Zipf/Pink (Archimedes) HYPERBOLICITY INEVITABILITY; to analytic-geometry Conic-Sections: Ellipse V (via Parabola) V Hyperbola; to Extent/Scale/Radius: Locality V Globality, Root-Causes/Ultimate-Origins: Dimensionality: odd-Z V (via fractal) V even-Z, to Symmetries/(Noether's-theorem connected)/Conservation-Laws Dichotomy: restored/conservation/convergence=0 V broken/non-conservation/divergence=/=0: with asymptotic-limit antipodes morphisms/crossovers: Eureka!!!; "FUZZYICS"="CATEGORYICS"!!! Connection to Kummer(1850) Bernoulli-numbers proof of FLT is via Siegel(CCNY;1964)<(1994)[AMS Joint Mtg.(2002)-Abs.973-60-124] short succinct physics proof: FLT = Least-Action Principle!!!

  1. Neuromorphic log-domain silicon synapse circuits obey Bernoulli dynamics: a unifying tutorial analysis.

    PubMed

    Papadimitriou, Konstantinos I; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology.

  2. Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.

    PubMed

    Yamazaki, Keisuke; Kaji, Daisuke

    2013-08-01

    Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method, so prediction, which requires construction of the posterior, is not tractable, though the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it has a tractable posterior based on the variational free energy function. The asymptotic behavior has been studied in many hierarchical models, and a phase transition is observed. The exact form of the asymptotic variational Bayes energy is derived in Bernoulli mixture models, and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not been clarified yet. The present paper precisely analyzes the Bayes free energy function of Bernoulli mixtures. Comparing the free energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results show that the Bayes free energy has the same learning types while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Management of colon stents based on Bernoulli's principle.

    PubMed

    Uno, Yoshiharu

    2017-03-01

    The colonic self-expanding metal stent (SEMS) has been widely used for "bridge to surgery" and palliative therapy. However, if the expansion of the SEMS is insufficient, not only is a decompression effect not obtained, but perforation and obstructive colitis can also occur. The mechanism of occurrence of obstructive colitis and perforation was investigated using flow dynamics. Bernoulli's principle was applied, assuming that the pressure difference between the proximal lumen and the stent is the cause of inflammation and perforation. The variables considered were proximal lumen diameter, stent lumen diameter, flow rate into the proximal lumen, and fluid density. To model the right colon, the proximal lumen diameter was set at 50 mm; to model the left-side colon, it was set at 30 mm. For both the right colon model and the left-side colon model, the difference in pressure between the proximal lumen and the stent was less than 20 mmHg when the diameter of the stent lumen was 14 mm or more. Both models reached 30 mmHg or more at 200 mL s⁻¹ when the stent lumen was 10 mm or less. Even with an inflow rate of 90-110 mL s⁻¹, the pressure was 140 mmHg when the stent lumen diameter was 5 mm. In theory, to maintain the effectiveness of the SEMS, it is necessary to keep the diameter of the stent lumen at 14 mm or more.
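    The underlying estimate can be reproduced from Bernoulli's principle with velocities obtained from the volumetric flow rate, v = Q/A. A sketch with an assumed water-like density (the paper's exact density value is not given here, so the numbers are indicative only):

```python
import math

RHO = 1000.0  # fluid density, kg/m^3 (water-like; an assumption)
PA_PER_MMHG = 133.322

def bernoulli_dp_mmhg(d_proximal_mm, d_stent_mm, q_ml_s):
    """Pressure difference between proximal lumen and stent lumen from
    Bernoulli's principle, dP = rho/2 * (v_stent^2 - v_prox^2), with
    velocities v = Q/A. Illustrative only."""
    q = q_ml_s * 1e-6  # convert mL/s to m^3/s

    def velocity(d_mm):
        area = math.pi * (d_mm * 1e-3 / 2) ** 2
        return q / area

    dp_pa = 0.5 * RHO * (velocity(d_stent_mm) ** 2 - velocity(d_proximal_mm) ** 2)
    return dp_pa / PA_PER_MMHG

# Left-side colon model (30 mm proximal lumen), 10 mm stent, 200 mL/s:
print(round(bernoulli_dp_mmhg(30, 10, 200), 1))
```

    Consistent with the abstract, a 14 mm stent lumen keeps the computed difference well below 20 mmHg in both colon models at this density.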

  4. A fully Sinc-Galerkin method for Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.; Lund, J.

    1990-01-01

    A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete system which corresponds to the time-dependent partial differential equations of interest are then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.

  5. Molecular analyses of the principal components of response strength.

    PubMed Central

    Killeen, Peter R; Hall, Scott S; Reilly, Mark P; Kettle, Lauren C

    2002-01-01

    Killeen and Hall (2001) showed that a common factor called strength underlies the key dependent variables of response probability, latency, and rate, and that overall response rate is a good predictor of strength. In a search for the mechanisms that underlie those correlations, this article shows that (a) the probability of responding on a trial is a two-state Markov process; (b) latency and rate of responding can be described in terms of the probability and period of stochastic machines called clocked Bernoulli modules, and (c) one such machine, the refractory Poisson process, provides a functional relation between the probability of observing a response during any epoch and the rate of responding. This relation is one of proportionality at low rates and curvilinearity at higher rates. PMID:12216975
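    The qualitative relation in (c), proportionality at low rates and curvilinearity at high rates, can be illustrated with a Poisson responder; the simple dead-time correction below is our stand-in for the paper's exact refractory Poisson derivation:

```python
import math

def response_probability(rate, epoch, refractory=0.0):
    """Probability of observing at least one response in an epoch for a
    Poisson responder. A refractory period caps the effective rate at
    rate / (1 + rate * refractory); this dead-time correction is an
    illustrative assumption, not the paper's exact model. The relation is
    ~proportional to rate when rates are low and flattens at high rates."""
    eff = rate / (1.0 + rate * refractory)
    return 1.0 - math.exp(-eff * epoch)

for r in (0.1, 1.0, 10.0, 100.0):
    print(r, round(response_probability(r, epoch=1.0, refractory=0.05), 4))
```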

  6. Sign reversals of the output autocorrelation function for the stochastic Bernoulli-Verhulst equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumi, N., E-mail: Neeme.Lumi@tlu.ee; Mankin, R., E-mail: Romi.Mankin@tlu.ee

    2015-10-28

    We consider a stochastic Bernoulli-Verhulst equation as a model for population growth processes. The effect of a fluctuating environment on the carrying capacity of a population is modeled as colored dichotomous noise. Relying on the composite master equation, an explicit expression for the stationary autocorrelation function (ACF) of population sizes is found. On the basis of this expression, a nonmonotonic decay of the ACF with increasing lag-time is shown. Moreover, in a certain regime of the noise parameters the ACF demonstrates anticorrelation as well as related sign reversals at some values of the lag-time. The conditions for the appearance of this highly unexpected effect are also discussed.

  7. Theory of the Maxwell pressure tensor and the tension in a water bridge.

    PubMed

    Widom, A; Swain, J; Silverberg, J; Sivasubramanian, S; Srivastava, Y N

    2009-07-01

    A water bridge refers to an experimental "flexible cable" made up of pure de-ionized water, which can hang across two supports maintained with a sufficiently large voltage difference. The resulting electric fields within the de-ionized water flexible cable maintain a tension that sustains the water against the downward force of gravity. A detailed calculation of the water bridge tension will be provided in terms of the Maxwell pressure tensor in a dielectric fluid medium. General properties of the dielectric liquid pressure tensor are discussed along with unusual features of dielectric fluid Bernoulli flows in an electric field. The "frictionless" Bernoulli flow is closely analogous to that of a superfluid.

  8. The influence of inertia on the efflux velocity: From Daniel Bernoulli to a contemporary theory

    NASA Astrophysics Data System (ADS)

    Malcherek, Andreas

    2015-11-01

    In 1644, Evangelista Torricelli claimed that the outflow velocity from a vessel is equal to the terminal speed of a body falling freely from the filling level h, i.e. v = √(2gh). Therefore the largest velocities are predicted when the filling level of the vessel is at its highest. As a consequence, the efflux would start with the highest velocity directly from the initiation of motion, which contradicts the inertia principle. In 1738, Daniel Bernoulli derived a much more sophisticated, instationary outflow theory based on the conservation of potential and kinetic energy. Torricelli's law is obtained as a special case when inertia is neglected and the cross-section of the opening is small compared with the vessel's cross-section. To the authors' knowledge, this theory was never applied or even mentioned in textbooks, although it is superior to the Torricelli theory in many respects. In this paper, Bernoulli's forgotten theory is presented. Deriving this theory using state-of-the-art hydrodynamics results in a new formula, v = √(gh). Although this formula contradicts Torricelli's principle, it is confirmed by all kinds of experiments, which find that a discharge coefficient of about β = 0.7 is needed in Torricelli's formula v = β√(2gh).
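    The two predictions differ only by a constant factor, which is exactly the discharge coefficient discussed above. A small numerical check:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def torricelli(h):
    """Torricelli's classical efflux velocity v = sqrt(2 g h)."""
    return math.sqrt(2 * G * h)

def inertia_aware(h):
    """Efflux velocity v = sqrt(g h) from the paper's inertia-aware
    re-derivation of Bernoulli's instationary outflow theory."""
    return math.sqrt(G * h)

h = 1.0
# The ratio of the two predictions is 1/sqrt(2) ~ 0.707, matching the
# empirical discharge coefficient beta ~ 0.7 quoted above.
print(torricelli(h), inertia_aware(h), inertia_aware(h) / torricelli(h))
```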

  9. Vehicle lateral motion regulation under unreliable communication links based on robust H∞ output-feedback control schema

    NASA Astrophysics Data System (ADS)

    Li, Cong; Jing, Hui; Wang, Rongrong; Chen, Nan

    2018-05-01

    This paper presents a robust control schema for vehicle lateral motion regulation under unreliable communication links via a controller area network (CAN). The communication links between the system plant and the controller are assumed to be imperfect, and therefore data packet dropouts occur frequently. The paper takes the form of parallel distributed compensation and treats the dropouts as random binary variables that follow a Bernoulli distribution. Both tire cornering stiffness uncertainty and external disturbances are considered to enhance the robustness of the controller. In addition, a robust H∞ static output-feedback control approach is proposed to realize lateral motion control with relatively low-cost sensors. The stochastic stability of the closed-loop system and preservation of the guaranteed H∞ performance are investigated. Simulation results based on the CarSim platform using a high-fidelity full-car model verify the effectiveness of the proposed control approach.
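    The Bernoulli dropout model can be sketched in a few lines; the hold-last-input policy and all names below are our own illustrative assumptions, not the paper's exact formulation:

```python
import random

def apply_dropouts(inputs, p_drop, seed=0, hold_last=True):
    """Model an unreliable link: each control packet is lost independently
    with probability p_drop (i.i.d. Bernoulli dropouts). When a packet is
    lost, the actuator either holds the last received value or applies
    zero (starting from 0.0 before anything has been received)."""
    rng = random.Random(seed)
    received, last = [], 0.0
    for u in inputs:
        if rng.random() < p_drop:
            received.append(last if hold_last else 0.0)
        else:
            last = u
            received.append(u)
    return received

u_seq = [1.0, 2.0, 3.0, 4.0, 5.0]
print(apply_dropouts(u_seq, p_drop=0.4))
```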

  10. Visual evoked potentials and selective attention to points in space

    NASA Technical Reports Server (NTRS)

    Van Voorhis, S.; Hillyard, S. A.

    1977-01-01

    Visual evoked potentials (VEPs) were recorded to sequences of flashes delivered to the right and left visual fields while subjects responded promptly to designated stimuli in one field at a time (focused attention), in both fields at once (divided attention), or to neither field (passive). Three stimulus schedules were used: the first was a replication of a previous study (Eason, Harter, and White, 1969) where left- and right-field flashes were delivered quasi-independently, while in the other two the flashes were delivered to the two fields in random order (Bernoulli sequence). VEPs to attended-field stimuli were enhanced at both occipital (O2) and central (Cz) recording sites under all stimulus sequences, but different components were affected at the two scalp sites. It was suggested that the VEP at O2 may reflect modality-specific processing events, while the response at Cz, like its auditory homologue, may index more general aspects of selective attention.

  11. Physical Watermarking for Securing Cyber-Physical Systems via Packet Drop Injections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozel, Omur; Weekrakkody, Sean; Sinopoli, Bruno

    Physical watermarking is a well-known solution for detecting integrity attacks on Cyber-Physical Systems (CPSs) such as the smart grid. Here, a random control input is injected into the system in order to authenticate physical dynamics and sensors which may have been corrupted by adversaries. Packet drops may naturally occur in a CPS due to network imperfections. To our knowledge, previous work has not considered the role of packet drops in detecting integrity attacks. In this paper, we investigate the merit of injecting Bernoulli packet drops into the control inputs sent to actuators as a new physical watermarking scheme. With the classical linear quadratic objective function and an independent and identically distributed packet drop injection sequence, we study the effect of packet drops on meeting security and control objectives. Our results indicate that the packet drops could act as a potential physical watermark for attack detection in CPSs.

  12. Analytical study of sandwich structures using Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Khawaja, H.

    2017-01-01

    This paper presents an analytical study of sandwich structures. In this study, the Euler-Bernoulli beam equation is solved analytically for a four-point bending problem. Appropriate initial and boundary conditions are specified to close the problem. In addition, the balance coefficient is calculated and the Rule of Mixtures is applied. The focus of this study is to determine the effective material properties and geometric features such as the moment of inertia of a sandwich beam. The effective parameters help in the development of a generic analytical correlation for complex sandwich structures from the perspective of four-point bending calculations. The main outcomes of these analytical calculations are the lateral displacements and longitudinal stresses for each particular material in the sandwich structure.
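
    The four-point bending calculation described above can be sketched as follows. The formulas are the standard Euler-Bernoulli midspan deflection for symmetric four-point loading and a parallel-axis estimate of the sandwich flexural rigidity; the dimensions and moduli are illustrative, not the paper's.

```python
def midspan_deflection(P, a, L, EI):
    """Euler-Bernoulli midspan deflection for symmetric four-point bending:
    two loads P applied at distance a from each support, span L."""
    return P * a * (3 * L**2 - 4 * a**2) / (24 * EI)

def sandwich_EI(E_face, E_core, b, t, c):
    """Effective flexural rigidity of a symmetric sandwich beam:
    two faces of thickness t, core of thickness c, width b
    (own inertia plus parallel-axis transport of the faces)."""
    d = (c + t) / 2                              # centroid-to-face-midplane distance
    I_faces = b * t**3 / 6 + 2 * b * t * d**2    # both faces
    I_core = b * c**3 / 12
    return E_face * I_faces + E_core * I_core

# illustrative aluminium-face / foam-core beam
EI = sandwich_EI(E_face=70e9, E_core=0.1e9, b=0.05, t=0.002, c=0.02)
delta = midspan_deflection(P=500.0, a=0.2, L=0.6, EI=EI)
```

    For third-point loading (a = L/3) the formula reduces to the familiar 23PL^3/(648EI).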

  13. Agradient velocity, vortical motion and gravity waves in a rotating shallow-water model

    NASA Astrophysics Data System (ADS)

    Sutyrin, Georgi G.

    2004-07-01

    A new approach to modelling slow vortical motion and fast inertia-gravity waves is suggested within the rotating shallow-water primitive equations with arbitrary topography. The velocity is exactly expressed as a sum of the gradient wind, described by the Bernoulli function, B, and the remaining agradient part, proportional to the velocity tendency. Then the equation for inverse potential vorticity, Q, as well as the momentum equations for the agradient velocity, include the same source of intrinsic flow evolution expressed as a single term J(B, Q), where J is the Jacobian operator (for any steady state J(B, Q) = 0). Two components of the agradient velocity are responsible for the fast inertia-gravity wave propagation, similar to the traditionally used divergence and ageostrophic vorticity. This approach allows for the construction of balance relations for vortical dynamics and potential vorticity inversion schemes even for moderate Rossby and Froude numbers, assuming the characteristic value of |J(B, Q)| to be small. The components of the agradient velocity are used as the fast variables slaved to potential vorticity, which allows for diagnostic estimates of the velocity tendency, direct potential vorticity inversion with second-order accuracy, and the corresponding potential vorticity-conserving agradient velocity balance model (AVBM). The ultimate limitations of constructing the balance are revealed in the form of an ellipticity condition for the balanced tendency of the Bernoulli function, which incorporates both known criteria of formal stability: the gradient wind, modified by the characteristic vortical Rossby wave phase speed, should be subcritical. The accuracy of the AVBM is illustrated by considering the linear normal modes and coastal Kelvin waves in the f-plane channel with topography.

  14. Saltwater Intrusion Through Submerged Caves due to the Venturi Effect

    NASA Astrophysics Data System (ADS)

    Khazmutdinova, K.; Nof, D.

    2016-12-01

    Saltwater intrusion into freshwater sources is a concern in coastal areas. In order to reduce the intrusion of seawater, the physical mechanisms that allow it to occur must be understood. This study presents an approach to quantify saltwater intrusion in karstic coastal aquifers due to the presence of submerged caves. Many water-filled caves have variable tunnel cross-sections and often have narrow connections between two otherwise large tunnels. Generally, the diameter of these restrictions is 1 - 2 m and the flow speed within them is approximately 1 - 5 m/s. Main cave tunnels can be 10 - 20 times larger than the restrictions, with flow speeds ranging anywhere between 0.5 cm/s and 20 cm/s. According to Bernoulli's theorem, in order to balance the high velocities within a restriction, the pressure has to drop as the water flow passes through the narrow tunnel. This is expected to influence the height to which a deeper saline aquifer can penetrate in conduits connecting the narrow restriction and the saltwater. For sufficiently small restrictions, saline water can invade the freshwater tunnel. The intrusion of saltwater from a deeper, saline aquifer into a fresh groundwater system due to the Venturi effect in submerged caves was computed, and analytical and qualitative models that capture saltwater intrusion into a fresh aquifer were developed. Using Bernoulli's theorem, we show that the depths from which saline water can be drawn into the freshwater tunnel reach up to 450 m, depending on the difference in density between fresh and saltwater. The velocity of the saline upward flow is estimated to be 1.4 m/s using the parameters for Wakulla Spring, a first order magnitude spring in Florida, with a saltwater interface 180 m below the spring cave system.
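
    The Bernoulli balance described above reduces to a one-line estimate: the pressure drop across the restriction sets the height of saline column it can lift. A minimal sketch, assuming the full dynamic-pressure deficit is available for lifting; speeds and densities are illustrative values in the ranges quoted above, not the paper's calibrated numbers.

```python
def intrusion_depth(v_restriction, v_tunnel, rho_fresh, rho_salt, g=9.81):
    """Bernoulli pressure drop across the restriction, converted into the
    depth from which saline water can be drawn: h = dp / ((rho_s - rho_f)*g)."""
    dp = 0.5 * rho_fresh * (v_restriction**2 - v_tunnel**2)
    return dp / ((rho_salt - rho_fresh) * g)

h_ocean = intrusion_depth(5.0, 0.2, 1000.0, 1025.0)     # full seawater contrast
h_brackish = intrusion_depth(5.0, 0.2, 1000.0, 1003.0)  # small density contrast
```

    A smaller density contrast yields a much deeper draw, consistent with the depth depending on the fresh/salt density difference.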

  15. A new equilibrium torus solution and GRMHD initial conditions

    NASA Astrophysics Data System (ADS)

    Penna, Robert F.; Kulkarni, Akshay; Narayan, Ramesh

    2013-11-01

    Context. General relativistic magnetohydrodynamic (GRMHD) simulations are providing influential models for black hole spin measurements, gamma ray bursts, and supermassive black hole feedback. Many of these simulations use the same initial condition: a rotating torus of fluid in hydrostatic equilibrium. A persistent concern is that simulation results sometimes depend on arbitrary features of the initial torus. For example, the Bernoulli parameter (which is related to outflows), appears to be controlled by the Bernoulli parameter of the initial torus. Aims: In this paper, we give a new equilibrium torus solution and describe two applications for the future. First, it can be used as a more physical initial condition for GRMHD simulations than earlier torus solutions. Second, it can be used in conjunction with earlier torus solutions to isolate the simulation results that depend on initial conditions. Methods: We assume axisymmetry, an ideal gas equation of state, constant entropy, and ignore self-gravity. We fix an angular momentum distribution and solve the relativistic Euler equations in the Kerr metric. Results: The Bernoulli parameter, rotation rate, and geometrical thickness of the torus can be adjusted independently. Our torus tends to be more bound and have a larger radial extent than earlier torus solutions. Conclusions: While this paper was in preparation, several GRMHD simulations appeared based on our equilibrium torus. We believe it will continue to provide a more realistic starting point for future simulations.

  16. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.

  17. On the Propagation of Nonlinear Acoustic Waves in Viscous and Thermoviscous Fluids

    DTIC Science & Technology

    2012-01-01

    continuity and momentum equations, which in 1D reduce to ϱ_t + uϱ_x + ϱu_x = 0, (6) ϱ(u_t + uu_x) = −℘_x + ((4/3)µ + µ_B)u_xx, (7) respectively, recalling that all… (Re_d)⁻¹F′ − [(1 − v^{2n})/v^{3−2n}]F = ϵβF² (n = 0, 1), (14) i.e., the associated ODEs of the former and latter are Bernoulli equations. Integrating these… (12), are of the Bernoulli type, namely, (Re_d)⁻¹F′ − [(1 − v^{2n})/v^n]F = ϵ[(1/2)n + (β − (1/2)n)v^{2n}]F², (20) which when integrated yield the
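
    A Bernoulli ODE of this kind, y' + p(x)y = q(x)y^m, is linearized by the substitution w = y^(1-m). The sketch below is not tied to the report's equations: it illustrates the idea on the logistic equation y' = y - y^2 (m = 2), checking the closed form obtained via w = 1/y against a Runge-Kutta integration.

```python
import math

# Bernoulli equation y' = y - y^2; substitution w = 1/y linearizes it:
# w' = 1 - w, so w(t) = 1 + (1/y0 - 1) * exp(-t)
def exact(t, y0):
    w = 1.0 + (1.0 / y0 - 1.0) * math.exp(-t)
    return 1.0 / w

def rk4(f, y0, t_end, n=1000):
    """Classical fourth-order Runge-Kutta for the autonomous ODE y' = f(y)."""
    h, y = t_end / n, y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

y_num = rk4(lambda y: y - y * y, y0=0.1, t_end=2.0)
y_ref = exact(2.0, 0.1)
```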

  18. Euler polynomials and identities for non-commutative operators

    NASA Astrophysics Data System (ADS)

    De Angelis, Valerio; Vignat, Christophe

    2015-12-01

    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, as given by Bender and Bettencourt [Phys. Rev. D 54(12), 7710-7723 (1996)], expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second one, by Pain [J. Phys. A: Math. Theor. 46, 035304 (2013)], links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Figueira de Morisson and Fring [J. Phys. A: Math. Gen. 39, 9269 (2006)] in the context of non-Hermitian Hamiltonian systems. In each case, we provide several proofs and extensions of these identities that highlight the role of Euler and Bernoulli polynomials.
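
    The Bernoulli and Euler polynomials entering these identities can be generated from their defining recurrences. A minimal sketch in exact rational arithmetic, checking the standard difference and mean identities; the function names are mine, and the Euler recurrence assumes the Appell expansion E_n(x+1) = sum_k C(n,k) E_k(x).

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bernoulli_poly(n, x):
    """B_n(x) via the recurrence sum_{k=0}^{n} C(n+1,k) B_k(x) = (n+1) x^n."""
    if n == 0:
        return Fraction(1)
    s = sum(comb(n + 1, k) * bernoulli_poly(k, x) for k in range(n))
    return x**n - s / (n + 1)

@lru_cache(maxsize=None)
def euler_poly(n, x):
    """E_n(x) via E_n(x+1) + E_n(x) = 2 x^n and the Appell property."""
    if n == 0:
        return Fraction(1)
    s = sum(comb(n, k) * euler_poly(k, x) for k in range(n))
    return x**n - s / 2

x = Fraction(3, 7)
for n in range(1, 8):
    # difference and mean identities hold exactly in rational arithmetic
    assert bernoulli_poly(n, x + 1) - bernoulli_poly(n, x) == n * x**(n - 1)
    assert euler_poly(n, x + 1) + euler_poly(n, x) == 2 * x**n
```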

  19. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.

  20. Inadequate child supervision: The role of alcohol outlet density, parent drinking behaviors, and social support

    PubMed Central

    Freisthler, Bridget; Johnson-Motoyama, Michelle; Kepple, Nancy J.

    2014-01-01

    Supervisory neglect, or the failure of a caregiver to appropriately supervise a child, is one of the predominant types of neglectful behaviors, with alcohol use being considered a key antecedent to inadequate supervision of children. The current study builds on previous work by examining the role of parental drinking and alcohol outlet densities while controlling for caregiver and child characteristics. Data were obtained from 3,023 participants via a telephone survey from 50 cities throughout California. The telephone survey included items on neglectful parenting practices, drinking behaviors, and socio-demographic characteristics. Densities of alcohol outlets were measured for each of the 202 zip codes in the study. Multilevel Bernoulli models were used to analyze the relationship between four supervisory neglect parenting practices and individual-level and zip code-level variables. In our study, heavy drinking was only significantly related to one of our four outcome variables (leaving a child where he or she may not be safe). The density of on-premise alcohol outlets was positively related to leaving a child home alone when an adult should be present. This study demonstrates that discrete relationships exist between alcohol-related variables, social support, and specific supervisory neglect subtypes at the ecological and individual levels. PMID:25061256

  1. Shapes of Bubbles and Drops in Motion.

    ERIC Educational Resources Information Center

    O'Connell, James

    2000-01-01

    Explains the shape distortions that take place in fluid packets (bubbles or drops) with steady flow motion by using the laws of Archimedes, Pascal, and Bernoulli rather than advanced vector calculus. (WRM)

  2. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
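
    A Python rendition of the same schedule-generating idea (the article's implementation is in Excel VBA; the function names and parameters here are mine): random-ratio values come from geometric draws, so reinforcement has a constant probability per response, and random-interval values come from exponential waits, so it has a constant probability per unit time.

```python
import random

random.seed(42)

def random_ratio(mean_ratio, n):
    """Random-ratio schedule values: reinforce each response with
    probability 1/mean_ratio, i.e. geometric response requirements."""
    p = 1.0 / mean_ratio
    values = []
    for _ in range(n):
        count = 1
        while random.random() >= p:   # keep responding until reinforced
            count += 1
        values.append(count)
    return values

def random_interval(mean_interval, n):
    """Random-interval schedule values: exponential waits with the given
    mean, giving a constant reinforcement probability per unit time."""
    return [random.expovariate(1.0 / mean_interval) for _ in range(n)]

rr = random_ratio(mean_ratio=10, n=1000)      # e.g. RR-10
ri = random_interval(mean_interval=30.0, n=1000)  # e.g. RI-30 s
```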

  3. Bernoulli's Challenge

    NASA Astrophysics Data System (ADS)

    Bouffard, Karen

    1999-01-01

    This month's Olympic activity was brought to the Eastern Massachusetts Physics Olympics group by Ron DeFronzo of Pawtucket, Rhode Island. Using a hair dryer, contestants must maneuver a Ping-Pong ball into a three-dimensional "bullseye" target.

  4. Singing Corrugated Pipes.

    ERIC Educational Resources Information Center

    Cadwell, Louis H.

    1994-01-01

    This article describes different techniques used to measure air flow velocity. The two methods used were Crawford's Wastebasket and a video camera. The results were analyzed and compared to the air flow velocity predicted by Bernoulli's principle. (ZWH)

  5. Free vibration analysis of microtubules based on the molecular mechanics and continuum beam theory.

    PubMed

    Zhang, Jin; Wang, Chengyuan

    2016-10-01

    A molecular structural mechanics (MSM) method has been implemented to investigate the free vibration of microtubules (MTs). The emphasis is placed on the effects of the configuration and the imperfect boundaries of MTs. It is shown that the influence of protofilament number on the fundamental frequency is strong, while the effect of helix-start number is almost negligible. The fundamental frequency is also found to decrease as the number of the blocked filaments at boundaries decreases. Subsequently, the Euler-Bernoulli beam theory is employed to reveal the physics behind the simulation results. Fitting the Euler-Bernoulli beam into the MSM data leads to an explicit formula for the fundamental frequency of MTs with various configurations and identifies a possible correlation between the imperfect boundary conditions and the length-dependent bending stiffness of MTs reported in experiments.
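
    The Euler-Bernoulli fit mentioned above rests on the standard beam frequency formula. A minimal sketch for a cantilever, with assumed, illustrative microtubule-scale values rather than the paper's fitted parameters:

```python
import math

def cantilever_f1(EI, rho_A, L):
    """Fundamental frequency of an Euler-Bernoulli cantilever:
    f1 = (beta1*L)^2 / (2*pi*L^2) * sqrt(EI / rho_A), with beta1*L = 1.8751."""
    return (1.8751**2 / (2 * math.pi * L**2)) * math.sqrt(EI / rho_A)

# illustrative microtubule-scale numbers (assumed, not from the paper)
EI = 2.0e-24      # flexural rigidity, N m^2
rho_A = 2.7e-13   # mass per unit length, kg/m
f_short = cantilever_f1(EI, rho_A, 1e-6)   # 1 micron
f_long = cantilever_f1(EI, rho_A, 2e-6)    # 2 microns
# Euler-Bernoulli scaling: doubling the length quarters the frequency
```

    The length-dependent bending stiffness reported in experiments would enter by making EI itself a function of L.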

  6. Nonlinear vocal fold dynamics resulting from asymmetric fluid loading on a two-mass model of speech

    NASA Astrophysics Data System (ADS)

    Erath, Byron D.; Zañartu, Matías; Peterson, Sean D.; Plesniak, Michael W.

    2011-09-01

    Nonlinear vocal fold dynamics arising from asymmetric flow formations within the glottis are investigated using a two-mass model of speech with asymmetric vocal fold tensioning, representative of unilateral vocal fold paralysis. A refined theoretical boundary-layer flow solver is implemented to compute the intraglottal pressures, providing a more realistic description of the flow than the standard one-dimensional, inviscid Bernoulli flow solution. Vocal fold dynamics are investigated for subglottal pressures of 0.6 < ps < 1.5 kPa and tension asymmetries of 0.5 < Q < 0.8. As tension asymmetries become pronounced the asymmetric flow incites nonlinear behavior in the vocal fold dynamics at subglottal pressures that are associated with normal speech, behavior that is not captured with standard Bernoulli flow solvers. Regions of bifurcation, coexistence of solutions, and chaos are identified.

  7. Numerical solutions of incompressible Navier-Stokes equations using modified Bernoulli's law

    NASA Astrophysics Data System (ADS)

    Shatalov, A.; Hafez, M.

    2003-11-01

    Simulations of incompressible flows are important for many practical applications in aeronautics and beyond, particularly in the high Reynolds number regime. The present formulation is based on Helmholtz velocity decomposition where the velocity is presented as the gradient of a potential plus a rotational component. Substituting in the continuity equation yields a Poisson equation for the potential which is solved with a zero normal derivative at solid surfaces. The momentum equation is used to update the rotational component with no slip/no penetration surface boundary conditions. The pressure is related to the potential function through a special relation which is a generalization of Bernoulli's law, with a viscous term included. Results of calculations for two- and three-dimensional problems prove that the present formulation is a valid approach, with some possible benefits compared to existing methods.

  8. Chaotic dynamics of flexible Euler-Bernoulli beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awrejcewicz, J., E-mail: awrejcew@p.lodz.pl; Krysko, A. V., E-mail: anton.krysko@gmail.com; Kutepov, I. E., E-mail: iekutepov@gmail.com

    2013-12-15

    Mathematical modeling and analysis of spatio-temporal chaotic dynamics of flexible simple and curved Euler-Bernoulli beams are carried out. The Kármán-type geometric non-linearity is considered. Algorithms reducing partial differential equations which govern the dynamics of studied objects and associated boundary value problems are reduced to the Cauchy problem through both Finite Difference Method with the approximation of O(c{sup 2}) and Finite Element Method. The obtained Cauchy problem is solved via the fourth and sixth-order Runge-Kutta methods. Validity and reliability of the results are rigorously discussed. Analysis of the chaotic dynamics of flexible Euler-Bernoulli beams for a series of boundary conditions is carried out with the help of the qualitative theory of differential equations. We analyze time histories, phase and modal portraits, autocorrelation functions, the Poincaré and pseudo-Poincaré maps, signs of the first four Lyapunov exponents, as well as the compression factor of the phase volume of an attractor. A novel scenario of transition from periodicity to chaos is obtained, and a transition from chaos to hyper-chaos is illustrated. In particular, we study and explain the phenomenon of transition from symmetric to asymmetric vibrations. Vibration-type charts are given regarding two control parameters: amplitude q{sub 0} and frequency ω{sub p} of the uniformly distributed periodic excitation. Furthermore, we detected and illustrated how the so called temporal-space chaos is developed following the transition from regular to chaotic system dynamics.

  9. Caught in the Draft

    NASA Astrophysics Data System (ADS)

    Edge, Ron

    2007-09-01

    We've all seen (in movies, newscasts, or perhaps in person) the violent effect of the downwash that occurs when a helicopter hovers over the ground. Leaves, grass, and debris are dramatically blown about. We've also sat in front of circulating room fans and felt a large draft, whereas there seems to be very little air movement behind the fan. The cause of this is a delightful manifestation of Bernoulli's principle. The fan blades, or helicopter rotor blades, produce a pressure differential as air passes through them—let us say p1 before and p2 after, as shown in Fig. 1, with p2 greater than p1. If p0 is the ambient pressure, Bernoulli's equation gives p0 = p1 + (1/2)ρv1², where v1 is the velocity of the air entering the fan. Continuity requires that v2 leaving the fan must equal v1 entering the fan for an incompressible fluid, approximately true here (Av1 = Av2, where A is the area swept out by the blades, the "rotor disk area"). However, some distance below the rotor (or in front of the fan) the velocity is vd (v_downdraft in the figure) and the pressure again p0, so Bernoulli gives us p2 + (1/2)ρv2² = (p1 + Δp) + (1/2)ρv1² = [p1 + (p2 − p1)] + (1/2)ρv1² = p2 + (1/2)ρv1² = p0 + (1/2)ρvd².
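
    The chain of relations above boils down to (1/2)ρvd² = Δp across the disk. A minimal numeric sketch (the 100 Pa disk loading is an illustrative value, and the factor-of-two inflow relation is the standard actuator-disk momentum-theory result, not stated in this entry):

```python
import math

rho = 1.225   # air density at sea level, kg/m^3

def downdraft_speed(delta_p):
    """From the pressure jump across the rotor/fan disk:
    (1/2)*rho*vd^2 = delta_p, so vd = sqrt(2*delta_p/rho)."""
    return math.sqrt(2 * delta_p / rho)

def inflow_speed(delta_p):
    """Momentum theory for an ideal actuator disk: v1 = vd / 2."""
    return downdraft_speed(delta_p) / 2

vd = downdraft_speed(100.0)   # illustrative 100 Pa disk loading
```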

  10. A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits

    PubMed Central

    Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.

    2012-01-01

    Summary We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
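
    The two-component structure can be sketched by simulation. This is a toy sampler, not the paper's Bayesian CAR model: a Bernoulli hurdle decides whether there is any ED use, and users' counts come from a zero-truncated Poisson; the parameter values are illustrative.

```python
import math
import random

random.seed(7)

def sample_poisson(lam):
    """Knuth's multiplicative method for a Poisson draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def hurdle_sample(p_use, lam):
    """Bernoulli hurdle, then a zero-truncated Poisson count for users."""
    if random.random() >= p_use:
        return 0                      # structural zero: no ED use
    while True:
        y = sample_poisson(lam)
        if y >= 1:
            return y                  # rejection sampling of the truncated part

data = [hurdle_sample(p_use=0.4, lam=2.5) for _ in range(10000)]
zeros = sum(1 for y in data if y == 0) / len(data)
# every zero is structural: nonzero counts are >= 1 by construction
```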

  11. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
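
    The test itself is a binomial tail computation. A minimal sketch (the function name is mine) for the one-sided significance of the partition of N = A + B events, with partition fraction f set by the exposure ratio:

```python
from math import comb

def binomial_tail(A, B, f):
    """P(X >= A) for X ~ Binomial(A + B, f): the chance of a split at
    least as extreme as the observed one under the shared-rate null."""
    N = A + B
    return sum(comb(N, k) * f**k * (1 - f)**(N - k) for k in range(A, N + 1))

# equal exposures (f = 0.5): did source A really emit more than source B?
p_value = binomial_tail(A=8, B=2, f=0.5)   # 56/1024, just above 0.05
```

    Because the binomial handles small counts exactly, the test remains valid when the signal, the background, or both contain only a handful of events.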

  12. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  13. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286

  14. One-Time Pad as a nonlinear dynamical system

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin

    2012-11-01

    The One-Time Pad (OTP) is the only known unbreakable cipher, proved mathematically by Shannon in 1949. In spite of several practical drawbacks of using the OTP, it continues to be used in quantum cryptography, DNA cryptography and even in classical cryptography when the highest form of security is desired (other popular algorithms like RSA, ECC, AES are not even proven to be computationally secure). In this work, we prove that the OTP encryption and decryption is equivalent to finding the initial condition on a pair of binary maps (Bernoulli shift). The binary map belongs to a family of 1D nonlinear chaotic and ergodic dynamical systems known as Generalized Luröth Series (GLS). Having established these interesting connections, we construct other perfect secrecy systems on the GLS that are equivalent to the One-Time Pad, generalizing for larger alphabets. We further show that OTP encryption is related to Randomized Arithmetic Coding - a scheme for joint compression and encryption.
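
    The shift-map construction is the paper's contribution; the sketch below shows only the underlying XOR pad and its involution property (encryption and decryption are the same map), as a point of reference for the equivalence claimed above.

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """One-Time Pad: XOR with a uniformly random, never-reused pad of
    equal length. Applying the map twice recovers the input."""
    assert len(pad) == len(data)
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"BERNOULLI SHIFT"
pad = secrets.token_bytes(len(message))   # fresh random pad
cipher = otp(message, pad)
recovered = otp(cipher, pad)              # decryption is the same XOR map
```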

  15. Go Fly a Tetrahedron!

    ERIC Educational Resources Information Center

    Cowens, John

    1995-01-01

    Describes a science unit used in a fourth-grade class to teach students about Bernoulli's law of flight, the similarity of tetrahedrons to birds, and the construction of tetrahedron kites. Also includes thought-provoking math questions for students. (MDM)

  16. Science Notes.

    ERIC Educational Resources Information Center

    Shaw, G. W.; And Others

    1989-01-01

    Provides a reading list for A- and S-level biology. Contains several experiments and demonstrations with topics on: the intestine, bullock corneal cells, valences, the science of tea, automated hydrolysis, electronics characteristics, bromine diffusion, enthalpy of vaporization determination, thermometers, pendulums, hovercraft, Bernoulli fluid…

  17. Alternate solution to generalized Bernoulli equations via an integrating factor: an exact differential equation approach

    NASA Astrophysics Data System (ADS)

    Tisdell, C. C.

    2017-08-01

    Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740), and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem is unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to that of Azevedo and Valentino, with the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
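
    The Gompertz analysis can be checked numerically. A minimal sketch (parameter values mine) comparing the closed-form solution, which follows from the substitution u = ln(y/K) turning y' = r y ln(K/y) into the linear equation u' = -r u, against direct integration:

```python
import math

r, K = 0.5, 100.0   # illustrative growth rate and carrying capacity

def gompertz_exact(t, y0):
    """Closed form of y' = r*y*ln(K/y): u = ln(y/K) gives u' = -r*u,
    hence y(t) = K * (y0/K)**exp(-r*t)."""
    return K * (y0 / K) ** math.exp(-r * t)

def gompertz_rk4(t_end, y0, n=4000):
    """Fourth-order Runge-Kutta on the original nonlinear equation."""
    f = lambda y: r * y * math.log(K / y)
    h, y = t_end / n, y0
    for _ in range(n):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return y
```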

  18. Dynamics of 3D Timoshenko gyroelastic beams with large attitude changes for the gyros

    NASA Astrophysics Data System (ADS)

    Hassanpour, Soroosh; Heppler, G. R.

    2016-01-01

    This work is concerned with the theoretical development of dynamic equations for undamped gyroelastic beams, which are dynamic systems with continuous inertia, elasticity, and gyricity. Assuming unrestricted or large attitude changes for the axes of the gyros and utilizing generalized Hooke's law, Duleau torsion theory, and Timoshenko bending theory, the energy expressions and equations of motion for gyroelastic beams in three-dimensional space are derived. The resulting comprehensive gyroelastic beam model is compared against earlier formulations based on Euler-Bernoulli beam theory and is used to study the dynamics of gyroelastic beams through numerical examples. It is shown that there are significant differences between the developed unrestricted Timoshenko gyroelastic beam model and the previously derived zero-order restricted Euler-Bernoulli gyroelastic beam models. These differences are more pronounced in the short beam and transverse gyricity cases.

  19. Bernoulli substitution in the Ramsey model: Optimal trajectories under control constraints

    NASA Astrophysics Data System (ADS)

    Krasovskii, A. A.; Lebedev, P. D.; Tarasyev, A. M.

    2017-05-01

    We consider a neoclassical (economic) growth model. A nonlinear Ramsey equation, modeling capital dynamics, in the case of Cobb-Douglas production function is reduced to the linear differential equation via a Bernoulli substitution. This considerably facilitates the search for a solution to the optimal growth problem with logarithmic preferences. The study deals with solving the corresponding infinite horizon optimal control problem. We consider a vector field of the Hamiltonian system in the Pontryagin maximum principle, taking into account control constraints. We prove the existence of two alternative steady states, depending on the constraints. A proposed algorithm for constructing growth trajectories combines methods of open-loop control and closed-loop regulatory control. For some levels of constraints and initial conditions, a closed-form solution is obtained. We also demonstrate the impact of technological change on the economic equilibrium dynamics. Results are supported by computer calculations.
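
    The reduction described above can be illustrated on a Solow-type capital accumulation equation with Cobb-Douglas output, k' = s*k^alpha - delta*k (a sketch with assumed parameter values, not the paper's Ramsey problem): the Bernoulli substitution z = k^(1-alpha) yields the linear equation z' = (1-alpha)*(s - delta*z) with a closed-form solution.

```python
import math

alpha, s, delta = 0.33, 0.2, 0.05   # illustrative output elasticity, saving, depreciation

def k_exact(t, k0):
    """Closed form via the Bernoulli substitution z = k**(1-alpha):
    z' = (1-alpha)*(s - delta*z), a linear ODE with steady state s/delta."""
    z0 = k0 ** (1 - alpha)
    z_star = s / delta
    z = z_star + (z0 - z_star) * math.exp(-(1 - alpha) * delta * t)
    return z ** (1 / (1 - alpha))

def k_numeric(t_end, k0, n=20000):
    """RK4 on the original nonlinear capital dynamics, for comparison."""
    f = lambda k: s * k**alpha - delta * k
    h, k = t_end / n, k0
    for _ in range(n):
        k1 = f(k); k2 = f(k + 0.5*h*k1); k3 = f(k + 0.5*h*k2); k4 = f(k + h*k3)
        k += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return k

assert abs(k_exact(30.0, 1.0) - k_numeric(30.0, 1.0)) < 1e-6
```

    The linearization is what makes the closed-form optimal trajectories of the paper tractable; control constraints then select between the alternative steady states.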

  20. Hydrodynamic pumping of a quantum Fermi liquid in a semiconductor heterostructure

    NASA Astrophysics Data System (ADS)

    Heremans, J. J.; Kantha, D.; Chen, H.; Govorov, A. O.

    2003-03-01

    We present experimental results for a pumping mechanism observed in mesoscopic structures patterned on two-dimensional electron systems in GaAs/AlGaAs heterostructures. The experiments are performed at low temperatures, in the ballistic regime. The effect is observed as a voltage or current signal corresponding to carrier extraction from sub-micron sized apertures, when these apertures are swept by a beam of ballistic electrons. The carrier extraction, phenomenologically reminiscent of the Bernoulli pumping effect in classical fluids, has been observed in various geometries. We ascertained linearity between measured voltage and injected current in all experiments, thereby excluding rectification effects. The linear response, however, points to a fundamental difference from the Bernoulli effect in classical liquids, where the response is nonlinear and quadratic in terms of the velocity. The temperature dependence of the effect will also be presented. We thank M. Shayegan (Princeton University) for the heterostructure growth, and acknowledge support from NSF DMR-0094055.

  1. Nonlinear earthquake analysis of reinforced concrete frames with fiber and Bernoulli-Euler beam-column element.

    PubMed

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is developed for the nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained by the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for the dynamic analysis of RC frames. A predictor-corrector form of the Bossak-α method is applied as the dynamic integration scheme. Experimental data for an RC column element are compared with numerical results obtained from the proposed solution technique to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a reinforced concrete portal frame are obtained to compare the proposed solution technique with a fiber element based on the flexibility method. Finally, seismic damage analyses of an 8-story RC frame structure with a soft story are carried out for cases of lumped and distributed mass and load, and the damage regions, propagation, and intensities according to both approaches are compared.

  2. Dynamic modelling and control of a rotating Euler-Bernoulli beam

    NASA Astrophysics Data System (ADS)

    Yang, J. B.; Jiang, L. J.; Chen, D. CH.

    2004-07-01

    Flexible motion of a uniform Euler-Bernoulli beam attached to a rotating rigid hub is investigated. Fully coupled non-linear integro-differential equations, describing axial, transverse and rotational motions of the beam, are derived by using the extended Hamilton's principle. The centrifugal stiffening effect is included in the derivation. A finite-dimensional model, including couplings of axial and transverse vibrations, and of elastic deformations and rigid motions, is obtained by the finite element method. By neglecting the axial motion, a simplified model, suitable for studying the transverse vibration and control of a beam with large-angle and high-speed rotation, is presented. Suppression of transverse vibrations of a rotating beam is then simulated with the model by combining positive position feedback and momentum exchange feedback control laws. The results indicate that improved vibration control performance can be achieved with this method.

  3. Stationary spiral flow in polytropic stellar models

    PubMed Central

    Pekeris, C. L.

    1980-01-01

    It is shown that, in addition to the static Emden solution, a self-gravitating polytropic gas has a dynamic option in which there is stationary flow along spiral trajectories wound around the surfaces of concentric tori. The motion is obtained as a solution of a partial differential equation which is satisfied by the meridional stream function, coupled with Poisson's equation and a Bernoulli-type equation for the pressure (density). The pressure is affected by the whole of the Bernoulli term rather than by the centrifugal part only, which acts for a rotating model, and it may be reduced down to zero at the center. The spiral type of flow is illustrated for an incompressible fluid (n = 0), for which an exact solution is obtained. The features of the dynamic constant-density model are discussed as a basis for future comparison with the solution for compressible models. PMID:16592825

  4. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
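
    The Bernoulli-Gaussian prior at the core of such hierarchical models can be sketched with a minimal generator (the dictionary H, the Bernoulli weight lam, and the variances below are illustrative placeholders, not the paper's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bernoulli_gaussian(n, lam, sigma_x):
    """Draw a sparse vector x = q * r with q ~ Bernoulli(lam), r ~ N(0, sigma_x^2)."""
    q = rng.random(n) < lam          # Bernoulli support indicators
    r = rng.normal(0.0, sigma_x, n)  # Gaussian amplitudes
    return q * r

# Sparse signal observed through a random dictionary with additive noise.
n, m = 100, 40
x = sample_bernoulli_gaussian(n, lam=0.05, sigma_x=1.0)
H = rng.normal(size=(m, n)) / np.sqrt(m)
y = H @ x + rng.normal(0.0, 0.01, m)

print(np.count_nonzero(x), y.shape)
```

    In the full method, inference then alternates Gibbs updates of the support indicators and amplitudes with Hastings moves on the nonlinear parameters; the sketch above only shows the generative side of the model.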

  5. A novel experimental setup to study the Hagen-Poiseuille and Bernoulli equations for a gas and determination of the viscosity of air

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Surajit

    2015-11-01

    We have performed an experiment in which we have determined the viscosity of air using the Hagen-Poiseuille equation in the proper range of the Reynolds number (Re). The experiment is novel and simple, and students even at the high school level can perform it with minimal equipment. The experiment brings out the fact that determination of the viscosity of a fluid is possible only when its Reynolds number is sufficiently small. At very large Reynolds number, the gas behaves more like an inviscid fluid and its flow rate satisfies Bernoulli's equation. In the intermediate range of the Reynolds number, the flow rate satisfies neither the Hagen-Poiseuille equation nor the Bernoulli equation. A wide range of Reynolds numbers from 40 to about 5000 has been studied. In the case of air, this large range has not shown any sign of turbulence.
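
    The regime classification described above can be illustrated with the standard pipe-flow Reynolds number Re = ρvD/μ (the cut-offs below simply echo the abstract's qualitative range of 40 to 5000 and are illustrative, not the paper's fitted boundaries):

```python
def reynolds_number(rho, v, d, mu):
    """Pipe-flow Reynolds number Re = rho * v * d / mu."""
    return rho * v * d / mu

def flow_regime(re, low=40.0, high=5000.0):
    """Illustrative classification following the abstract's range of Re."""
    if re < low:
        return "Hagen-Poiseuille (viscous) regime"
    if re > high:
        return "Bernoulli (inviscid-like) regime"
    return "intermediate regime"

# Air at room temperature flowing through a 1 mm tube at 0.5 m/s.
re = reynolds_number(rho=1.2, v=0.5, d=1e-3, mu=1.8e-5)
print(re, flow_regime(re))  # Re ≈ 33.3, viscous regime
```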

  6. Stability analysis of internally damped rotating composite shafts using a finite element formulation

    NASA Astrophysics Data System (ADS)

    Ben Arab, Safa; Rodrigues, José Dias; Bouaziz, Slim; Haddar, Mohamed

    2018-04-01

    This paper deals with the stability analysis of internally damped rotating composite shafts. An Euler-Bernoulli shaft finite element formulation based on Equivalent Single Layer Theory (ESLT), including the hysteretic internal damping of the composite material and transverse shear effects, is introduced and then used to evaluate the influence of various parameters: stacking sequences, fiber orientations and bearing properties on natural frequencies, critical speeds, and instability thresholds. The obtained results are compared with those available in the literature using different theories. The agreement of the obtained results shows that the developed Euler-Bernoulli finite element based on ESLT, including hysteretic internal damping and transverse shear effects, can be effectively used for the stability analysis of internally damped rotating composite shafts. Furthermore, the results reveal that rotor stability is sensitive to the laminate parameters and to the properties of the bearings.

  7. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full width at half maxima that are off by an average of 27% and matched full width at half maxima produce depths that are off by an average of 109%.

  8. Echocardiographic estimation of systemic systolic blood pressure in dogs with mild mitral regurgitation.

    PubMed

    Tou, Sandra P; Adin, Darcy B; Estrada, Amara H

    2006-01-01

    Systemic hypertension is likely underdiagnosed in veterinary medicine because systemic blood pressure is rarely measured. Systemic blood pressure can theoretically be estimated by echocardiography. According to the modified Bernoulli equation (PG = 4v²), mitral regurgitation (MR) velocity should approximate systolic left ventricular pressure (sLVP), and therefore systolic systemic blood pressure (sSBP), in the presence of a normal left atrial pressure (LAP) and the absence of aortic stenosis. The aim of this study was to evaluate the use of echocardiography to estimate sSBP by means of the Bernoulli equation. Seventeen dogs with mild MR were studied. No dogs had aortic or subaortic stenosis, and all had MR with a clear continuous-wave Doppler signal and a left atrial to aorta ratio of ≤1.6. Five simultaneous, blinded continuous-wave measurements of maximum MR velocity (Vmax) and indirect sSBP measurements (by Park's Doppler) were obtained for each dog. The pressure gradient was calculated from Vmax by means of the Bernoulli equation, averaged, and added to an assumed LAP of 8 mm Hg to calculate sLVP. Calculated sLVP was significantly correlated with indirectly measured sSBP within a range of 121 to 218 mm Hg (P = .0002, r = .78). Mean ± SD bias was 0.1 ± 15.3 mm Hg with limits of agreement of -29.9 to 30.1 mm Hg. Despite the significant correlation, the wide limits of agreement between the methods hinder the clinical utility of echocardiographic estimation of blood pressure.
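
    The estimation procedure in this study reduces to simple arithmetic; a sketch (the assumed LAP of 8 mm Hg is taken from the abstract, while the example jet velocity is illustrative):

```python
def modified_bernoulli(v):
    """Pressure gradient in mmHg from jet velocity v in m/s: PG = 4 * v**2."""
    return 4.0 * v ** 2

def estimate_slvp(mr_vmax, assumed_lap=8.0):
    """Systolic LV pressure estimate: MR jet gradient plus assumed left atrial pressure."""
    return modified_bernoulli(mr_vmax) + assumed_lap

# A 5.5 m/s MR jet implies a 121 mmHg gradient, hence an estimated sLVP of 129 mmHg.
print(estimate_slvp(5.5))
```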

  9. Apparatus for Teaching Physics.

    ERIC Educational Resources Information Center

    Minnix, Richard B.; Carpenter, D. Rae, Jr., Eds.

    1982-01-01

    Thirteen demonstrations using a capacitor-start induction motor fitted with an aluminum disk are described. Demonstrations illustrate principles from mechanics, fluids (Bernoulli's principle), waves (chladni patterns and doppler effect), magnetism, electricity, and light (mechanical color mixing). In addition, the instrument can measure friction…

  10. B(H) has a pure state that is not multiplicative on any masa.

    PubMed

    Akemann, Charles; Weaver, Nik

    2008-04-08

    Assuming the continuum hypothesis, we prove that B(H) has a pure state whose restriction to any masa is not pure. This resolves negatively old conjectures of Kadison and Singer and of Anderson.

  11. Spatiotemporal clusters of malaria cases at village level, northwest Ethiopia.

    PubMed

    Alemu, Kassahun; Worku, Alemayehu; Berhane, Yemane; Kumie, Abera

    2014-06-06

    Malaria attacks are not evenly distributed in space and time. In highland areas with low endemicity, malaria transmission is highly variable and malaria acquisition risk for individuals is unevenly distributed even within a neighbourhood. Characterizing the spatiotemporal distribution of malaria cases in high-altitude villages is necessary to prioritize the risk areas and facilitate interventions. Spatial scan statistics using the Bernoulli method were employed to identify spatial and temporal clusters of malaria in high-altitude villages. Daily malaria data were collected, using a passive surveillance system, from patients visiting local health facilities. Georeference data were collected at villages using hand-held global positioning system devices and linked to patient data. A Bernoulli model using Bayesian approaches and Markov Chain Monte Carlo (MCMC) methods was used to identify the effects of factors on spatial clusters of malaria cases. The deviance information criterion (DIC) was used to assess the goodness-of-fit of the different models; the smaller the DIC, the better the model fit. Malaria cases were clustered in both space and time in high-altitude villages. Spatial scan statistics identified a total of 56 spatial clusters of malaria in high-altitude villages. Of these, 39 were the most likely clusters (LLR = 15.62, p < 0.00001) and 17 were secondary clusters (LLR = 7.05, p < 0.03). The significant most likely temporal malaria clusters were detected between August and December (LLR = 17.87, p < 0.001). Travelling away from home, male sex, and age above 15 years had statistically significant effects on malaria clusters at high-altitude villages. The study identified spatial clusters of malaria cases occurring at high elevation villages within the district. A patient who travelled away from home to a malaria-endemic area might be the most probable source of malaria infection in a high-altitude village.
Malaria interventions in high altitude villages should address factors associated with malaria clustering.
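
    The Bernoulli-model scan statistic used in such studies compares case proportions inside and outside a candidate window; a minimal sketch of the log-likelihood ratio, following Kulldorff's Bernoulli formulation (the counts below are illustrative, not the study's data):

```python
from math import log

def term(k, r):
    """k * log(r) with the convention 0 * log(0) = 0."""
    return k * log(r) if k > 0 else 0.0

def bernoulli_scan_llr(c, n, C, N):
    """Log-likelihood ratio for a candidate window containing c cases among its
    n individuals, with C cases among N individuals overall. Returns 0 unless
    the in-window case proportion exceeds the outside one (a high-rate cluster)."""
    p = c / n                # case proportion inside the window
    q = (C - c) / (N - n)    # case proportion outside the window
    if p <= q:
        return 0.0
    alt = (term(c, p) + term(n - c, 1 - p)
           + term(C - c, q) + term(N - n - (C - c), 1 - q))
    null = term(C, C / N) + term(N - C, 1 - C / N)
    return alt - null

# A strongly clustered window: 30 of its 100 individuals are cases,
# against 50 cases among 1000 individuals overall.
print(bernoulli_scan_llr(30, 100, 50, 1000))  # large positive LLR (≈ 41.5)
```

    In practice the scan maximizes this LLR over many candidate windows and assesses significance by Monte Carlo replication, which is what produces the reported p-values.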

  12. Football fever: goal distributions and non-Gaussian statistics

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopic point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows us to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the cold war times as well as the unified league after 1990 to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
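
    The self-affirmation modification described above can be sketched as a Bernoulli process whose success probability grows after each success. The per-goal increment kappa and the step counts below are illustrative choices, not the paper's fitted values; the point is only that reinforcement produces the overdispersion a plain Bernoulli model lacks:

```python
import random

def goals_plain(p, steps, rng):
    """Ordinary Bernoulli scoring: fixed success probability per time step."""
    return sum(rng.random() < p for _ in range(steps))

def goals_self_affirming(p0, kappa, steps, rng):
    """Self-affirmation: each goal raises the scoring probability by kappa."""
    p, goals = p0, 0
    for _ in range(steps):
        if rng.random() < p:
            goals += 1
            p = min(1.0, p + kappa)
    return goals

def dispersion(xs):
    """Variance-to-mean ratio: 1 for Poisson, < 1 for a plain Bernoulli process."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m

rng = random.Random(42)
n_matches = 20000
plain = [goals_plain(0.03, 90, rng) for _ in range(n_matches)]
boosted = [goals_self_affirming(0.03, 0.02, 90, rng) for _ in range(n_matches)]
print(dispersion(plain), dispersion(boosted))  # self-affirmation is overdispersed
```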

  13. Alternative Proofs for Inequalities of Some Trigonometric Functions

    ERIC Educational Resources Information Center

    Guo, Bai-Ni; Qi, Feng

    2008-01-01

    By using an identity relating to Bernoulli's numbers and power series expansions of cotangent function and logarithms of functions involving sine function, cosine function and tangent function, four inequalities involving cotangent function, sine function, secant function and tangent function are established.
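
    The Bernoulli-number identity behind such power-series arguments is the classical cotangent expansion, stated here for reference (the four specific inequalities are not reproduced in the abstract):

```latex
x\cot x \;=\; \sum_{n=0}^{\infty} \frac{(-4)^{n} B_{2n}}{(2n)!}\,x^{2n}
        \;=\; 1 - \frac{x^{2}}{3} - \frac{x^{4}}{45} - \cdots,
\qquad 0 < |x| < \pi .
```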

  14. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    NASA Astrophysics Data System (ADS)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.

  15. Industrial entrepreneurial network: Structural and functional analysis

    NASA Astrophysics Data System (ADS)

    Medvedeva, M. A.; Davletbaev, R. H.; Berg, D. B.; Nazarova, J. J.; Parusheva, S. S.

    2016-12-01

    The structure and functioning of two model industrial entrepreneurial networks are investigated in the present paper. One of these networks forms during the implementation of an integrated project and consists of eight agents, which interact with each other and with the external environment. The other is drawn from a municipal economy and is based on a set of 12 real business entities. Analysis of the networks is carried out on the basis of the matrix of mutual payments aggregated over a certain time period. The matrix is created by the methods of experimental economics. Social Network Analysis (SNA) methods and instruments were used in the present research. The basic structural characteristics were investigated: quantitative parameters such as density, diameter, clustering coefficient, different kinds of centrality, etc. They were compared with random Bernoulli graphs of the corresponding size and density. The discovered differences between the random and entrepreneurial network structures are explained by the peculiarities of agent functioning in a production network. Separately, the closed exchange circuits (cyclically closed contours of the graph) forming an autopoietic (self-replicating) network pattern were identified. The purpose of the functional analysis was to identify the contribution of the autopoietic network pattern to the network's gross product. It was found that the magnitude of this contribution is more than 20%. Such a value justifies using a complementary currency to stimulate the economic activity of network agents.
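
    The Bernoulli (Erdős–Rényi) benchmark used in such SNA comparisons can be sketched with the standard library alone. The 12 vertices match the municipal network's size, while the edge probability 0.35 is an illustrative placeholder for the observed density:

```python
import random
from itertools import combinations

def bernoulli_graph(n, p, rng):
    """G(n, p): each of the n*(n-1)/2 possible edges is present independently
    with probability p (one Bernoulli trial per vertex pair)."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def density(adj):
    """Fraction of possible edges actually present."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 2 * edges / (n * (n - 1))

def avg_clustering(adj):
    """Mean local clustering coefficient (0 for vertices of degree < 2)."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

rng = random.Random(7)
g = bernoulli_graph(12, 0.35, rng)
print(density(g), avg_clustering(g))
```

    Averaging these statistics over many generated graphs gives the random baseline against which the payment-matrix networks are compared.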

  16. Ergodicity of two hard balls in integrable polygons

    NASA Astrophysics Data System (ADS)

    Bálint, Péter; Troubetzkoy, Serge

    2004-11-01

    We prove the hyperbolicity, ergodicity and thus the Bernoulli property of two hard balls in one of the following four polygons: the square, the equilateral triangle, the 45°-45°-90° triangle or the 30°-60°-90° triangle.

  17. Biomechanics of hair cell kinocilia: experimental measurement of kinocilium shaft stiffness and base rotational stiffness with Euler–Bernoulli and Timoshenko beam analysis

    PubMed Central

    Spoon, Corrie; Grant, Wally

    2011-01-01

    Vestibular hair cell bundles in the inner ear contain a single kinocilium composed of a 9+2 microtubule structure. Kinocilia play a crucial role in transmitting movement of the overlying mass, otoconial membrane or cupula to the mechanotransducing portion of the hair cell bundle. Little is known regarding the mechanical deformation properties of the kinocilium. Using a force-deflection technique, we measured two important mechanical properties of kinocilia in the utricle of a turtle, Trachemys (Pseudemys) scripta elegans. First, we measured the stiffness of kinocilia with different heights. These kinocilia were assumed to be homogenous cylindrical rods and were modeled as both isotropic Euler–Bernoulli beams and transversely isotropic Timoshenko beams. Two mechanical properties of the kinocilia were derived from the beam analysis: flexural rigidity (EI) and shear rigidity (kGA). The Timoshenko model produced a better fit to the experimental data, predicting EI=10,400 pN μm2 and kGA=247 pN. Assuming a homogenous rod, the shear modulus (G=1.9 kPa) was four orders of magnitude less than Young's modulus (E=14.1 MPa), indicating that significant shear deformation occurs within deflected kinocilia. When analyzed as an Euler–Bernoulli beam, which neglects translational shear, EI increased linearly with kinocilium height, giving underestimates of EI for shorter kinocilia. Second, we measured the rotational stiffness of the kinocilium insertion (κ) into the hair cell's apical surface. Following BAPTA treatment to break the kinocilial links, the kinocilia remained upright, and κ was measured as 177±47 pN μm rad–1. The mechanical parameters we quantified are important for understanding how forces arising from head movement are transduced and encoded by hair cells. PMID:21307074
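
    For a cantilevered tip load, the difference between the two beam models reduces to an extra shear term in the compliance; a sketch using the EI and kGA values reported above (the tip-load formulas are the standard cantilever results, and the 8 µm height is an illustrative kinocilium length, not a measured one):

```python
def stiffness_eb(EI, L):
    """Euler-Bernoulli cantilever tip stiffness: F/delta = 3*EI / L**3."""
    return 3 * EI / L ** 3

def stiffness_timoshenko(EI, kGA, L):
    """Timoshenko adds a shear term to the compliance:
    delta/F = L**3 / (3*EI) + L / kGA."""
    return 1.0 / (L ** 3 / (3 * EI) + L / kGA)

EI, kGA, L = 10400.0, 247.0, 8.0   # pN*um^2, pN, um (L is illustrative)
k_eb = stiffness_eb(EI, L)
k_t = stiffness_timoshenko(EI, kGA, L)
print(k_eb, k_t)  # shear deformation makes the Timoshenko beam more compliant
```

    With these numbers the shear compliance L/kGA is of the same order as the bending compliance, which is why the Euler–Bernoulli fit overestimates stiffness for short kinocilia.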

  18. Localization of quantum Bernoulli noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Caishi; Zhang, Jihong

    2013-10-15

    The family (∂_k, ∂_k*)_{k≥0} of annihilation and creation operators acting on square integrable functionals of a Bernoulli process Z = (Z_k)_{k≥0} can be interpreted as quantum Bernoulli noises. In this note we consider the operator family (ℓ_k, ℓ_k*)_{k≥0}, where ℓ_k = ∂_k E_k, with E_k being the conditional expectation (operator) given the σ-field σ(Z_j; 0 ≤ j ≤ k). We show that ℓ_k (resp. ℓ_k*) is essentially a kind of localization of the annihilation operator ∂_k (resp. creation operator ∂_k*). We examine properties of the family (ℓ_k, ℓ_k*)_{k≥0} and prove, among other things, that ℓ_k and ℓ_k* satisfy a local canonical anti-commutation relation and that (ℓ_k*)_{k≥0} forms a mutually orthogonal operator sequence, although each ℓ_k is not a projection operator. We find that the operator series Σ_{k=0}^∞ ℓ_k* X ℓ_k converges in the strong operator topology for each bounded operator X acting on square integrable functionals of Z. In particular, we obtain an explicit sum of the operator series Σ_{k=0}^∞ ℓ_k* ℓ_k. A useful norm estimate on Σ_{k=0}^∞ ℓ_k* X ℓ_k is also obtained. Finally, we show applications of our main results to quantum dynamical semigroups and quantum probability.

  19. Gap Flows through Idealized Topography. Part I: Forcing by Large-Scale Winds in the Nonrotating Limit.

    NASA Astrophysics Data System (ADS)

    Gabersek, Sasa.; Durran, Dale R.

    2004-12-01

    Gap winds produced by a uniform airstream flowing over an isolated flat-top ridge cut by a straight narrow gap are investigated by numerical simulation. On the scale of the entire barrier, the proportion of the oncoming flow that passes through the gap is relatively independent of the nondimensional mountain height, even over the range for which there is the previously documented transition from a “flow over the ridge” regime to a “flow around” regime. The kinematics and dynamics of the gap flow itself were investigated by examining mass and momentum budgets for control volumes at the entrance, central, and exit regions of the gap. These analyses suggest three basic behaviors: the linear regime (small nondimensional mountain height), in which there is essentially no enhancement of the gap flow; the mountain wave regime (values near 1.5), in which vertical mass and momentum fluxes play a crucial role in creating very strong winds near the exit of the gap; and the upstream-blocking regime (values near 5), in which lateral convergence generates the strongest winds near the entrance of the gap. Trajectory analysis of the flow in the strongest events, the mountain wave events, confirms the importance of net subsidence in creating high wind speeds. Neglect of vertical motion in applications of Bernoulli's equation to gap flows is shown to lead to unreasonable wind speed predictions whenever the temperature at the gap exit exceeds that at the gap entrance. The distribution of the Bernoulli function on an isentropic surface shows a correspondence between regions of high Bernoulli function and high wind speeds in the gap-exit jet similar to that previously documented for shallow-water flow.


  20. An Entropy-Based Measure of Dependence between Two Groups of Random Variables. Research Report. ETS RR-07-20

    ERIC Educational Resources Information Center

    Kong, Nan

    2007-01-01

    In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…

  1. Dynamic modelling and adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances

    NASA Astrophysics Data System (ADS)

    Yang, Xinxin; Ge, Shuzhi Sam; He, Wei

    2018-04-01

    In this paper, both the closed-form dynamics and adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances are developed. The dynamic model of the system is described with assumed modes approach and Lagrangian method. The flexible manipulators are represented as Euler-Bernoulli beams. Based on singular perturbation technique, the displacements/joint angles and flexible modes are modelled as slow and fast variables, respectively. A sliding mode control is designed for trajectories tracking of the slow subsystem under unknown but bounded disturbances, and an adaptive sliding mode control is derived for slow subsystem under unknown slowly time-varying disturbances. An optimal linear quadratic regulator method is proposed for the fast subsystem to damp out the vibrations of the flexible manipulators. Theoretical analysis validates the stability of the proposed composite controller. Numerical simulation results demonstrate the performance of the closed-loop flexible space robot system.

  2. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumption about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.

  3. Excitation of ship waves by a submerged object: New solution to the classical problem

    NASA Astrophysics Data System (ADS)

    Arzhannikov, A. V.; Kotelnikov, I. A.

    2016-08-01

    We have proposed a new method for solving the problem of ship waves excited on the surface of a nonviscous liquid by a submerged object that moves at a variable speed. As a first application of this method, we have obtained a new solution to the classic problem of ship waves generated by a submerged ball that moves rectilinearly with constant velocity parallel to the equilibrium surface of the liquid. For this example, we have derived asymptotic expressions describing the vertical displacement of the liquid surface in the limit of small and large values of the Froude number. The exact solution is presented in the form of two terms, each of which is reduced to one-dimensional integrals. One term describes the "Bernoulli hump" and another term the "Kelvin wedge." As a second example, we considered vertical oscillation of the submerged ball. In this case, the solution leads to the calculation of one-dimensional integral and describes surface waves propagating from the epicenter above the ball.

  4. Excitation of ship waves by a submerged object: New solution to the classical problem.

    PubMed

    Arzhannikov, A V; Kotelnikov, I A

    2016-08-01

    We have proposed a new method for solving the problem of ship waves excited on the surface of a nonviscous liquid by a submerged object that moves at a variable speed. As a first application of this method, we have obtained a new solution to the classic problem of ship waves generated by a submerged ball that moves rectilinearly with constant velocity parallel to the equilibrium surface of the liquid. For this example, we have derived asymptotic expressions describing the vertical displacement of the liquid surface in the limit of small and large values of the Froude number. The exact solution is presented in the form of two terms, each of which is reduced to one-dimensional integrals. One term describes the "Bernoulli hump" and another term the "Kelvin wedge." As a second example, we considered vertical oscillation of the submerged ball. In this case, the solution leads to the calculation of one-dimensional integral and describes surface waves propagating from the epicenter above the ball.

  5. Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures

    NASA Astrophysics Data System (ADS)

    Shatalov, Alexander V.

    The present method is based on Helmholtz velocity decomposition where velocity is written as a sum of irrotational (gradient of a potential) and rotational (correction due to vorticity) components. Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and NACA 0012 airfoil shape. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in literature. The results demonstrate that the proposed formulation leads to a good approximation with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.

  6. EVALUATION OF RIGHT AND LEFT VENTRICULAR DIASTOLIC FILLING

    PubMed Central

    Pasipoularides, Ares

    2013-01-01

    A conceptual fluid-dynamics framework for diastolic filling is developed. The convective deceleration load (CDL) is identified as an important determinant of ventricular inflow during the E-wave (A-wave) upstroke. Convective deceleration occurs as blood moves from the inflow anulus through larger-area cross-sections toward the expanding walls. Chamber dilatation underlies previously unrecognized alterations in intraventricular flow dynamics. The larger the chamber, the larger become the endocardial surface and the CDL. CDL magnitude affects strongly the attainable E-wave (A-wave) peak. This underlies the concept of diastolic ventriculoannular disproportion. Large vortices, whose strength decreases with chamber dilatation, ensue after the E-wave peak and impound inflow kinetic energy, averting an inflow-impeding, convective Bernoulli pressure-rise. This reduces the CDL by a variable extent depending on vortical intensity. Accordingly, the filling vortex facilitates filling to varying degrees, depending on chamber volume. The new framework provides stimulus for functional genomics research, aimed at new insights into ventricular remodeling. PMID:23585308

  7. Optimization of GM(1,1) power model

    NASA Astrophysics Data System (ADS)

    Luo, Dang; Sun, Yu-ling; Song, Bo

    2013-10-01

    The GM(1,1) power model is an extension of the traditional GM(1,1) model and the Grey Verhulst model. Compared with the traditional models, the GM(1,1) power model has the following advantage: the power exponent that best matches the actual data values can be found by a suitable technique, so the GM(1,1) power model can reflect nonlinear features of the data and simulate and forecast with high accuracy. Determining the best power exponent is a key step in the modeling process. In this paper, noting that the whitening equation of the GM(1,1) power model is a Bernoulli equation, we transform it by variable substitution into the linear form of the GM(1,1) whitening equation; a properly constructed grey differential equation then yields the GM(1,1) power model, whose parameters are solved by the pattern search method. Finally, we illustrate the effectiveness of the new method with the example of simulating and forecasting the promotion rates from senior secondary schools to higher education in China.
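
    The variable substitution described above is the standard Bernoulli-equation linearization. In conventional notation (assumed here, not taken verbatim from the paper), the whitening equation and its linearized form are:

```latex
\frac{dx}{dt} + a\,x = b\,x^{\gamma},
\qquad y = x^{1-\gamma}
\;\Longrightarrow\;
\frac{dy}{dt} + (1-\gamma)\,a\,y = (1-\gamma)\,b .
```

    The transformed equation is linear in y, so it can be handled with the machinery of the ordinary GM(1,1) model before mapping back via x = y^{1/(1-\gamma)}.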

  8. Idea Bank.

    ERIC Educational Resources Information Center

    Science Teacher, 1989

    1989-01-01

    Describes classroom activities and models for migration, mutation, and isolation; a diffusion model; Bernoulli's principle; sound in a vacuum; time regression mystery of DNA; seating chart lesson plan; algae mystery laboratory; water as mass; science fair; flipped book; making a cloud; wet mount slide; timer adaptation; thread slide model; and…

  9. Characterizing information propagation through inter-vehicle communication on a simple network of two parallel roads

    DOT National Transportation Integrated Search

    2010-10-01

    In this report, we study information propagation via inter-vehicle communication along two parallel : roads. By identifying an inherent Bernoulli process, we are able to derive the mean and variance of : propagation distance. A road separation distan...

  10. Origins of astronautics in Switzerland

    NASA Technical Reports Server (NTRS)

    Wadlis, A.

    1977-01-01

    Swiss contributions to astronautics are recounted. Scientists mentioned include: Bernoulli and Euler for their early theoretical contributions; the balloonist, Auguste Piccard; J. Ackeret, for his contributions to the study of aerodynamics; the rocket propulsion pioneer, Josef Stemmer; and the Swiss space scientists, Eugster, Stettbacker, Zwicky, and Schurch.

  11. Fun with Physics.

    ERIC Educational Resources Information Center

    McGrath, Susan

    This book shows how physics relates to daily life. Chapters included are: (1) "Physics of Fun" (dealing with the concepts of friction, Bernoulli's principle, lift, buoyancy, adhesion, cohesion, surface tension, gas expansion, waves, light, mirror images, and solar cells); (2) "Physics of Nature" (illustrating the concepts of inertia, static…

  12. Learning Physics in a Water Park

    ERIC Educational Resources Information Center

    Cabeza, Cecilia; Rubido, Nicolás; Martí, Arturo C.

    2014-01-01

    Entertaining and educational experiments that can be conducted in a water park, illustrating physics concepts, principles and fundamental laws, are described. These experiments are suitable for students ranging from senior secondary school to junior university level. Newton's laws of motion, Bernoulli's equation, based on the conservation of…

  13. Towards Explaining the Water Siphon

    ERIC Educational Resources Information Center

    Jumper, William D.; Stanchev, Boris

    2014-01-01

    Many high school and introductory college physics courses cover topics in fluidics through the Bernoulli and Poiseuille equations, and consequently one might think that siphons should present an excellent opportunity to engage students in various laboratory measurement exercises incorporating these fascinating devices. However, the flow rates (or…

  14. Nonlinear Earthquake Analysis of Reinforced Concrete Frames with Fiber and Bernoulli-Euler Beam-Column Element

    PubMed Central

    Karaton, Muhammet

    2014-01-01

    A beam-column element based on the Euler-Bernoulli beam theory is researched for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained by using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predictor-corrector form of the Bossak-α method is applied for the dynamic integration scheme. A comparison of experimental data for an RC column element with numerical results obtained from the proposed solution technique is carried out to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a reinforced concrete portal frame are obtained to compare the proposed solution technique with the fibre element based on the flexibility method. In addition, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are examined. PMID:24578667

  15. Glottal flow through a two-mass model: comparison of Navier-Stokes solutions with simplified models.

    PubMed

    de Vries, M P; Schutte, H K; Veldman, A E P; Verkerke, G J

    2002-04-01

    A new numerical model of the vocal folds is presented based on the well-known two-mass models of the vocal folds. The two-mass model is coupled to a model of glottal airflow based on the incompressible Navier-Stokes equations. Glottal waves are produced using different initial glottal gaps and different subglottal pressures. Fundamental frequency, glottal peak flow, and closed phase of the glottal waves have been compared with values known from the literature. The phonation threshold pressure was determined for different initial glottal gaps. The phonation threshold pressure obtained using the flow model with Navier-Stokes equations corresponds better to values determined in normal phonation than the phonation threshold pressure obtained using the flow model based on the Bernoulli equation. Using the Navier-Stokes equations, an increase of the subglottal pressure causes the fundamental frequency and the glottal peak flow to increase, whereas the fundamental frequency in the Bernoulli-based model does not change with increasing pressure.

  16. Nonequilibrium Transport and the Bernoulli Effect of Electrons in a Two-Dimensional Electron Gas

    NASA Astrophysics Data System (ADS)

    Kaya, Ismet I.

    2013-02-01

    Nonequilibrium transport of charged carriers in a two-dimensional electron gas is summarized from an experimental point of view. The transport regime in which the electron-electron interactions are enhanced at high bias leads to a range of striking effects in a two-dimensional electron gas. This regime of transport is quite different from ballistic transport, in which particles propagate coherently with no intercarrier energy transfer, and from diffusive transport, in which the momentum of the electron system is lost through the involvement of phonons. Quite a few hydrodynamic phenomena observed in classical gases have electrical analogs in current flow. When intercarrier scattering events dominate the transport, momentum sharing via narrow-angle scattering between hot and cold electrons leads to negative resistance and electron pumping, which can be viewed as the analog of the Bernoulli-Venturi effect observed in classical gases. The recent experimental findings and the background work in the field are reviewed.

  17. Long-term stable time integration scheme for dynamic analysis of planar geometrically exact Timoshenko beams

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed

    2017-05-01

    In this paper, an energy-momentum method for geometrically exact Timoshenko-type beams is proposed. The classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam, with its use of rotational degrees of freedom, leads to simpler strain relations and simpler expressions of the inertial terms compared to the well-known Bernoulli-type model. The treatment of the Bernoulli model was recently addressed by the authors. In the present work, we extend our approach of using the strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are exactly computed. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum and angular momentum is proved formally and numerically. The excellent performance of the formulation is demonstrated through a range of examples.

  18. Three dimensional steady subsonic Euler flows in bounded nozzles

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Xie, Chunjing

    The existence and uniqueness of three dimensional steady subsonic Euler flows in rectangular nozzles were obtained when prescribing the normal component of momentum at both the entrance and exit. If, in addition, the normal component of the vorticity and the variation of Bernoulli's function at the entrance are both zero, then there exists a unique subsonic potential flow when the magnitude of the normal component of the momentum is less than a critical number. As the magnitude of the normal component of the momentum approaches the critical number, the associated flows converge to a subsonic-sonic flow. Furthermore, when the normal component of vorticity and the variation of the Bernoulli function are both small, the existence and uniqueness of subsonic Euler flows with non-zero vorticity are established. The proof of these results is based on a new formulation of the Euler system, a priori estimates for nonlinear elliptic equations with nonlinear boundary conditions, a detailed study of a linear div-curl system, and delicate estimates for the transport equations.

  19. Bernoulli, Darwin, and Sagan: the probability of life on other planets

    NASA Astrophysics Data System (ADS)

    Rossmo, D. Kim

    2017-04-01

    The recent discovery that billions of planets in the Milky Way Galaxy may be in circumstellar habitable zones has renewed speculation over the possibility of extraterrestrial life. The Drake equation is a probabilistic framework for estimating the number of technologically advanced civilizations in our Galaxy; however, many of the equation's component probabilities are either unknown or have large error intervals. In this paper, a different method of examining this question is explored, one that replaces the various Drake factors with a single estimate for the probability of life existing on Earth. This relationship can be described by the binomial distribution if the presence of life on a given number of planets is equated to successes in a Bernoulli trial. The question of exoplanet life may then be reformulated as follows: given the probability of one or more independent successes for a given number of trials, what is the probability of two or more successes? Some of the implications of this approach for finding life on exoplanets are discussed.
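
    The reformulation in the abstract reduces to elementary binomial algebra. A minimal sketch (illustrative numbers only, not Rossmo's estimates): given P(X >= 1) over n planets, back out the per-planet probability p and evaluate P(X >= 2):

```python
from math import exp, expm1, log1p

def p_two_or_more(p_at_least_one, n):
    """P(X >= 2) for X ~ Binomial(n, p), where p is recovered from
    P(X >= 1) = 1 - (1 - p)^n."""
    log_q = log1p(-p_at_least_one) / n   # log(1 - p), computed stably
    p = -expm1(log_q)                    # per-trial success probability
    # P(X >= 2) = 1 - (1-p)^n - n * p * (1-p)^(n-1)
    return 1.0 - (1.0 - p_at_least_one) - n * p * exp((n - 1) * log_q)

# Illustrative: if P(at least one success) = 1 - e^-1 over a billion
# trials, the Poisson limit gives P(at least two) -> 1 - 2/e (about 0.264).
print(p_two_or_more(1 - exp(-1), 10**9))
```

    The stable `log1p`/`expm1` forms matter here because p is tiny when n is astronomically large.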

  20. Control volume analyses of glottal flow using a fully-coupled numerical fluid-structure interaction model

    NASA Astrophysics Data System (ADS)

    Yang, Jubiao; Krane, Michael; Zhang, Lucy

    2013-11-01

    Vocal fold vibrations and the glottal jet are successfully simulated using the modified Immersed Finite Element method (mIFEM), a fully coupled dynamics approach to model fluid-structure interactions. A self-sustained and steady vocal fold vibration is captured given a constant pressure input at the glottal entrance. The flow rates at different axial locations in the glottis are calculated, showing small variations among them due to the vocal fold motion and deformation. To further facilitate the understanding of the phonation process, two control volume analyses, specifically with Bernoulli's equation and Newton's 2nd law, are carried out for the glottal flow based on the simulation results. A generalized Bernoulli's equation is derived to interpret the correlations between velocity and pressure, temporally and spatially, along the centerline (which is a streamline) using a half-space model with a symmetry boundary condition. A specialized Newton's 2nd law equation is developed and divided into terms to help understand the driving mechanism of the glottal flow.

  1. The behavior of a liquid drop levitated and drastically flattened by an intense sound field

    NASA Technical Reports Server (NTRS)

    Lee, C. P.; Anilkumar, A. V.; Wang, Taylor G.

    1992-01-01

    The deformation and break-up of a liquid drop levitated by radiation pressure are studied. Using high-speed photography, ripples are observed on the central membrane of the drop, along with atomization of the membrane by emission of satellite drops from its unstable ripples, and shattering of the drop after upward buckling like an umbrella, or after horizontal expansion like a sheet. These effects are captured on video. The ripples are theorized to be capillary waves generated by the Faraday instability excited by the sound vibration. Atomization occurs whenever the membrane becomes so thin that the vibration is sufficiently intense. The vibration leads to a destabilizing Bernoulli correction in the static pressure. Buckling occurs when an existing equilibrium is unstable to a radial (i.e., tangential) motion of the membrane because of the Bernoulli effect. In addition, the radiation stress at the rim of the drop is a suction stress which can make equilibrium impossible, leading to the horizontal expansion and the subsequent break-up.

  2. Variable Thermal-Force Bending of a Three-Layer Bar with a Compressible Filler

    NASA Astrophysics Data System (ADS)

    Starovoitov, E. I.; Leonenko, D. V.

    2017-11-01

    Deformation of a three-layer elastoplastic bar with a compressible filler in a temperature field is considered. To describe the kinematics of a pack asymmetric across its thickness, the broken-line hypothesis is accepted, according to which the Bernoulli hypothesis holds in the thin bearing layers, and the Timoshenko hypothesis holds for a filler compressible across its thickness, with a linear approximation of displacements across the layer thickness. The work of the filler in the tangential direction is taken into account. The physical stress-strain relations correspond to the theory of small elastoplastic deformations. Temperature variations are calculated from a formula obtained by averaging the thermophysical properties of the layer materials across the bar thickness. Using the variational method, a system of differential equilibrium equations is derived. On the boundary, the kinematic conditions of simply supported ends of the bar are assumed. The solution of the boundary problem is reduced to the search for four functions, namely, deflections and longitudinal displacements of the median surfaces of the bearing layers. An analytical solution is derived by the method of elastic solutions with the use of the Moskvitin theorem on variable loadings. Its numerical analysis is performed for the cases of continuous and local loads.

  3. Physics Proofs of Four Millennium-Problems(MP) via CATEGORY-SEMANTICS(C-S)/F=C Aristotle SQUARE-of-OPPOSITION(SoO) DEduction-LOGIC DichotomY

    NASA Astrophysics Data System (ADS)

    Clay, London; Siegel, Edward Carl-Ludwig

    2011-03-01

    Siegel-Baez Cognitive-Category-Semantics"(C-C-S) tabular list-format matrix truth-table analytics SoO jargonial-obfuscation elimination query WHAT? yields four "pure"-maths MP "Feet of Clay!!!" proofs: (1) Siegel [AMS Natl.Mtg.(02)-Abs.973-03-126: (CCNY;64)(94;Wiles)] Fermat's: Last-Thm. = Least-Action Ppl.; (2) P=/=NP TRIVIAL simple Euclid geometry/dimensions: NO computer anything"Feet of Clay!!!"; (3) Birch-Swinnerton-Dyer conjecture; (4) Riemann-hypotheses via COMBO.: Siegel[AMS Natl.Mtg.(02)-Abs.973-60-124] digits log-law inversion to ONLY BEQS with ONLY zero-digit BEC, AND Rayleigh[1870;graph-thy."short-CUT method"[Doyle-Snell, Random-Walks & Electric-Nets,MAA(81)]-"Anderson"[(58)] critical-strip C-localization!!! SoO DichotomY ("V") IdentitY: #s:(Euler v Bernoulli) = (Sets v Multisets) = Quantum-Statistics(FD v BE) = Power-Spectra(1/f(0) v 1/f(1)) = Conic-Sections(Ellipse v Hyperbola) = Extent(Locality v Globality);Siegel[(89)] (so MIScalled) "complexity" as UTTER-SIMPLICITY(!!!) v COMPLICATEDNESS MEASURE(S) definition.

  4. Mixed H2/H∞ distributed robust model predictive control for polytopic uncertain systems subject to actuator saturation and missing measurements

    NASA Astrophysics Data System (ADS)

    Song, Yan; Fang, Xiaosheng; Diao, Qingda

    2016-03-01

    In this paper, we discuss the mixed H2/H∞ distributed robust model predictive control problem for polytopic uncertain systems subject to randomly occurring actuator saturation and packet loss. The global system is decomposed into several subsystems, all connected by a fixed-topology network over which packet loss among the subsystems occurs. To better use the information successfully transmitted via the network, both actuator saturation and packet loss resulting from the limited communication bandwidth are taken into consideration. A novel distributed controller model is established to account for actuator saturation and packet loss in a unified representation by using two sets of Bernoulli distributed white sequences with known conditional probabilities. With the nonlinear feedback control law represented by the convex hull of a group of linear feedback laws, the distributed controllers for the subsystems are obtained by solving a linear matrix inequality (LMI) optimisation problem. Finally, numerical studies demonstrate the effectiveness of the proposed techniques.

  5. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary innovation in this paper is a detailed analysis of the existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope and a twenty-degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi Astro-H satellite, approximately 3 days after loss of communication and potential break-up, is examined.

  6. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
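
    The simplest version of such a reference distribution uses a single Bernoulli (Erdős-Rényi) component: simulate graphs at the observed density and express the observed statistic as a z-score against that reference. A minimal sketch using triangle counts (the paper's mixture construction and survey data are not reproduced; this is the one-component special case):

```python
import random
from itertools import combinations

def triangles(adj, n):
    """Count triangles in an undirected graph given as a boolean adjacency matrix."""
    return sum(1 for i, j, k in combinations(range(n), 3)
               if adj[i][j] and adj[j][k] and adj[i][k])

def bernoulli_zscore(n, density, observed_stat, sims=200, seed=0):
    """z-score of a network statistic against a G(n, p) Bernoulli reference."""
    rng = random.Random(seed)
    ref = []
    for _ in range(sims):
        adj = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                adj[i][j] = adj[j][i] = rng.random() < density
        ref.append(triangles(adj, n))
    mean = sum(ref) / sims
    var = sum((x - mean) ** 2 for x in ref) / (sims - 1)
    return (observed_stat - mean) / var ** 0.5, mean
```

    Because the reference is simulated at each network's own size and density, z-scores from networks of different sizes become comparable in a way raw triangle counts are not.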

  7. The Priority Heuristic: Making Choices without Trade-Offs

    ERIC Educational Resources Information Center

    Brandstatter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph

    2006-01-01

    Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic…

  8. Apparatus for Teaching Physics

    ERIC Educational Resources Information Center

    Gottlieb, Herbert H.

    1977-01-01

    Describes: how to measure index of refraction by the thickness method; how to teach the concept of torque using a torque wrench; how to produce a real image with a concave mirror; how to eliminate the interface effects of Pyrex containers; and an apparatus to illustrate Bernoulli's Principle. (MLH)

  9. Understanding Wing Lift

    ERIC Educational Resources Information Center

    Silva, J.; Soares, A. A.

    2010-01-01

    The conventional explanation of aerodynamic lift based on Bernoulli's equation is one of the most common mistakes in presentations to school students and is found in children's science books. The fallacies in this explanation together with an alternative explanation for aerofoil lift have already been presented in an excellent article by Babinsky…

  10. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
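
    For binary variables the maximal coupling in the abstract has a closed form: two Bernoulli variables with means p and q can coincide with probability at most 1 - |p - q|. A minimal sketch of the coupled joint distribution (the standard construction, assumed here rather than quoted from the paper):

```python
def maximal_coupling(p, q):
    """Joint pmf of (A, B) with A ~ Bernoulli(p), B ~ Bernoulli(q),
    maximizing P(A == B) = 1 - |p - q|."""
    both_one = min(p, q)            # put as much mass as possible on (1, 1)
    both_zero = min(1 - p, 1 - q)   # and on (0, 0)
    return {
        (1, 1): both_one,
        (0, 0): both_zero,
        (1, 0): p - both_one,       # leftover marginal mass for A
        (0, 1): q - both_one,       # leftover marginal mass for B
    }
```

    Checking whether these pairwise maximal couplings can coexist with the jointly distributed context-sharing variables is what the contextuality criterion in the abstract formalizes.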

  11. Baseline Experiments on Coulomb Damping due to Rotational Slip

    DTIC Science & Technology

    1992-12-01

    by Griffel [21]. As expected, Equation (2-39) matches the result given by Griffel. 2.2.2. Euler-Bernoulli Beam versus Timoshenko Beam. Omitted from Euler...McGraw-Hill, Inc., 1983. 20. Clark, S. K., Dynamics of Continuous Elements, New Jersey, Prentice-Hall, Inc., 1972. 21. Griffel, W., Beam Formulas

  12. Asteroid Lightcurve Analysis at Elephant Head Observatory: 2012 November - 2013 April

    NASA Astrophysics Data System (ADS)

    Alkema, Michael S.

    2013-07-01

    Thirteen asteroids were observed from Elephant Head Observatory from 2012 November to 2013 April: the main-belt asteroids 227 Philosophia, 331 Etheridgea, 577 Rhea, 644 Cosima, 850 Altona, 906 Repsolda, 964 Subamara, 973 Aralia, 1016 Anitra, 1024 Hale, 2034 Bernoulli, 2556 Louise, and Jupiter Trojan 3063 Makhaon.

  13. Eradicating a Disease: Lessons from Mathematical Epidemiology

    ERIC Educational Resources Information Center

    Glomski, Matthew; Ohanian, Edward

    2012-01-01

    Smallpox remains the only human disease ever eradicated. In this paper, we consider the mathematics behind control strategies used in the effort to eradicate smallpox, from the life tables of Daniel Bernoulli, to the more modern susceptible-infected-removed (SIR)-type compartmental models. In addition, we examine the mathematical feasibility of…

  14. When Science Soars.

    ERIC Educational Resources Information Center

    Baird, Kate A.; And Others

    1997-01-01

    Describes an inquiry-based activity involving paper airplanes that has been used as a preservice training tool for instructors of a Native American summer science camp, and as an activity for demonstrating inquiry-based methods in a secondary science methods course. Focuses on Bernoulli's principle which describes how fluids move over and around…

  15. Methods for the identification of material parameters in distributed models for flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.; Rosen, I. G.

    1986-01-01

    Theoretical and numerical results are presented for inverse problems involving estimation of spatially varying parameters such as stiffness and damping in distributed models for elastic structures such as Euler-Bernoulli beams. An outline of algorithms used and a summary of computational experiences are presented.

  16. Filling or Draining a Water Bottle with Two Holes

    ERIC Educational Resources Information Center

    Cross, Rod

    2016-01-01

    Three simple experiments are described using a small water bottle with two holes in the side of the bottle. The main challenge is to predict and then explain the observations, but the arrangements can also be used for quantitative measurements concerning hydrostatic pressure, Bernoulli's equation, surface tension and bubble formation.

  17. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1979-01-01

    Describes the following: a low-pressure sodium light source; a design of hot cathodes for plasma and electron physics experiments; a demonstration cart for a physics of sound course; Bernoulli force using coffee cups; a spark recording for the linear air track; and a demonstration of the effect of altering the cavity resonance of a violin. (GA)

  18. Capillary waves in the subcritical nonlinear Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozyreff, G.

    2010-01-15

    We expand recent results on the nonlinear Schroedinger equation with cubic-quintic nonlinearity to show that some solutions are described by the Bernoulli equation in the presence of surface tension. As a consequence, capillary waves are predicted and found numerically at the interface between regions of large and low amplitude.

  19. Structural Influence of Dynamics of Bottom Loads

    DTIC Science & Technology

    2014-02-10

    using the Numerette research craft, are underway. Early analytic research on slamming was done by von Karman [5] using a momentum approach, and by...pressure q(x,t) as two constant pressures, qi and qj, traveling at a constant speed c. Using the Euler-Bernoulli beam assumptions the governing

  20. Nonlinear Acoustic Metamaterials for Sound Attenuation Applications

    DTIC Science & Technology

    2011-03-16

    elastic guides, which are discretized into Bernoulli-Euler beam elements [29]. We first describe the equations of particles' motion in the DE model...to 613 N in the curved one [see Fig. 15(b)]. Overall, the area under the force-time curve, which corresponds to the amount of momentum transferred

  1. Hermann-Bernoulli-Laplace-Hamilton-Runge-Lenz Vector.

    ERIC Educational Resources Information Center

    Subramanian, P. R.; And Others

    1991-01-01

    A way for students to refresh and use their knowledge in both mathematics and physics is presented. By the study of the properties of the "Runge-Lenz" vector the subjects of algebra, analytical geometry, calculus, classical mechanics, differential equations, matrices, quantum mechanics, trigonometry, and vector analysis can be reviewed. (KR)

  2. The Cheapbook: A Compendium of Inexpensive Exhibit Ideas, 1995 Edition.

    ERIC Educational Resources Information Center

    Orselli, Paul, Ed.

    This guide includes complete installation descriptions of 30 exhibits. They include: the adjustable birthday cake, ball-in-tube, Bernoulli Box, chain wave, collapsible truss bridge, double wave device, eddy currents raceway, full-length mirror, geodesic domes, giant magnetic tangrams, harmonic cantilever, hyperboloid of revolution, lifting lever,…

  3. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Such understanding is useful in evaluating the performance of data compression schemes.

  4. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minimums.
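
    The mismatch described above follows from closed forms for the minimum: if the X(sub i) are Geometric(p) on {1, 2, ...}, their minimum is Geometric(1 - (1-p)^n), while the minimum of n exponentials with the same mean 1/p has mean 1/(np). A minimal numeric sketch (notation assumed, not the paper's):

```python
def emin_geometric(n, p):
    """E[min of n iid Geometric(p) on {1, 2, ...}] = 1 / (1 - (1-p)^n)."""
    return 1.0 / (1.0 - (1.0 - p) ** n)

def emin_exponential(n, p):
    """E[min of n iid Exponential(rate p), i.e. mean 1/p] = 1 / (n p)."""
    return 1.0 / (n * p)

# Per the abstract, the ratio E(X(sub 1)) / E(Y(sub 1)) equals the expected
# number of ties at the minimum among the geometric variables.
n, p = 2, 0.5
print(emin_geometric(n, p) / emin_exponential(n, p))  # 1.3333333333333333
```

    For n = 2 and p = 0.5 the ratio is 4/3: the geometric minimum is inflated relative to the exponential one by exactly the expected tie count.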

  5. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  6. Evaluating atmospheric blocking in the global climate model EC-Earth

    NASA Astrophysics Data System (ADS)

    Hartung, Kerstin; Hense, Andreas; Kjellström, Erik

    2013-04-01

    Atmospheric blocking is a phenomenon of the midlatitude troposphere which plays an important role in climate variability. Therefore a correct representation of blocking in climate models is necessary, especially for evaluating the results of climate projections. In my master's thesis a validation of blocking in the coupled climate model EC-Earth is performed. Blocking events are detected based on the Tibaldi-Molteni index. At first, a comparison with the reanalysis dataset ERA-Interim is conducted. The blocking frequency as a function of longitude shows a small general underestimation of blocking in the model - a well-known problem. Scaife et al. (2011) proposed correcting the model bias as a way to solve this problem. However, applying the correction to the higher resolution EC-Earth model does not yield any improvement. Composite maps show a link between blocking events and surface variables. One example is the formation of a positive surface temperature anomaly north and a negative anomaly south of the blocking anticyclone. In winter the surface temperature in EC-Earth is reproduced quite well, but in summer a cold bias over the inner-European ocean is present. Using generalized linear models (GLMs) I want to study the connection between regional blocking and global atmospheric variables further. GLMs have the advantage of being applicable to non-Gaussian variables. Therefore the blocking index at each longitude, which is Bernoulli distributed, can be analysed statistically with GLMs. I applied a logistic regression between the blocking index and the geopotential height at 500 hPa to study the teleconnection of blocking events at midlatitudes with global geopotential height. GLMs also offer the possibility of quantifying the connections shown in composite maps. The implementation of the logistic regression can even be expanded to a search for trends in blocking frequency, for example in the scenario simulations.
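
The logistic-regression GLM described here links a Bernoulli blocking index to a covariate through the log-odds. A minimal sketch on synthetic data, fitted by iteratively reweighted least squares (the variable name z500, the single standardized predictor, and the true coefficients are illustrative assumptions, not the thesis setup):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: a standardized z500 anomaly and a Bernoulli blocking
# index whose log-odds depend linearly on it (true intercept -1, slope 2).
n = 5000
z500 = rng.normal(size=n)
logits = -1.0 + 2.0 * z500
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))   # Bernoulli draws

# Fit the GLM by iteratively reweighted least squares (Newton's method).
X = np.column_stack([np.ones(n), z500])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))    # fitted probabilities
    W = mu * (1.0 - mu)                     # Bernoulli variance weights
    grad = X.T @ (y - mu)
    hess = (X * W[:, None]).T @ X
    beta = beta + np.linalg.solve(hess, grad)

print(beta)   # estimates of (intercept, slope)
```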

  7. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
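
The procedure described above can be sketched directly: compute the largest eigenvalue of the correlation matrix, then build the null distribution by independently re-ordering k−1 of the variables. A minimal sketch on synthetic correlated data (sample size, noise level, and permutation count are arbitrary choices):

```python
import numpy as np

def largest_eig_stat(X):
    """Largest eigenvalue of the correlation matrix of the columns of X."""
    return np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)).max()

def randomization_test(X, n_perm=999, rng=None):
    """Permutation p-value: re-order k-1 columns independently each round."""
    rng = np.random.default_rng(rng)
    observed = largest_eig_stat(X)
    count = 0
    for _ in range(n_perm):
        Xp = X.copy()
        for j in range(1, X.shape[1]):      # leave column 0 fixed
            rng.shuffle(Xp[:, j])
        if largest_eig_stat(Xp) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Correlated toy data: three noisy copies of a common factor.
rng = np.random.default_rng(1)
f = rng.normal(size=100)
X = np.column_stack([f + 0.5 * rng.normal(size=100) for _ in range(3)])
stat, p = randomization_test(X, rng=2)
print(stat, p)   # large statistic, small p-value
```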

  8. Small-area spatiotemporal analysis of heatwave impacts on elderly mortality in Paris: A cluster analysis approach.

    PubMed

    Benmarhnia, Tarik; Kihal-Talantikite, Wahida; Ragettli, Martina S; Deguen, Séverine

    2017-08-15

    Heat waves have a substantial public health burden. Understanding spatial heterogeneity at a fine spatial scale in relation to heat and related mortality is central to targeting interventions towards vulnerable communities. To determine the spatial variability of heat-wave-related mortality risk among the elderly in Paris, France at the census block level. We also aimed to assess area-level social and environmental determinants of high mortality risk within Paris. We used daily mortality data from 2004 to 2009 among people aged >65 at the French census block level within Paris. We used two definitions of heat wave days, which were compared to non-heat-wave days. A Bernoulli cluster analysis method was applied to identify high-risk clusters of mortality during heat waves. We performed random effects meta-regression analyses to investigate factors associated with the magnitude of the mortality risk. The spatial approach revealed a spatial aggregation of death cases during heat wave days. We found that small-scale chronic PM10 exposure was associated with a 0.02 (95% CI: 0.001; 0.045) increase in the risk of dying during a heat wave episode. We also found a positive association with the percentage of foreigners and the percentage of labor force, while the proportion of elderly people living in the neighborhood was negatively associated. We also found that green space density had a protective effect and, inversely, that the density of constructed features increased the risk of dying during a heat wave episode. We showed that spatial variation in heat-related vulnerability exists within Paris and that it can be explained by some contextual factors. This study can be useful for designing interventions targeting more vulnerable areas and reducing the burden of heat waves. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Translational Bounds for Factorial n and the Factorial Polynomial

    ERIC Educational Resources Information Center

    Mahmood, Munir; Edwards, Phillip

    2009-01-01

    During the period 1729-1826 Bernoulli, Euler, Goldbach and Legendre developed expressions for defining and evaluating "n"! and the related gamma function. Expressions related to "n"! and the gamma function are a common feature in computer science and engineering applications. In the modern computer age people live in now, two common tests to…

  10. Beckham as Physicist?

    ERIC Educational Resources Information Center

    Ireson, Gren

    2001-01-01

    If football captures the interest of students, it can be used to teach physics. In this case, a Beckham free-kick can be used to introduce concepts such as drag, the Bernoulli principle, Reynolds number, and the Magnus effect by asking the simple question: How does he curve the ball so much? Introduces basic mechanics along the way. (Author/ASK)

  11. Multi-Object Filtering for Space Situational Awareness

    DTIC Science & Technology

    2014-06-01

    labelling such as the labelled multi-Bernoulli filter [27]. 3.2 Filter derivation: key modelling assumptions Out of the general filtering framework [14...radiation pressure in the cannonball model has been taken into account, leading to the following acceleration: a_rad = −F_p · C · (A/m) · (E/c) · (A_Earth/|r − r_Sun|)² · ê_satSun

  12. The Demise of Decision Making: How Information Superiority Degrades Our Ability to Make Decisions

    DTIC Science & Technology

    2013-05-20

    studied the topic of risk in relation to decision making. In fact, Daniel Bernoulli produced findings in 1738 connecting risk aversion to wealth and...determined that they were stalled for some reason and not fighting. Angered by this unplanned halt and potential loss of momentum, Franks sought answers

  13. Half Empty or Half Full?

    ERIC Educational Resources Information Center

    Rohr, Tyler; Rohr, Jim

    2015-01-01

    Previously appearing in this journal were photographs of a physics apparatus, developed circa 1880, that was believed to be used to demonstrate the "Bernoulli effect." Drawings of these photographs appear here and show that when there is no flow, the water level h[subscript PT2] in the piezometer tube at location (2) is at the same level…

  14. The Counter-Intuitive Non-Informative Prior for the Bernoulli Family

    ERIC Educational Resources Information Center

    Zhu, Mu; Lu, Arthur Y.

    2004-01-01

    In Bayesian statistics, the choice of the prior distribution is often controversial. Different rules for selecting priors have been suggested in the literature, which, sometimes, produce priors that are difficult for the students to understand intuitively. In this article, we use a simple heuristic to illustrate to the students the rather…

  15. Simplified modelling and analysis of a rotating Euler-Bernoulli beam with a single cracked edge

    NASA Astrophysics Data System (ADS)

    Yashar, Ahmed; Ferguson, Neil; Ghandchi-Tehrani, Maryam

    2018-04-01

    The natural frequencies and mode shapes of the flapwise and chordwise vibrations of a rotating cracked Euler-Bernoulli beam are investigated using a simplified method. This approach is based on obtaining the lateral deflection of the cracked rotating beam by subtracting the potential energy of a rotating massless spring, which represents the crack, from the total potential energy of the intact rotating beam. With this new method, it is assumed that the admissible function which satisfies the geometric boundary conditions of an intact beam is valid even in the presence of a crack. Furthermore, the centrifugal stiffness due to rotation is considered as an additional stiffness, which is obtained from the rotational speed and the geometry of the beam. Finally, the Rayleigh-Ritz method is utilised to solve the eigenvalue problem. The validity of the results is confirmed at different rotational speeds, crack depth and location by comparison with solid and beam finite element model simulations. Furthermore, the mode shapes are compared with those obtained from finite element models using a Modal Assurance Criterion (MAC).
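
The Rayleigh-Ritz machinery the authors use can be illustrated on the simplest special case: an intact, non-rotating cantilever with polynomial admissible functions satisfying the clamped-end geometric boundary conditions (the paper's crack spring and centrifugal stiffening would enter as extra energy terms). A minimal nondimensional sketch:

```python
import numpy as np

# Rayleigh-Ritz for a uniform cantilever Euler-Bernoulli beam
# (nondimensional EI = rho*A = L = 1). Admissible functions w_i = x**(i+1)
# satisfy the clamped-end geometric boundary conditions w(0) = w'(0) = 0.
n = 6
powers = np.arange(2, 2 + n)

K = np.empty((n, n))   # bending strain energy: integral of w_i'' * w_j''
M = np.empty((n, n))   # kinetic (mass) term: integral of w_i * w_j
for i, p in enumerate(powers):
    for j, q in enumerate(powers):
        K[i, j] = p * q * (p - 1) * (q - 1) / (p + q - 3)
        M[i, j] = 1.0 / (p + q + 1)

# Generalized eigenvalue problem K v = omega**2 M v
omega2 = np.linalg.eigvals(np.linalg.solve(M, K)).real
omega1 = np.sqrt(omega2.min())
print(omega1)   # ~3.5160 = 1.8751**2, the classical cantilever value
```

Rayleigh-Ritz estimates approach the true fundamental frequency from above, so the printed value sits just at or above the classical result.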

  16. Bridging the Gap Between Stationary Homogeneous Isotropic Turbulence and Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Sohrab, Siavash

    A statistical theory of stationary isotropic turbulence is presented with eddies possessing a Gaussian velocity distribution, a Maxwell-Boltzmann speed distribution in harmony with perceptions of Heisenberg, and a Planck energy distribution in harmony with perceptions of Chandrasekhar and in agreement with experimental observations of Van Atta and Chen. Defining the action S = −mΦ in terms of the velocity potential of atomic motion, a scale-invariant Schrödinger equation is derived from the invariant Bernoulli equation. Thus, the gap between the problems of turbulence and quantum mechanics is closed through connections between the Cauchy-Euler-Bernoulli equations of hydrodynamics, the Hamilton-Jacobi equation of classical mechanics, and finally the Schrödinger equation of quantum mechanics. Transitions of a particle (molecular cluster c_ji) from a small rapidly-oscillating eddy e_j (high-energy level j) to a large slowly-oscillating eddy e_i (low-energy level i) lead to emission of a sub-particle (molecule m_ji) that carries away the excess energy ε_ji = h(ν_j − ν_i) in harmony with the Bohr theory of atomic spectra. NASA Grant No. NAG3-1863.

  17. Acoustic Attraction

    NASA Astrophysics Data System (ADS)

    Oviatt, Eric; Patsiaouris, Konstantinos; Denardo, Bruce

    2009-11-01

    A sound source of finite size produces a diverging traveling wave in an unbounded fluid. A rigid body that is small compared to the wavelength experiences an attractive radiation force (toward the source). An attractive force is also exerted on the fluid itself. The effect can be demonstrated with a styrofoam ball suspended near a loudspeaker that is producing sound of high amplitude and low frequency (for example, 100 Hz). The behavior can be understood and roughly calculated as a time-averaged Bernoulli effect. A rigorous scattering calculation yields a radiation force that is within a factor of two of the Bernoulli result. For a spherical wave, the force decreases as the inverse fifth power of the distance from the source. Applications of the phenomenon include ultrasonic filtration of liquids and the growth of supermassive black holes that emit sound waves in a surrounding plasma. An experiment is being conducted in an anechoic chamber with a 1-inch diameter aluminum ball that is suspended from an analytical balance. Directly below the ball is a baffled loudspeaker that exerts an attractive force that is measured by the balance.

  18. δ-Generalized Labeled Multi-Bernoulli Filter Using Amplitude Information of Neighboring Cells

    PubMed Central

    Liu, Chao; Lei, Peng; Qi, Yaolong

    2018-01-01

    The amplitude information (AI) of echoed signals plays an important role in radar target detection and tracking. A lot of research shows that the introduction of AI enables the tracking algorithm to distinguish targets from clutter better and then improves the performance of data association. The current AI-aided tracking algorithms only consider the signal amplitude in the range-azimuth cell where measurement exists. However, since radar echoes always contain backscattered signals from multiple cells, the useful information of neighboring cells would be lost if directly applying those existing methods. In order to solve this issue, a new δ-generalized labeled multi-Bernoulli (δ-GLMB) filter is proposed. It exploits the AI of radar echoes from neighboring cells to construct a united amplitude likelihood ratio, and then plugs it into the update process and the measurement-track assignment cost matrix of the δ-GLMB filter. Simulation results show that the proposed approach has better performance in target’s state and number estimation than that of the δ-GLMB only using single-cell AI in low signal-to-clutter-ratio (SCR) environment. PMID:29642595

  19. Symmetries and integrability of a fourth-order Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Bokhari, Ashfaque H.; Mahomed, F. M.; Zaman, F. D.

    2010-05-01

    The complete symmetry group classification of the fourth-order Euler-Bernoulli ordinary differential equation, where the elastic modulus and the area moment of inertia are constants and the applied load is a function of the normal displacement, is obtained. We perform the Lie and Noether symmetry analysis of this problem. In the Lie analysis, the principal Lie algebra which is one dimensional extends in four cases, viz. the linear, exponential, general power law, and a negative fractional power law. It is further shown that two cases arise in the Noether classification with respect to the standard Lagrangian. That is, the linear case for which the Noether algebra dimension is one less than the Lie algebra dimension as well as the negative fractional power law. In the latter case the Noether algebra is three dimensional and is isomorphic to the Lie algebra which is sl(2,R). This exceptional case, although admitting the nonsolvable algebra sl(2,R), remarkably allows for a two-parameter family of exact solutions via the Noether integrals. The Lie reduction gives a second-order ordinary differential equation which has nonlocal symmetry.

  20. Discriminative Bayesian Dictionary Learning for Classification.

    PubMed

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learning discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels with the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
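
The finite Beta Process approximation mentioned here pairs each dictionary atom with a Beta-distributed selection probability and Bernoulli usage indicators. A minimal sketch of that generative step only (the hyperparameter names a0, b0 and all values are illustrative, not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite approximation of a Beta Process: K candidate atoms, each with a
# selection probability drawn from Beta(a0/K, b0*(K-1)/K).
K, a0, b0, n_samples = 256, 2.0, 2.0, 2000
pi = rng.beta(a0 / K, b0 * (K - 1) / K, size=K)

# Bernoulli indicators: which atoms each sample selects for its expansion.
Z = rng.random((n_samples, K)) < pi           # shape (n_samples, K)
atoms_per_sample = Z.sum(axis=1)

# E[pi_k] = a0 / (a0 + b0*(K-1)), so the expected number of atoms used per
# sample stays near a0/b0 even as K grows: most atoms go unused (sparsity).
print(atoms_per_sample.mean())
```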

  1. Collision partner selection schemes in DSMC: From micro/nano flows to hypersonic flows

    NASA Astrophysics Data System (ADS)

    Roohi, Ehsan; Stefanov, Stefan

    2016-10-01

    The motivation of this review paper is to present a detailed summary of different collision models developed in the framework of the direct simulation Monte Carlo (DSMC) method. The emphasis is put on a newly developed collision model, i.e., the Simplified Bernoulli trial (SBT), which permits efficient low-memory simulation of rarefied gas flows. The paper starts with a brief review of the governing equations of the rarefied gas dynamics including Boltzmann and Kac master equations and reiterates that the linear Kac equation reduces to a non-linear Boltzmann equation under the assumption of molecular chaos. An introduction to the DSMC method is provided, and principles of collision algorithms in the DSMC are discussed. A distinction is made between those collision models that are based on classical kinetic theory (time counter, no time counter (NTC), and nearest neighbor (NN)) and the other class that could be derived mathematically from the Kac master equation (pseudo-Poisson process, ballot box, majorant frequency, null collision, Bernoulli trials scheme and its variants). To provide a deeper insight, the derivation of both collision models, either from the principles of the kinetic theory or the Kac master equation, is provided with sufficient details. Some discussions on the importance of subcells in the DSMC collision procedure are also provided and different types of subcells are presented. 
The paper then focuses on the simplified version of the Bernoulli trials algorithm (SBT) and presents a detailed summary of the validation of the SBT family of collision schemes (SBT on transient adaptive subcells: SBT-TAS, and intelligent SBT: ISBT) in a broad spectrum of rarefied gas-flow test cases, ranging from low-speed internal micro and nano flows to external hypersonic flow, emphasizing first the accuracy of these new collision models and second demonstrating that the SBT family of schemes, compared to other conventional and recent collision models, requires a smaller number of particles per cell to obtain sufficiently accurate solutions.
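
The Bernoulli-trials idea behind this family can be sketched schematically: every candidate pair in a cell is tested with a small collision probability, and accepted pairs scatter isotropically. This is a hard-sphere toy illustration only, not the actual SBT partner selection or probability normalization; the probability coefficient stands in for n_eff·σ_T·c_r·Δt/V_c:

```python
import numpy as np

rng = np.random.default_rng(3)

def bernoulli_trials_collisions(v, coeff, rng):
    """Schematic Bernoulli-trials pair selection in one DSMC cell.

    Every pair (i, j) is tested; a collision occurs with probability
    coeff * |v_i - v_j|. Accepted pairs scatter isotropically, which
    conserves momentum and kinetic energy exactly.
    """
    N = len(v)
    for i in range(N - 1):
        for j in range(i + 1, N):
            cr = np.linalg.norm(v[i] - v[j])   # relative speed
            if rng.random() < coeff * cr:
                # isotropic unit vector for the post-collision relative velocity
                u = rng.normal(size=3)
                u /= np.linalg.norm(u)
                vcm = 0.5 * (v[i] + v[j])
                v[i] = vcm + 0.5 * cr * u
                v[j] = vcm - 0.5 * cr * u
    return v

v = rng.normal(size=(40, 3))           # particle velocities in one cell
p0, e0 = v.sum(axis=0), (v**2).sum()   # momentum and kinetic energy before
v = bernoulli_trials_collisions(v, coeff=0.05, rng=rng)
print(np.allclose(v.sum(axis=0), p0), np.isclose((v**2).sum(), e0))
```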

  2. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
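
The mixture-model idea can be illustrated with a two-component Gaussian mixture fitted by EM on synthetic data (a far simpler setting than the paper's mixture regression models; all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic population with two latent "tactics": two Gaussian clusters.
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 200)])

# EM for a two-component univariate Gaussian mixture.
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
sig = np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibilities of each component for each observation
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates of mixture weights, means, and sds
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.sort(mu), np.sort(w))   # means near 0 and 5, weights near 0.4 and 0.6
```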

  3. Networked iterative learning control design for discrete-time systems with stochastic communication delay in input and output channels

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ruan, Xiaoe

    2017-07-01

    This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delay occurring in the input and output channels and modelled as a 0-1 Bernoulli-type stochastic variable. In the two schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration, whilst for the delayed signal of the system output one scheme substitutes the synchronous predetermined desired trajectory and the other takes the synchronous output at the previous operation, respectively. By virtue of the mathematical expectation, the tracking performance is analysed, which shows that for both linear time-invariant and nonlinear affine systems the two kinds of NILC are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices is full-column rank. Lastly, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.
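
The first replacement scheme described here can be sketched on a scalar toy plant: when a Bernoulli delay hits an output sample, the update substitutes the synchronous desired value, i.e. treats the error as zero at that instant. All plant parameters and gains below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy plant: x(t+1) = a*x(t) + b*u(t), y(t) = x(t); zero initial state.
T, a, b = 20, 0.3, 1.0
yd = np.sin(np.linspace(0.0, np.pi, T + 1))   # desired trajectory

def run(u):
    x, y = 0.0, np.zeros(T + 1)
    for t in range(T):
        x = a * x + b * u[t]
        y[t + 1] = x
    return y

gamma = 0.3   # probability that an output sample is hit by a random delay
L = 0.5       # learning gain; convergence needs |1 - L*b| < 1
u, errs = np.zeros(T), []
for k in range(30):
    e = yd - run(u)
    errs.append(np.abs(e).max())
    # First scheme in the record: a delayed output sample is replaced by the
    # synchronous desired value, i.e. the error is treated as zero there.
    delayed = rng.random(T + 1) < gamma
    e_used = np.where(delayed, 0.0, e)
    u = u + L * e_used[1:]        # D-type-style ILC update
print(errs[0], errs[-1])          # tracking error shrinks across iterations
```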

  4. Fluid-Structure Modeling and Simulation of a Modified KC-135R Icing Tanker Boom

    DTIC Science & Technology

    2013-01-07

    representative boom. Bernoulli beam elements with six degrees of freedom per node are used to model the water tubes. Each tube was discretized with 101... ball vertex spring analogy and leverages the ALE formulation of AERO-F. The number of increments used to deform the mesh in the vicinity of the

  6. Creating a Project on Difference Equations with Primary Sources: Challenges and Opportunities

    ERIC Educational Resources Information Center

    Ruch, David

    2014-01-01

    This article discusses the creation of a student project about linear difference equations using primary sources. Early 18th-century developments in the area are outlined, focusing on efforts by Abraham De Moivre (1667-1754) and Daniel Bernoulli (1700-1782). It is explained how primary sources from these authors can be used to cover material…

  7. Using PISA 2003, Examining the Factors Affecting Students' Mathematics Achievement

    ERIC Educational Resources Information Center

    Demir, Ibrahim; Kilic, Serpil

    2010-01-01

    The purpose of this study is to examine the effects of learning strategies on mathematics achievement. The sample was compiled from students who participated in the Programme for International Student Assessment (PISA) in Turkey. The data consisted of 4493 15-year-old Turkish students in 158 schools, and were analyzed by a two-level Bernoulli model as a…

  8. The Physics of Flight: I. Fixed and Rotating Wings

    ERIC Educational Resources Information Center

    Linton, J. Oliver

    2007-01-01

    Almost all elementary textbook explanations of the theory of flight rely heavily on Bernoulli's principle and the fact that air travels faster over a wing than below it. In recent years the inadequacies and, indeed, fallacies in this explanation have been exposed (see Babinsky's excellent article in 2003 Phys. Educ. 38 497-503) and it is now…

  9. Degenerate Cauchy numbers of the third kind.

    PubMed

    Pyo, Sung-Soo; Kim, Taekyun; Rim, Seog-Hoon

    2018-01-01

    Since Cauchy numbers were introduced, various types of Cauchy numbers have been presented. In this paper, we define degenerate Cauchy numbers of the third kind and give some identities for the degenerate Cauchy numbers of the third kind. In addition, we give some relations between four kinds of the degenerate Cauchy numbers, the Daehee numbers and the degenerate Bernoulli numbers.
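
For readers unfamiliar with the classical (non-degenerate) Bernoulli numbers that these identities generalize, they follow from the recurrence sum_{j=0}^{n} C(n+1, j) B_j = 0 for n >= 1. A short sketch with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """First m+1 Bernoulli numbers B_0..B_m (convention B_1 = -1/2), via
    the recurrence sum_{j=0}^{n} C(n+1, j) * B_j = 0 solved for B_n."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        s = sum(comb(n + 1, j) * B[j] for j in range(n))
        B.append(-s / (n + 1))
    return B

B = bernoulli_numbers(8)
print(B)   # B_0..B_8: 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```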

  10. Biologically-variable rhythmic auditory cues are superior to isochronous cues in fostering natural gait variability in Parkinson's disease.

    PubMed

    Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S

    2017-01-01

    Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. An instrumental variable random-coefficients model for binary outcomes

    PubMed Central

    Chesher, Andrew; Rosen, Adam M

    2014-01-01

    In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048

  12. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
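
The Legendre expansion described here can be sketched for a single uniform variable: project f onto P_0..P_d by Gauss-Legendre quadrature, then read the mean and variance off the coefficients (the test function and degree are arbitrary choices):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Legendre PCE of y = f(xi) for xi ~ Uniform(-1, 1): project onto P_0..P_d
# with Gauss-Legendre quadrature; E[P_k**2] = 1/(2k+1) under this measure.
f = lambda xi: xi**2 + 0.5 * xi
d = 4
nodes, weights = leggauss(16)           # quadrature on [-1, 1]
c = np.zeros(d + 1)
for k in range(d + 1):
    Pk = Legendre.basis(k)(nodes)
    # c_k = <f, P_k> / <P_k, P_k>; the 1/2 density cancels in the ratio
    c[k] = (weights * f(nodes) * Pk).sum() / (weights * Pk**2).sum()

mean = c[0]
var = sum(c[k] ** 2 / (2 * k + 1) for k in range(1, d + 1))
print(mean, var)   # E[y] = 1/3, Var[y] = 4/45 + 1/12 = 31/180
```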

  13. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure usually follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
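
The baseline this record improves on can be sketched: a random ±1 Bernoulli sensing matrix with plain ISTA (a gradient step on the quadratic error term followed by soft thresholding for the sparsity term). The paper's per-iteration filtering step is omitted here, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# Sensing with a random Bernoulli (+1/-1) matrix, as in classical CS.
m, n, k = 80, 200, 5
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = Phi @ x_true

# ISTA: gradient step on ||y - Phi x||^2 plus soft thresholding (l1 prox).
step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1 / Lipschitz constant
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    x = x + step * Phi.T @ (y - Phi @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)   # small relative error
```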

  14. Physics Proofs of Four Millennium-Problems(MP) via CATEGORY-SEMANTICS(C-S)/F=C Aristotle SQUARE-of-OPPOSITION(SoO) DEduction-LOGIC DichotomY

    NASA Astrophysics Data System (ADS)

    Clay, L.; Siegel, E.

    2010-03-01

    Siegel-Baez C-S/F=C tabular list-format matrix truth-table analytics SoO jargonial-obfuscation elimination query WHAT? yields four ``pure''-maths MP ``Feet of Clay!!!'' proofs: (1) Siegel [AMS Natl. Mtg. (2002)-Abs.#:973-03-126: (@CCNY; 1964!!!) <<< (1994; Wiles)] Fermat's Last-Theorem = Least-Action Principle; (2) P=/=NP TRIVIAL simple Euclid geometry/dimensions: NO computer anything; ``Feet of Clay!!!''; (3) Birch-Swinnerton-Dyer conjecture; (4) Riemann-hypothesis via combination of: Siegel [AMS Natl. Mtg. (2002)-Abs.#:973-60-124] digits logarithmic-law simple algebraic-inversion to ONLY BEQS with ONLY zero-digit BEC, AND Rayleigh [(1870); graph-theory ``short-CUT method'' [Doyle-Snell, Random-Walks & Electric-Networks, MAA (1981)]-``Anderson'' [PRL (1958)] critical-strip 1/2 complex-plane localization!!! SoO DichotomY (``v'') IdentitY: numbers (Euler v Bernoulli) = (Sets v Multisets) = Quantum-Statistics (F.-D. v B.-E.) = Power-Spectra (1/f^(0) v 1/f^(1.000...)) = Conic-Sections (Ellipse v Parabola v Hyperbola) = Extent (Locality v Globality); Siegel [MRS Fractals Symp. (1989)] (so MIScalled) ``complexity'' as UTTER-SIMPLICITY (!!!) v COMPLICATEDNESS MEASURE(S) definition.

  15. Evaluation of Digital Compressed Sensing for Real-Time Wireless ECG System with Bluetooth low Energy.

    PubMed

    Wang, Yishan; Doleschel, Sammy; Wunderlich, Ralf; Heinen, Stefan

    2016-07-01

    In this paper, a wearable, fully wireless ECG system is first designed with Bluetooth Low Energy (BLE); it detects 3-lead ECG signals. Second, digital Compressed Sensing (CS) is implemented to increase the energy efficiency of the wireless ECG sensor. Different sparsifying bases, various compression ratios (CR) and several reconstruction algorithms are simulated and discussed. Finally, reconstruction is performed by an Android application (app) on a smartphone to display the signal in real time. The power efficiency is measured and compared with that of the system without CS. The optimum sparsifying basis, built from 3-level decomposed db4 wavelet coefficients, together with a 1-bit Bernoulli random matrix and the most suitable reconstruction algorithm, is selected through the simulations and applied on the sensor node and app. The signal is successfully reconstructed and displayed on the smartphone app. The battery life of the sensor node is extended from 55 h to 67 h; the presented wireless ECG system with CS thus extends battery life significantly, by 22%. With its compact form and long working time, the system provides a feasible solution for long-term homecare use.
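The 1-bit Bernoulli measurement step described above can be sketched as follows. The matrix size, compression ratio, and scaling are assumed values for illustration, not the paper's hardware parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def bernoulli_sensing_matrix(m, n, rng):
    """1-bit Bernoulli measurement matrix: entries drawn uniformly from
    {-1, +1}, scaled by 1/sqrt(m) so columns have unit expected norm."""
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

n, cr = 256, 4                 # signal length and compression ratio (assumed)
m = n // cr                    # number of compressive measurements
phi = bernoulli_sensing_matrix(m, n, rng)
x = rng.standard_normal(n)     # stand-in for a wavelet-sparse ECG segment
y = phi @ x                    # compressed samples, e.g. transmitted over BLE
```

On a microcontroller the ±1 entries reduce the sensing step to additions and subtractions, which is the energy argument for using a 1-bit Bernoulli matrix rather than a Gaussian one.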

  16. Labeled RFS-Based Track-Before-Detect for Multiple Maneuvering Targets in the Infrared Focal Plane Array.

    PubMed

    Li, Miao; Li, Jun; Zhou, Yiyu

    2015-12-08

    The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: the MM-LMB filter and the MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on the target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and the posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.

  17. Labeled RFS-Based Track-Before-Detect for Multiple Maneuvering Targets in the Infrared Focal Plane Array

    PubMed Central

    Li, Miao; Li, Jun; Zhou, Yiyu

    2015-01-01

    The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: the MM-LMB filter and the MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on the target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and the posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing. PMID:26670234

  18. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
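For one dimension, the construction reduces to the familiar inverse-CDF transform, and the recursive construction chains conditional inverse CDFs. A minimal sketch, with a hypothetical joint law chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_exponential(u, lam):
    """1-D inverse transform: if U ~ Uniform(0,1), then F^{-1}(U) has CDF F.
    For the exponential law F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1-u)/lam."""
    return -np.log1p(-u) / lam

# Conditional (recursive) construction in 2-D, as in the abstract:
# x1 = f1(U1), then x2 = f2(U1, U2) drawn from the conditional law given x1.
# Hypothetical joint law: X1 ~ Exp(1), and X2 | X1 = x ~ Exp(1 + x).
u = rng.random((10000, 2))
x1 = sample_exponential(u[:, 0], 1.0)
x2 = sample_exponential(u[:, 1], 1.0 + x1)   # conditional inverse CDF uses x1
```

Each f_i consumes one fresh uniform plus the previously generated coordinates, exactly the recursive dependence the abstract describes.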

  19. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-01-07

    These measures of risk view a random variable of interest in concert with an auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved…

  20. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
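The recursive-moment idea can be illustrated with the standard closed form that expresses raw binomial moments through Stirling numbers and falling factorials. The abstract cites Stirling numbers of the first kind; the identity below is the commonly used second-kind form, stated here as an assumption about the intended result, with a brute-force check against the pmf:

```python
from math import comb

def stirling2(m, k):
    """Stirling numbers of the second kind via the standard recurrence
    S(m, k) = k*S(m-1, k) + S(m-1, k-1)."""
    if m == k:
        return 1
    if k == 0 or k > m:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

def falling(n, k):
    """Falling factorial n*(n-1)*...*(n-k+1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def binomial_raw_moment(m, n, p):
    """E[X^m] for X ~ Binomial(n, p), using the identity
    E[X^m] = sum_k S(m, k) * falling(n, k) * p^k."""
    return sum(stirling2(m, k) * falling(n, k) * p**k for k in range(m + 1))

def binomial_raw_moment_direct(m, n, p):
    """Brute-force check by summing x^m over the binomial pmf."""
    return sum(x**m * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))
```

Central moments then follow by binomial expansion of E[(X - np)^m] in terms of the raw moments.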

  1. A Random Variable Related to the Inversion Vector of a Partial Random Permutation

    ERIC Educational Resources Information Center

    Laghate, Kavita; Deshpande, M. N.

    2005-01-01

    In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.

  2. A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables

    ERIC Educational Resources Information Center

    Vernizzi, Graziano; Nakai, Miki

    2015-01-01

    It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…

  3. Multilevel Model Prediction

    ERIC Educational Resources Information Center

    Frees, Edward W.; Kim, Jee-Seon

    2006-01-01

    Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…

  4. Does mean mean MEAN!? Digits For A Very Long Time Giving Us The Finger!: 1881 Statistics Log-Law was: Quanta=Digits!: BEC; Zipf 1/f-Law; Information-Thy; Random-#s = Euler V Bernoulli; Q-Computing = Arithmetic; P=/=NP SANS Complexity: Euclid 3-Mille

    NASA Astrophysics Data System (ADS)

    Siegel, Edward

    2008-03-01

    Classic statistics digits Newcomb[Am.J.Math.4,39,1881]-Weyl[Goett.Nachr.1912]-Benford[Proc.Am.Phil.Soc.78,4,51,1938]("NeWBe")probability ON-AVERAGE/MEAN log-law: =log[1+1/d]=log[(d+1)/d][google:``Benford's-Law'';"FUZZYICS": Siegel[AMS Nat.-Mtg.:2002&2008)]; Raimi[Sci.Am.221,109,1969]; Hill[Proc.AMS,123,3,887,1996]=log-base=units=SCALE-INVARIANCE!. Algebraic-inverse d=1/[ê(w)-1]: BOSONS(1924)=DIGITS(<1881): Energy-levels:ground=(d=0),first-(d=1)-excited ,... No fractions; only digit-integer-differences=quanta! Quo vadis digit =oo vs. <<,... simple-arithmetic!

  5. Fire and Water Demonstrate Law

    ERIC Educational Resources Information Center

    de Luca, R.; Ganci, S.

    2008-01-01

    In this article, the authors describe two classroom experiments that can be interpreted by means of Bernoulli's law. The first experiment uses a lighted candle in front of a mirror and a stream of air that is sent obliquely towards the mirror. The purpose of this experiment is to find out which way the flame will bend if air is blown at a given…

  6. Tracks in the Sand: Hooke's Pendulum "Cum Grano Salis"

    ERIC Educational Resources Information Center

    Babovic, Vukota; Babovic, Miloš

    2014-01-01

    The history of science remembers more than just formal facts about scientific discoveries. These side stories are often inspiring. One of them, the story of an unfulfilled death wish of Jacob Bernoulli regarding spirals, inspired us to look around ourselves. And we saw natural spirals around us, which led to the creation of a Hooke's…

  7. Flutter Instability of a Fluid-Conveying Fluid-Immersed Pipe Affixed to a Rigid Body

    DTIC Science & Technology

    2011-01-01

    rigid body, denoted by y in Fig. 4, is small. This is in addition to the Euler–Bernoulli beam assumption that the slope of the tail is small everywhere…here. These include the efficiency with which the prime mover can generate fluid momentum, pipe losses, and external drag acting on both the hull and the

  8. Bernoulli potential in type-I and weak type-II superconductors: II. Surface dipole

    NASA Astrophysics Data System (ADS)

    Lipavský, P.; Morawetz, K.; Koláček, J.; Mareš, J. J.; Brandt, E. H.; Schreiber, M.

    2004-09-01

    The Budd-Vannimenus theorem is modified to apply to superconductors in the Meissner state. The obtained identity links the surface value of the electrostatic potential to the density of free energy at the surface which allows one to evaluate the electrostatic potential observed via the capacitive pickup without the explicit solution of the charge profile.

  9. Focusing on the Nature of Causality in a Unit on Pressure: How Does It Affect Student Understanding?

    ERIC Educational Resources Information Center

    Basca, Belinda B.; Grotzer, Tina A.

    Although pressure forms the basis for understanding topics such as the internal structure of the earth, weather cycles, rock formation, Bernoulli's principle, and plate tectonics, the presence of this concept in the school curriculum is at a minimal level. This paper suggests that the ideas, misconceptions, and perceptions of students have to do…

  10. Non-contact handling device

    DOEpatents

    Reece, Mark [Albuquerque, NM; Knorovsky, Gerald A [Albuquerque, NM; MacCallum, Danny O [Edgewood, NM

    2007-05-15

    A pressurized fluid handling nozzle has a body with a first end and a second end, a fluid conduit and a recess at the second end. The first end is configured for connection to a pressurized fluid source. The fluid conduit has an inlet at the first end and an outlet at the recess. The nozzle uses the Bernoulli effect for lifting a part.

  11. Free Vibration Analysis of DWCNTs Using CDM and Rayleigh-Schmidt Based on Nonlocal Euler-Bernoulli Beam Theory

    PubMed Central

    2014-01-01

    The free vibration response of double-walled carbon nanotubes (DWCNTs) is investigated. The DWCNTs are modelled as two beams interacting through van der Waals forces, and the nonlocal Euler-Bernoulli beam theory is used. The governing equations of motion are derived using a variational approach and the free frequencies of vibrations are obtained employing two different approaches. In the first method, the two double-walled carbon nanotubes are discretized by means of the so-called “cell discretization method” (CDM), in which each nanotube is reduced to a set of rigid bars linked together by elastic cells. The resulting discrete system takes into account nonlocal effects, constraint elasticities, and the van der Waals forces. The second proposed approach, belonging to the semianalytical methods, is an optimized version of the classical Rayleigh quotient, as proposed originally by Schmidt. The resulting conditions are solved numerically. Numerical examples end the paper, in which the two approaches give lower and upper bounds to the true values, and some comparisons with existing results are offered. Comparisons of the present numerical results with those from the open literature show an excellent agreement. PMID:24715807

  12. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to standard methods such as a support vector machine, a GRBM and a combination model. In the experiments, the best fault classification rate was achieved by the proposed model. The results show that deep learning with statistical feature extraction has essential improvement potential for diagnosing rotating machinery faults.
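The Gaussian-Bernoulli conditionals such a machine alternates between can be sketched as one Gibbs sweep. This is a minimal illustration with unit visible variance and made-up dimensions, not the paper's stacked GDBM:

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grbm_gibbs_step(v, W, b, c, rng):
    """One Gibbs sweep of a Gaussian-Bernoulli RBM with unit visible variance:
    hidden units are Bernoulli given the Gaussian visibles, and visibles are
    Gaussian given the binary hiddens."""
    p_h = sigmoid(c + v @ W)                  # p(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h) * 1.0   # Bernoulli sample of hiddens
    v_new = b + h @ W.T + rng.standard_normal(b.shape)  # v | h ~ N(mean, 1)
    return v_new, h

n_vis, n_hid = 8, 4                            # illustrative sizes
W = 0.1 * rng.standard_normal((n_vis, n_hid))  # weights
b, c = np.zeros(n_vis), np.zeros(n_hid)        # visible and hidden biases
v = rng.standard_normal(n_vis)                 # stand-in statistical feature vector
v, h = grbm_gibbs_step(v, W, b, c, rng)
```

The Gaussian visible layer is what lets the model consume real-valued statistical features directly, rather than requiring binarized inputs as a plain Bernoulli-Bernoulli RBM would.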

  13. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to standard methods such as a support vector machine, a GRBM and a combination model. In the experiments, the best fault classification rate was achieved by the proposed model. The results show that deep learning with statistical feature extraction has essential improvement potential for diagnosing rotating machinery faults. PMID:27322273

  14. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when the data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered: one with relatively few covariates (model "small") and one with more (model "large"). The models contained two processes: a Bernoulli process (species prevalence) and a Poisson process (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computationally intensive.
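The two-process structure, a Bernoulli prevalence gating a Poisson intensity, gives the zero-inflated pmf directly, and the "unconditional intensity" mentioned above is simply prevalence times intensity. A small sketch with illustrative parameter values (not the fitted Wadden Sea model):

```python
import math

def zip_pmf(k, prev, lam):
    """Zero-inflated Poisson pmf: a Bernoulli presence process with
    probability `prev`, gated with a Poisson(`lam`) count when present.
    Zeros arise both from absence and from a Poisson draw of zero."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return (1 - prev) + prev * poisson
    return prev * poisson

prev, lam = 0.34, 6.0          # illustrative prevalence and intensity
# Unconditional intensity = prevalence * intensity, as in the abstract:
unconditional_mean = prev * lam
mean_from_pmf = sum(k * zip_pmf(k, prev, lam) for k in range(60))
```

This identity explains the abstract's observation: two fitted models can trade prevalence against intensity while agreeing closely on their product.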

  15. Size-dependent geometrically nonlinear free vibration analysis of fractional viscoelastic nanobeams based on the nonlocal elasticity theory

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Faraji Oskouie, M.; Gholami, R.

    2016-01-01

    In recent decades, mathematical modeling and engineering applications of fractional-order calculus have been extensively utilized to provide efficient simulation tools in the field of solid mechanics. In this paper, a nonlinear fractional nonlocal Euler-Bernoulli beam model is established using the concept of fractional derivative and nonlocal elasticity theory to investigate the size-dependent geometrically nonlinear free vibration of fractional viscoelastic nanobeams. The non-classical fractional integro-differential Euler-Bernoulli beam model contains the nonlocal parameter, viscoelasticity coefficient and order of the fractional derivative to interpret the size effect, viscoelastic material and fractional behavior in the nanoscale fractional viscoelastic structures, respectively. In the solution procedure, the Galerkin method is employed to reduce the fractional integro-partial differential governing equation to a fractional ordinary differential equation in the time domain. Afterwards, the predictor-corrector method is used to solve the nonlinear fractional time-dependent equation. Finally, the influences of nonlocal parameter, order of fractional derivative and viscoelasticity coefficient on the nonlinear time response of fractional viscoelastic nanobeams are discussed in detail. Moreover, comparisons are made between the time responses of linear and nonlinear models.

  16. Exact solutions for the static bending of Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model

    NASA Astrophysics Data System (ADS)

    Wang, Y. B.; Zhu, X. W.; Dai, H. H.

    2016-08-01

    Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. First, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.

  17. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability in a Banach space setting.

  18. Qualitatively Assessing Randomness in SVD Results

    NASA Astrophysics Data System (ADS)

    Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.

    2012-12-01

    Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks to identify a qualitative method of identifying random co-variability relationships between two data sets. The research identifies the heterogeneous correlation maps from several past results and compares these results with correlation maps produced using purely random and quasi-random climate data. The comparison identifies a methodology to determine if a particular region on a correlation map may be explained by a physical mechanism or is simply statistical chance.

  19. Science 101: What Makes a Curveball Curve?

    ERIC Educational Resources Information Center

    Robertson, William C.

    2009-01-01

    Ah, springtime, and young people's thoughts turn to... baseball, of course. But this column is not about "how" to throw a curveball, so you'll have to look that up on your own. Here, the focus is on the "why" of the curveball. There are two different things that cause a spinning ball to curve. One is known as the "Bernoulli effect" and the other…

  20. Distributed Market-Based Algorithms for Multi-Agent Planning with Shared Resources

    DTIC Science & Technology

    2013-02-01

    …over the deterministic planner, on the “test set” of scenarios with changing economies. … Multi-agent planning is… representation of the objective (4.2.1). For example, for the supply chain management problem, we assumed a sequence of Bernoulli coin flips, which seems

  1. Singularities and non-hyperbolic manifolds do not coincide

    NASA Astrophysics Data System (ADS)

    Simányi, Nándor

    2013-06-01

    We consider the billiard flow of elastically colliding hard balls on the flat ν-torus (ν ⩾ 2), and prove that no singularity manifold can even locally coincide with a manifold describing future non-hyperbolicity of the trajectories. As a corollary, we obtain the ergodicity (actually the Bernoulli mixing property) of all such systems, i.e. the verification of the Boltzmann-Sinai ergodic hypothesis.

  2. Reshaping USAF Culture and Strategy: Lasting Themes and Emerging Trends

    DTIC Science & Technology

    2011-12-12

    operations are well-rooted in the air and space experience, near space concepts have struggled to develop the organizational momentum… space). Nevertheless, by July 2005, the near space concept had achieved sufficient momentum for General Lance Lord (then Commander of Air Force… Bernoulli) the vertical dimension. Although operating at the upper reaches of the atmosphere, near space flight is bound by Bernoullian principles. The

  3. Boundary Layer Measurements in the Trisonic Gas-dynamics Facility Using Particle Image Velocimetery with CO2 Seeding

    DTIC Science & Technology

    2012-03-22

    understanding of fluid mechanics and aircraft design. The fundamental theories, concepts and equations developed by men like Newton, Bernoulli… resulting instantaneous flow field data from PIV, boundary layer effects, turbulence characteristics, vortex formation, and momentum thickness, for… divided by the momentum thickness, δ2, and displacement thickness, δ1, as seen in Equations (2.8) and (2.9).

  4. Identities associated with Milne-Thomson type polynomials and special numbers.

    PubMed

    Simsek, Yilmaz; Cakic, Nenad

    2018-01-01

    The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p-adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.

  5. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
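The role of system noise with Bernoulli trials in sharpening the centroid estimate resembles classical dithering, where noise decorrelates quantization error so that averaging many trials recovers sub-pixel information. The sketch below illustrates that general idea only; the position value and noise level are hypothetical, and this is not the patent's circuit:

```python
import numpy as np

rng = np.random.default_rng(3)

true_pos = 3.30        # hypothetical sub-pixel beam position, in pixels
n_trials = 20000

# Each trial quantizes a noisy reading to whole pixels. Because the system
# noise spans roughly a pixel, it dithers the quantizer, and averaging the
# quantized outcomes over many trials recovers the position to a fraction
# of a pixel -- far below the single-frame quantization step.
noisy = true_pos + rng.standard_normal(n_trials)
quantized = np.round(noisy)
estimate = quantized.mean()
```

Without the noise, every trial would round to the same pixel (here, 3) and averaging would gain nothing; the noise is what makes the repeated trials informative.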

  6. Aerodynamics: The Wright Way

    NASA Technical Reports Server (NTRS)

    Cole, Jennifer Hansen

    2010-01-01

    This slide presentation reviews some of the basic principles of aerodynamics. Included in the presentation are: a few demonstrations of the principles, an explanation of the concepts of lift, drag, thrust and weight, a description of Bernoulli's principle, the concept of the airfoil (i.e., the shape of the wing) and how that affects lift, and the method of controlling an aircraft by manipulating the four forces using control surfaces.

  7. Comparison of Poisson and Bernoulli spatial cluster analyses of pediatric injuries in a fire district

    PubMed Central

    Warden, Craig R

    2008-01-01

    Background: With limited resources available, injury prevention efforts need to be targeted both geographically and to specific populations. As part of a pediatric injury prevention project, data were obtained on all pediatric medical and injury incidents in a fire district to evaluate geographical clustering of pediatric injuries. This will be the first step in attempting to prevent these injuries with specific interventions depending on locations and mechanisms. Results: There were a total of 4803 incidents involving patients less than 15 years of age that the fire district responded to during 2001–2005, of which 1997 were categorized as injuries and 2806 as medical calls. The two cohorts (injured versus medical) differed in age distribution (7.7 ± 4.4 years versus 5.4 ± 4.8 years, p < 0.001) and location type of incident (school or church 12% versus 15%, multifamily residence 22% versus 13%, single family residence 51% versus 28%, sport, park or recreational facility 3% versus 8%, public building 8% versus 7%, and street or road 3% versus 30%, respectively, p < 0.001). Using the medical incident locations as controls, there was no significant clustering for environmental or assault injuries using the Bernoulli method, while there were four significant clusters for all injury mechanisms combined, 13 clusters for motor vehicle collisions, one for falls, and two for pedestrian or bicycle injuries. Using the Poisson cluster method on incidence rates by census tract identified four clusters for all injuries, three for motor vehicle collisions, four for fall injuries, and one each for environmental and assault injuries. The two detection methods shared a minority of overlapping geographical clusters. Conclusion: Significant clustering occurs overall for all injury mechanisms combined and for each mechanism depending on the cluster detection method used. There was some overlap in geographic clusters identified by both methods. The Bernoulli method allows more focused cluster mapping and evaluation since it directly uses location data. Once clusters are found, interventions can be targeted to specific geographic locations, location types, ages of victims, and mechanisms of injury. PMID:18808720

  8. Comparison of Poisson and Bernoulli spatial cluster analyses of pediatric injuries in a fire district.

    PubMed

    Warden, Craig R

    2008-09-22

    With limited resources available, injury prevention efforts need to be targeted both geographically and to specific populations. As part of a pediatric injury prevention project, data were obtained on all pediatric medical and injury incidents in a fire district to evaluate geographical clustering of pediatric injuries. This will be the first step in attempting to prevent these injuries with specific interventions depending on locations and mechanisms. There were a total of 4803 incidents involving patients less than 15 years of age that the fire district responded to during 2001-2005, of which 1997 were categorized as injuries and 2806 as medical calls. The two cohorts (injured versus medical) differed in age distribution (7.7 +/- 4.4 years versus 5.4 +/- 4.8 years, p < 0.001) and location type of incident (school or church 12% versus 15%, multifamily residence 22% versus 13%, single family residence 51% versus 28%, sport, park or recreational facility 3% versus 8%, public building 8% versus 7%, and street or road 3% versus 30%, respectively, p < 0.001). Using the medical incident locations as controls, there was no significant clustering for environmental or assault injuries using the Bernoulli method, while there were four significant clusters for all injury mechanisms combined, 13 clusters for motor vehicle collisions, one for falls, and two for pedestrian or bicycle injuries. Using the Poisson cluster method on incidence rates by census tract identified four clusters for all injuries, three for motor vehicle collisions, four for fall injuries, and one each for environmental and assault injuries. The two detection methods shared a minority of overlapping geographical clusters. Significant clustering occurs overall for all injury mechanisms combined and for each mechanism depending on the cluster detection method used. There was some overlap in geographic clusters identified by both methods. The Bernoulli method allows more focused cluster mapping and evaluation since it directly uses location data. Once clusters are found, interventions can be targeted to specific geographic locations, location types, ages of victims, and mechanisms of injury.
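
    The Bernoulli cluster method referred to above can be illustrated with a minimal sketch. The log-likelihood ratio below is Kulldorff's Bernoulli-model scan statistic for a single candidate zone; a real scan maximizes it over many circular zones and judges significance by Monte Carlo replication. The counts used here are hypothetical, not the study's data:

```python
import math

def _xlogx(x):
    # Convention: 0 * log(0) = 0
    return 0.0 if x == 0 else x * math.log(x)

def _loglik(cases, total):
    # Bernoulli log-likelihood evaluated at its MLE rate cases/total
    return _xlogx(cases) + _xlogx(total - cases) - _xlogx(total)

def bernoulli_llr(c, n, C, N):
    """Kulldorff's Bernoulli-model log-likelihood ratio for one candidate zone.

    c: cases (e.g., injuries) inside the zone, n: all points inside the zone,
    C: total cases in the study area, N: total points (cases + controls).
    Returns 0 unless the zone's case fraction exceeds the outside fraction,
    i.e., only high-rate zones score."""
    if n == 0 or n == N:
        return 0.0
    if c / n <= (C - c) / (N - n):
        return 0.0
    return _loglik(c, n) + _loglik(C - c, N - n) - _loglik(C, N)

# Toy zone: 8 of the zone's 10 incidents are injuries, versus 20 of 100 overall
print(round(bernoulli_llr(8, 10, 20, 100), 3))
```

In a full scan the maximum of this statistic over all zones is compared against the maxima obtained under random relabelings of cases and controls, which is what makes the medical incidents usable as controls here.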

  9. Design approaches to experimental mediation

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  10. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  11. Compliance-Effect Correlation Bias in Instrumental Variables Estimators

    ERIC Educational Resources Information Center

    Reardon, Sean F.

    2010-01-01

    Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…

  12. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable on a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables on these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with an increase in the variation of φ1, while Rf for the unit weights of both soils (γ1 and γ2) and the friction angle of the foundation soil (φ2) remains almost constant for variations of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods, and the approach was found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
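
    The Monte Carlo estimation of Pf described above can be sketched for a generic limit state g = R − S; the normal distributions and moments below are hypothetical stand-ins for illustration, not the wall's actual failure modes, and the analytic check is only available because both variables are normal:

```python
import math
import random

def failure_probability(n_samples, rng):
    """Crude Monte Carlo estimate of Pf = P(g < 0) for the limit state
    g = R - S, with independent normal resistance R and load effect S.
    (Hypothetical stand-in for one retaining-wall failure mode.)"""
    mu_r, sd_r = 10.0, 1.0   # resistance
    mu_s, sd_s = 7.0, 1.0    # load effect
    fails = sum(1 for _ in range(n_samples)
                if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0)
    return fails / n_samples

# Analytic check: beta = (mu_r - mu_s) / sqrt(sd_r^2 + sd_s^2), Pf = Phi(-beta)
beta = 3.0 / math.sqrt(2.0)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
pf_mc = failure_probability(200_000, random.Random(1))
print(pf_mc, pf_exact)
```

For correlated or non-normal variables the analytic shortcut disappears, which is exactly when the sampling approach earns its keep.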

  13. Anderson localization for radial tree-like random quantum graphs

    NASA Astrophysics Data System (ADS)

    Hislop, Peter D.; Post, Olaf

    We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed (iid) random variables. For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.

  14. Echocardiographic features of the normofunctional Labcor-Santiago pericardial bioprosthesis.

    PubMed

    Gonzalez-Juanatey, J R; Garcia-Bengoechea, J B; Vega, M; Rubio, J; Sierra, J; Duran, D; Amaro, A; Gil, M

    1994-09-01

    Echocardiography was performed in 94 patients with a total of 99 normally functioning Labcor-Santiago bioprostheses, 62 in the aortic and 37 in the mitral position. The following variables were measured: peak and mean transvalvular velocities, peak and mean instantaneous pressure gradients as calculated from the modified Bernoulli equation, pressure half-time, cardiac index, stroke volume and effective orifice area (using continuity and Hatle equations). Regurgitation patterns were sought by transthoracic echocardiography (all valves) and, for selected mitral bioprostheses, by transesophageal echocardiography. Calculated mean aortic pressure gradient ranged from 6 to 10 mmHg, and calculated effective aortic orifice area increased with ring diameter, with means of 1.27 cm2 for the 19 mm valve and 2.58 cm2 for the 27 mm valve. For mitral bioprostheses, mean pressure gradient ranged from 3.0 to 4.5 mmHg and calculated effective orifice area from 2.27 to 2.73 cm2. Only central regurgitation was observed. The Labcor-Santiago pericardial bioprostheses created little resistance to forward flow. In the small aortic root their hemodynamic performance was as good as or better than that of other currently available devices. It is hoped that this new design will contribute to increased in vivo mechanical durability.
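
    The two Doppler relations used in the study, the modified Bernoulli equation and the continuity equation, reduce to one-line formulas. The example values below are hypothetical, not patient data from the series:

```python
def pressure_gradient_mmhg(velocity_ms):
    """Modified (simplified) Bernoulli equation used in Doppler echo:
    dP [mmHg] ~= 4 * v^2, with the transvalvular velocity v in m/s."""
    return 4.0 * velocity_ms ** 2

def effective_orifice_area_cm2(stroke_volume_ml, vti_cm):
    """Continuity equation: EOA [cm^2] = stroke volume [mL] / VTI [cm],
    where VTI is the velocity-time integral of the transprosthetic jet."""
    return stroke_volume_ml / vti_cm

# Hypothetical reading: 1.9 m/s peak velocity, stroke volume 77 mL, VTI 30 cm
print(pressure_gradient_mmhg(1.9))         # peak gradient in mmHg
print(effective_orifice_area_cm2(77, 30))  # EOA in cm^2
```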

  15. Vibrational analysis of vertical axis wind turbine blades

    NASA Astrophysics Data System (ADS)

    Kapucu, Onur

    The goal of this research is to derive a vibration model for a vertical axis wind turbine blade. The model accommodates the effects of the varying relative flow angle caused by rotating the blade in the flow field, uses a simple aerodynamic model that assumes constant wind speed and constant rotation rate, and neglects the disturbance of the wind due to an upstream blade or post. The blade is modeled as an elastic Euler-Bernoulli beam under transverse bending and twist deflections. Kinetic and potential energy equations for a rotating blade under deflections are obtained, expressed in terms of assumed modal coordinates, and then substituted into Lagrange's equations, where the non-conservative forces are the lift and drag forces and moments. An aeroelastic model for the lift and drag forces on the blade, approximated with third-degree polynomials, is obtained assuming an airfoil under variable angle of attack and airflow magnitudes. A simplified quasi-static airfoil theory is used, in which the lift and drag coefficients do not depend on the history of the changing angle of attack. Linear terms in the resulting equations of motion are used to conduct a numerical analysis and simulation, with numeric specifications adapted from the Sandia 17-m Darrieus wind turbine by Sandia Laboratories.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghafarian, M.; Ariaei, A., E-mail: ariaei@eng.ui.ac.ir

    The free vibration analysis of a system of multiple rotating nanobeams, applying the nonlocal Eringen elasticity theory, is presented. Multiple-nanobeam systems are of great importance in nano-optomechanical applications. At the nanoscale, the nonlocal effects become non-negligible. According to the nonlocal Euler-Bernoulli beam theory, the governing partial differential equations are derived by incorporating the nonlocal scale effects. Assuming a structure of n parallel nanobeams, the vibration of the system is described by a coupled set of n partial differential equations. The method involves a change of variables to uncouple the equations and the differential transform method as an efficient mathematical technique to solve the nonlocal governing differential equations. Then a number of parametric studies are conducted to assess the effect of the nonlocal scaling parameter, rotational speed, boundary conditions, hub radius, and the stiffness coefficients of the elastic interlayer media on the vibration behavior of the coupled rotating multiple-carbon-nanotube-beam system. It is revealed that the bending vibration of the system is significantly influenced by the rotational speed, elastic media, and the nonlocal scaling parameters. This model is validated by comparing the results with those available in the literature. The natural frequencies are in reasonably good agreement with the reported results.

  17. One-dimensional model of inertial pumping

    NASA Astrophysics Data System (ADS)

    Kornilovitch, Pavel E.; Govyadinov, Alexander N.; Markel, David P.; Torniainen, Erik D.

    2013-02-01

    A one-dimensional model of inertial pumping is introduced and solved. The pump is driven by a high-pressure vapor bubble generated by a microheater positioned asymmetrically in a microchannel. The bubble is approximated as a short-term impulse delivered to the two fluidic columns inside the channel. Fluid dynamics is described by a Newton-like equation with a variable mass, but without the mass derivative term. Because of smaller inertia, the short column refills the channel faster and accumulates a larger mechanical momentum. After bubble collapse the total fluid momentum is nonzero, resulting in a net flow. Two different versions of the model are analyzed in detail, analytically and numerically. In the symmetrical model, the pressure at the channel-reservoir connection plane is assumed constant, whereas in the asymmetrical model it is reduced by a Bernoulli term. For low and intermediate vapor bubble pressures, both models predict the existence of an optimal microheater location. The predicted net flow in the asymmetrical model is smaller by a factor of about 2. For unphysically large vapor pressures, the asymmetrical model predicts saturation of the effect, while in the symmetrical model net flow increases indefinitely. Pumping is reduced by nonzero viscosity, but to a different degree depending on the microheater location.

  18. Numerical Modeling of Cavitating Venturi: A Flow Control Element of Propulsion System

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Saxon, Jeff (Technical Monitor)

    2002-01-01

    In a propulsion system, the propellant flow and mixture ratio can be controlled either by variable-area flow control valves or by passive flow control elements such as cavitating venturis. Cavitating venturis maintain a constant propellant flowrate for fixed inlet conditions (pressure and temperature) over a wide range of outlet pressures, thereby maintaining constant engine thrust and mixture ratio. The flowrate through the venturi reaches a constant value and becomes independent of outlet pressure when the pressure at the throat becomes equal to the vapor pressure. In order to develop a numerical model of a propulsion system, it is necessary to model the cavitating venturis in the propellant feed systems. This paper presents a finite volume model of the flow network of a cavitating venturi. The venturi was discretized into a number of control volumes, and the mass, momentum and energy conservation equations in each control volume are solved simultaneously to calculate one-dimensional pressure, density, flowrate and temperature distributions. The numerical model predicts cavitation at the throat when the outlet pressure is gradually reduced. Once cavitation starts, further reduction of the downstream pressure produces no change in flowrate. The numerical predictions have been compared with test data and an empirical equation based on Bernoulli's equation.
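
    The choking behavior described above can be sketched with an idealized incompressible Bernoulli model. This sketch crudely treats the outlet pressure as the throat back-pressure (neglecting diffuser pressure recovery) and uses hypothetical fluid and geometry values, so it illustrates the principle rather than reproducing the paper's finite volume model:

```python
import math

def venturi_flowrate(p_in, p_out, p_vapor, rho, a_throat, cd=0.95):
    """Idealized cavitating-venturi mass flow (kg/s) from Bernoulli's equation.
    The throat pressure cannot drop below the vapor pressure, so once the
    back-pressure falls below p_vapor the flowrate 'chokes' at a constant
    value independent of further outlet-pressure reduction."""
    p_throat = max(p_out, p_vapor)
    dp = p_in - p_throat
    if dp <= 0.0:
        return 0.0
    return cd * a_throat * math.sqrt(2.0 * rho * dp)

rho, a = 1000.0, 1e-5          # water, 10 mm^2 throat (hypothetical)
p_in, p_v = 5e5, 2.3e3         # 500 kPa inlet, ~2.3 kPa vapor pressure

m1 = venturi_flowrate(p_in, 2.0e5, p_v, rho, a)  # unchoked regime
m2 = venturi_flowrate(p_in, 1.0e3, p_v, rho, a)  # back-pressure below vapor: choked
m3 = venturi_flowrate(p_in, 0.0,   p_v, rho, a)  # same choked value
```

The constant choked value, cd · A · sqrt(2 ρ (p_in − p_vapor)), is what makes the venturi act as a passive flow regulator.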

  19. One-dimensional model of inertial pumping.

    PubMed

    Kornilovitch, Pavel E; Govyadinov, Alexander N; Markel, David P; Torniainen, Erik D

    2013-02-01

    A one-dimensional model of inertial pumping is introduced and solved. The pump is driven by a high-pressure vapor bubble generated by a microheater positioned asymmetrically in a microchannel. The bubble is approximated as a short-term impulse delivered to the two fluidic columns inside the channel. Fluid dynamics is described by a Newton-like equation with a variable mass, but without the mass derivative term. Because of smaller inertia, the short column refills the channel faster and accumulates a larger mechanical momentum. After bubble collapse the total fluid momentum is nonzero, resulting in a net flow. Two different versions of the model are analyzed in detail, analytically and numerically. In the symmetrical model, the pressure at the channel-reservoir connection plane is assumed constant, whereas in the asymmetrical model it is reduced by a Bernoulli term. For low and intermediate vapor bubble pressures, both models predict the existence of an optimal microheater location. The predicted net flow in the asymmetrical model is smaller by a factor of about 2. For unphysically large vapor pressures, the asymmetrical model predicts saturation of the effect, while in the symmetrical model net flow increases indefinitely. Pumping is reduced by nonzero viscosity, but to a different degree depending on the microheater location.

  20. Quantum gas in the fast forward scheme of adiabatically expanding cavities: Force and equation of state

    NASA Astrophysics Data System (ADS)

    Babajanova, Gulmira; Matrasulov, Jasur; Nakamura, Katsuhiro

    2018-04-01

    With use of the scheme of fast forward, which realizes quasistatic or adiabatic dynamics in a shortened timescale, we investigate a thermally isolated ideal quantum gas confined in a rapidly dilating one-dimensional (1D) cavity with the time-dependent size L = L(t). In the fast-forward variants of the equations of state, i.e., Bernoulli's formula and Poisson's adiabatic equation, the force or 1D analog of pressure can be expressed as a function of the velocity (L̇) and acceleration (L̈) of L besides rapidly changing state variables like the effective temperature (T) and L itself. The force is now a sum of nonadiabatic (NAD) and adiabatic contributions, with the former caused by particles moving synchronously with the kinetics of L and the latter by ideal bulk particles insensitive to such kinetics. The ratio of the NAD and adiabatic contributions does not depend on the particle number (N) in the case of the soft-wall confinement, whereas such a ratio is controllable in the case of hard-wall confinement. We also reveal the condition when the NAD contribution overwhelms the adiabatic one and thoroughly changes the standard form of the equilibrium equation of state.

  1. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    NASA Astrophysics Data System (ADS)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Lectures in introductory fluid mechanics at NIU are a combination of students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or the extra paper assigned by the instructor. During the past three years, honors students in my class have instead collaborated to design wet-lab experiments for their peers to predict variable volume flow rates of open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation early in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor-loss features such as sudden contractions or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined to test the flexibility of the student teams' computational programs. Bridging theory with practice engaged all of the students, and multiple teams were able to predict performance within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.
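
    The prediction task given to the students can be sketched as a gravity-driven draining tank with a lumped loss coefficient in the Bernoulli head-loss balance. The geometry and loss values below are hypothetical, and the forward-Euler integration is a minimal stand-in for the students' programs:

```python
import math

def drain_time(h0, tank_area, orifice_area, k_loss=0.0, g=9.81, dt=0.01):
    """Time (s) to drain a tank from head h0 through an orifice, using the
    Bernoulli head-loss balance v = sqrt(2 g h / (1 + K)) and forward Euler
    on dh/dt = -(a/A) v. K lumps minor losses; K = 0 recovers Torricelli."""
    h, t = h0, 0.0
    while h > 0.0:
        v = math.sqrt(2.0 * g * h / (1.0 + k_loss))
        h -= (orifice_area / tank_area) * v * dt
        t += dt
    return t

# Torricelli check: the exact ideal drain time is (A/a) * sqrt(2 h0 / g)
t_ideal = drain_time(1.0, 1.0, 1e-3)
t_exact = (1.0 / 1e-3) * math.sqrt(2.0 * 1.0 / 9.81)
t_lossy = drain_time(1.0, 1.0, 1e-3, k_loss=4.0)  # minor losses slow the drain
```

Adding fittings (unions, valves, contractions) simply increases K, which is how the measured drain times discriminate between the teams' loss models.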

  2. Design optimization of a brush turbine with a cleaner/water based solution

    NASA Technical Reports Server (NTRS)

    Kim, Rhyn H.

    1994-01-01

    Recently, a fluid turbine with a brush attached to it was designed and tested with water as the working fluid. The purpose of the turbine-brush is to clean up fouling in a tube. The Montreal Protocol prohibits, from 1996, the use of CFC products as cleansers in the refrigeration industry and in industry in general. Alternatives for the cleansers, devices, or a combination of alternative devices with a cleanser must therefore be found. One such method is to develop a device that cleans fouling with a cleaning medium. In this paper, we describe a turbine connected to a brush; the device, a small liquid turbine combined with a brush, should be simple and easy to install. The turbine is activated by the liquid flowing through the tube and turns the brush, cleaning fouling along the tube. Based on energy conservation and the Bernoulli equation, along with an empirical relationship for drag force obtained from an experimental apparatus, a relationship among the rotational speed, the number of blades, and the geometric variables of the turbine-brush was obtained. The predicted rotational speeds were compared with the experimental observations. Further work was recommended for improvements.

  3. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
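
    The split-point selection via maximally selected rank statistics can be sketched as follows. The Bonferroni adjustment here is a crude, conservative stand-in for the sharper p-value approximations the paper employs, and the data are synthetic with no ties:

```python
import math

def norm_sf(z):
    """Standard normal upper-tail probability."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def max_selected_rank(x, y, min_group=3):
    """Maximally selected Wilcoxon rank statistic over candidate cutpoints of x.
    Returns (best cutpoint, max |z|, Bonferroni-adjusted two-sided p-value).
    Assumes no ties in y; the Bonferroni step over the number of candidates is
    a crude stand-in for better approximations (e.g., Lausen-Schumacher)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: y[i])
    rank = [0] * n
    for r, i in enumerate(order, start=1):
        rank[i] = r
    xs = sorted(set(x))
    best_cut, best_z, n_candidates = None, -1.0, 0
    for a, b in zip(xs, xs[1:]):
        cut = (a + b) / 2.0
        left = [i for i in range(n) if x[i] <= cut]
        n1, n2 = len(left), n - len(left)
        if n1 < min_group or n2 < min_group:
            continue                                   # keep both groups usable
        n_candidates += 1
        s = sum(rank[i] for i in left)                 # rank sum in the left group
        mean = n1 * (n + 1) / 2.0
        sd = math.sqrt(n1 * n2 * (n + 1) / 12.0)
        z = abs(s - mean) / sd
        if z > best_z:
            best_cut, best_z = cut, z
    p_adj = min(1.0, n_candidates * 2.0 * norm_sf(best_z))
    return best_cut, best_z, p_adj

# Synthetic covariate with a genuine threshold effect between x = 9 and x = 10
x = list(range(20))
y = [0.01 * i for i in range(10)] + [100.0 + i for i in range(10, 20)]
cut, z, p = max_selected_rank(x, y)
print(cut, round(z, 3), p)  # the recovered cutpoint sits at the true threshold
```

Because every candidate cutpoint is tested on the p-value scale, variables with many split points are not automatically favored, which is the bias the paper sets out to remove.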

  4. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation demonstrates the usefulness of the dimension-reduction representation methods.
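
    The classical spectral representation that the dimension-reduction schemes build on can be sketched directly. This baseline version draws one random phase per frequency, exactly the many random variables the paper's DR methods avoid, with a hypothetical band-limited flat PSD:

```python
import math, random

def srm_sample(psd, omega_max, n_freq, t, rng):
    """One sample of X(t) via the classical spectral representation method:
    X(t) = sum_k sqrt(2 G(w_k) dw) cos(w_k t + phi_k), phi_k ~ U(0, 2 pi),
    where G is the one-sided power spectral density."""
    dw = omega_max / n_freq
    x = 0.0
    for k in range(n_freq):
        w = (k + 0.5) * dw
        amp = math.sqrt(2.0 * psd(w) * dw)
        x += amp * math.cos(w * t + rng.uniform(0.0, 2.0 * math.pi))
    return x

# Hypothetical band-limited flat PSD normalized so the target variance is 1.
omega_u = 4.0
G = lambda w: 1.0 / omega_u

rng = random.Random(42)
samples = [srm_sample(G, omega_u, 64, 1.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(round(mean, 3), round(var, 3))  # sample mean ~ 0, sample variance ~ 1
```

The ensemble variance matches the integral of the PSD, confirming the representation; the DR methods then replace the 64 independent phases with a few elementary random variables through constraining random functions.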

  5. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    ERIC Educational Resources Information Center

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
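
    The distinction between variable and random schedules drawn above can be sketched outside Excel. The Python functions below are an assumption of this note, not the authors' macros: a random-ratio schedule reinforces each response with constant probability (geometric requirements), and a random-interval schedule makes reinforcement available with constant probability per instant (exponential intervals):

```python
import random

def random_ratio_requirements(mean_ratio, n, rng):
    """Response requirements for a random-ratio (RR) schedule: every response
    is reinforced with probability 1/mean_ratio, so the number of responses
    per reinforcer is geometrically distributed with mean mean_ratio."""
    p = 1.0 / mean_ratio
    reqs = []
    for _ in range(n):
        count = 1
        while rng.random() >= p:
            count += 1
        reqs.append(count)
    return reqs

def random_interval_times(mean_interval, n, rng):
    """Inter-reinforcement setup times for a random-interval (RI) schedule:
    constant hazard of reinforcement becoming available, i.e. exponentially
    distributed intervals with the given mean."""
    return [rng.expovariate(1.0 / mean_interval) for _ in range(n)]

rng = random.Random(7)
rr = random_ratio_requirements(10, 5000, rng)  # an RR 10 schedule
ri = random_interval_times(20.0, 5000, rng)    # an RI 20-s schedule
```

A generic variable schedule, by contrast, could draw its requirements from any list of ratios or intervals with the desired mean, not necessarily from these memoryless distributions.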

  6. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of the fuzzy random variables in the objectives and constraints is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example based on data from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  7. SATA Stochastic Algebraic Topology and Applications

    DTIC Science & Technology

    2017-01-23

    Harris et al., "Selective sampling after solving a convex problem," arXiv:1609.05609 [math, stat] (Sept. 2016). 13. Baryshnikov...Functions, Adv. Math. 245, 573-586, 2014. 15. Y. Baryshnikov, Liberzon, Daniel, Robust stability conditions for switched linear systems: Commutator bounds...Consistency via Kernel Estimation, arXiv:1407.5272 [math, stat] (July 2014), to appear in Bernoulli. 18. O. Bobrowski and S. Weinberger

  8. The time resolution of the St Petersburg paradox

    PubMed Central

    Peters, Ole

    2011-01-01

    A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
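
    The time-average computation can be sketched under one common convention (payout 2^k with probability 2^-k, wealth multiplied round by round); this is an illustrative reading of the approach, not Peters' exact derivation. It reproduces the qualitative result that a finite, wealth-dependent ticket price is justified without invoking any utility function:

```python
import math

def time_average_growth(wealth, price, kmax=64):
    """Expected log growth of wealth from one St Petersburg ticket
    (payout 2^k with probability 2^-k), i.e. the time-average growth
    rate under repeated multiplicative play. Truncated at kmax, where
    the omitted tail contribution is negligible."""
    g = 0.0
    for k in range(1, kmax + 1):
        payout = 2.0 ** k
        g += 2.0 ** -k * math.log((wealth - price + payout) / wealth)
    return g

# A ticket priced far below wealth grows wealth over time;
# staking the entire wealth on one ticket shrinks it.
g_cheap = time_average_growth(1000.0, 2.0)
g_all_in = time_average_growth(1000.0, 1000.0)
print(g_cheap, g_all_in)
```

The break-even price, where the expected log growth crosses zero, plays the role that logarithmic utility plays in Daniel Bernoulli's resolution, but it emerges here from the dynamics alone.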

  9. Approximation techniques for parameter estimation and feedback control for distributed models of large flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1984-01-01

    Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.

  10. Mid-IR Lasers: Challenges Imposed by the Population Dynamics of the Gain System

    DTIC Science & Technology

    2010-09-01

    MicroSystems (IOMS) Central-Field Approximation: Perturbations 1. a) Non-centrosymmetric splitting (Coulomb interaction) ⇒ total orbital angular momentum b...Accordingly: ⇒ total electron-spin momentum 2. Spin-orbit coupling (“LS” coupling) ⇒ total angular momentum lanthanides: intermediate coupling (LS/jj) 3...MicroSystems (IOMS) Luminescence Decay Curves Rate-equation for decay: Solution (Bernoulli Eq.): Linearized solution: T. Jensen, Ph.D. Thesis, Univ. Hamburg

  11. 1998 Physical Acoustics Summer School (PASS 98). Volume III: Background Materials.

    DTIC Science & Technology

    1998-01-01

    propagating hydrodynamic soliton ■ Shock waves, N waves, and sound eating sound ■ Acoustic Bernoulli effect ■ Acoustic levitation ■ Acoustic match ...cess. The resulting saturation values are given in the diagrams and nicely match the values given in (10). Delay reconstructions using the experimen...VOLUME 47, NUMBER 20 PHYSICAL REVIEW LETTERS 16 NOVEMBER 1981 oscillations of the driving sound field match three oscillations of the natural

  12. Computational Methods for Design, Control and Optimization

    DTIC Science & Technology

    2007-10-01

    Krueger Eugene M. Cliff Hoan Nguyen Traian Iliescu John Singler James Vance Eric Vugrin Adam Childers Dan Sutton References [1] J. T. Borggaard, S...Control, 45th IEEE Conference on Decision and Control, accepted. [11] L. C. Berselli, T. Iliescu and W. J. Layton, Mathematics of Large Eddy...Daniel Inman, Eric Ruggiero and John Singler, Finite Element Formulation for Static Control of a Thin Euler-Bernoulli Beam Using Piezoelectric

  13. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livadiotis, G., E-mail: glivadiotis@swri.edu

    2016-03-15

This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  14. Micropolar curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

New models for micropolar plane curved rods have been developed. The 2-D theory is developed from the general 2-D equations of linear micropolar elasticity using a special curvilinear system of coordinates related to the middle line of the rod, together with special hypotheses based on assumptions that take into account the fact that the rod is thin. The high order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, the stress and strain tensors, vectors of displacements and rotation, and body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby all equations of elasticity, including Hooke's law, have been transformed to the corresponding equations for the Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for the Fourier coefficients has been obtained. The Timoshenko and Euler-Bernoulli theories are based on the classical hypotheses and the 2-D equations of linear micropolar elasticity in a special curvilinear system. The obtained equations can be used to calculate the stress-strain state and to model thin walled structures at macro, micro and nano scales when taking into account micropolar couple stress and rotation effects.

  15. Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.

    PubMed

    Decker, Gifford Z; Thomson, Scott L

    2007-05-01

    The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.

  16. Doppler echo evaluation of pulmonary venous-left atrial pressure gradients: human and numerical model studies

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Greenberg, N. L.; Smedira, N. G.; Prior, D. L.; Scalia, G. M.; Thomas, J. D.; Garcia, M. J.

    2000-01-01

The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates the actual gradient because of large inertial forces.
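    In clinical practice the simplified Bernoulli equation reduces to ΔP ≈ 4v², with ΔP in mmHg and v in m/s. A minimal sketch of the abstract's workflow, regressing the convective estimate against a measured gradient, using hypothetical velocity/pressure pairs rather than the study's data:

    ```python
    import numpy as np

    def simplified_bernoulli(v_mps):
        """Convective pressure gradient in mmHg from a velocity in m/s (dP ~= 4*v^2)."""
        return 4.0 * np.asarray(v_mps) ** 2

    # Hypothetical S-wave data: Doppler peak velocities (m/s) and
    # catheter-measured pressure differences (mmHg).
    v = np.array([0.4, 0.5, 0.6, 0.7, 0.8])
    measured = np.array([4.0, 6.5, 8.0, 10.5, 13.0])

    convective = simplified_bernoulli(v)
    # Fit convective (y) against measured (x), as in the abstract's y = ax + b form.
    slope, intercept = np.polyfit(measured, convective, 1)
    r = np.corrcoef(measured, convective)[0, 1]
    print(slope, intercept, r)
    ```

    A slope well below 1 together with a high r reproduces the paper's qualitative finding: the convective term correlates strongly with, but underestimates, the actual gradient.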

  17. A novel approach to enhance the accuracy of vibration control of Frames

    NASA Astrophysics Data System (ADS)

    Toloue, Iraj; Shahir Liew, Mohd; Harahap, I. S. H.; Lee, H. E.

    2018-03-01

All structures built within known seismically active regions are typically designed to endure earthquake forces. Despite advances in earthquake-resistant structures, it can be inferred from hindsight that no structure is entirely immune to damage from earthquakes. Active vibration control systems, unlike the traditional methods which enlarge beams and columns, are highly effective countermeasures to reduce the effects of earthquake loading on a structure. They require fast computation of nonlinear structural analysis in near real-time, which has historically demanded advanced programming hosted on powerful computers. This research aims to develop a new approach for active vibration control of frames which is applicable over both elastic and plastic material behavior. In this study, the Force Analogy Method (FAM), which is based on Hooke's law, is further extended using the Timoshenko element, which considers shear deformations, to increase the reliability and accuracy of the controller. The proposed algorithm is applied to a 2D portal frame equipped with a linear actuator, which is designed based on a full-state Linear Quadratic Regulator (LQR). For comparison purposes, the portal frame is analysed with both the Euler-Bernoulli and Timoshenko elements. The results clearly demonstrate the superiority of the Timoshenko element over Euler-Bernoulli for application in nonlinear analysis.

  18. Hydraulic pressures generated in magnetic ionic liquids by paramagnetic fluid/air interfaces inside of uniform tangential magnetic fields.

    PubMed

    Scovazzo, Paul; Portugal, Carla A M; Rosatella, Andreia A; Afonso, Carlos A M; Crespo, João G

    2014-08-15

Magnetic Ionic Liquids (MILs), novel magnetic molecules that form "pure magnetic liquids," will follow the Ferrohydrodynamic Bernoulli Relationship. Based on recent literature, the modeling of this fluid system is an open issue and potentially controversial. We imposed uniform magnetic fields parallel to MIL/air interfaces where the capillary forces were negligible, the Quincke Problem. The size and location of the bulk fluid as well as the size and location of the fluid/air interface inside of the magnetic field were varied. The MIL properties varied included density, magnetic susceptibility, chemical structure, and magnetic element. Uniform tangential magnetic fields pulled the MILs up counter to gravity. The forces per area were not a function of the volume, the surface area inside of the magnetic field, or the volume displacement. However, the presence of fluid/air interfaces was necessary for the phenomena. The Ferrohydrodynamic Bernoulli Relationship predicted the phenomena, with the forces being directly related to the fluid's volumetric magnetic susceptibility and the square of the magnetic field strength. [emim][FeCl4] generated the greatest hydraulic head (64-mm or 910 Pa at 1.627 Tesla). This work could aid in experimental design, when free surfaces are involved, and in the development of MIL applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Doppler echo evaluation of pulmonary venous-left atrial pressure gradients: human and numerical model studies.

    PubMed

    Firstenberg, M S; Greenberg, N L; Smedira, N G; Prior, D L; Scalia, G M; Thomas, J D; Garcia, M J

    2000-08-01

The simplified Bernoulli equation relates fluid convective energy derived from flow velocities to a pressure gradient and is commonly used in clinical echocardiography to determine pressure differences across stenotic orifices. Its application to pulmonary venous flow has not been described in humans. Twelve patients undergoing cardiac surgery had simultaneous high-fidelity pulmonary venous and left atrial pressure measurements and pulmonary venous pulsed Doppler echocardiography performed. Convective gradients for the systolic (S), diastolic (D), and atrial reversal (AR) phases of pulmonary venous flow were determined using the simplified Bernoulli equation and correlated with measured actual pressure differences. A linear relationship was observed between the convective (y) and actual (x) pressure differences for the S (y = 0.23x + 0.0074, r = 0.82) and D (y = 0.22x + 0.092, r = 0.81) waves, but not for the AR wave (y = 0.030x + 0.13, r = 0.10). Numerical modeling resulted in similar slopes for the S (y = 0.200x - 0.127, r = 0.97), D (y = 0.247x - 0.354, r = 0.99), and AR (y = 0.087x - 0.083, r = 0.96) waves. Consistent with numerical modeling, the convective term strongly correlates with but significantly underestimates the actual gradient because of large inertial forces.

  20. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  1. Endoscopic evaluation of therapeutic effects of "Anuloma-Viloma Pranayama" in Pratishyaya w.s.r. to mucociliary clearance mechanism and Bernoulli's principle.

    PubMed

    Bhardwaj, Atul; Sharma, Mahendra Kumar; Gupta, Manoj

    2013-10-01

The current endeavor intended to evaluate the effectiveness and mode of action of Anuloma-Viloma Pranayama (AVP), i.e., alternate nasal breathing exercise, in resolving clinical features of Pratishyaya, i.e., rhinosinusitis. The present study was directed to validate the use of the classical "saccharin test" in measuring nasal health by measuring mucociliary clearance time. This study also highlights the effects of AVP by application of the Bernoulli principle in ventilation of paranasal sinuses and surface oxygenation of the nasal and paranasal sinus ciliary epithelium. Clinically, endoscopically and radiologically diagnosed patients of Pratishyaya, i.e., rhinosinusitis, satisfying the inclusion criteria were selected to perform AVP as a breathing exercise regularly for 30 min every day in order to evaluate the effectiveness of AVP in resolving features of rhinosinusitis. The saccharin test was performed before and after completion of the 40-day trial to assess nasal ciliary activity, which has been proved to be directly related to the health of the ciliary epithelium and to overall nasal health as well. AVP may be regarded as a catalyst to conspicuously enhance ventilation and oxygenation of the paranasal sinuses and to positively affect the nasal respiratory epithelium by increasing the surface availability of oxygen and the negative pressure in the nasal cavity itself.

  2. Endoscopic evaluation of therapeutic effects of “Anuloma-Viloma Pranayama” in Pratishyaya w.s.r. to mucociliary clearance mechanism and Bernoulli's principle

    PubMed Central

    Bhardwaj, Atul; Sharma, Mahendra Kumar; Gupta, Manoj

    2013-01-01

The current endeavor intended to evaluate the effectiveness and mode of action of Anuloma-Viloma Pranayama (AVP), i.e., alternate nasal breathing exercise, in resolving clinical features of Pratishyaya, i.e., rhinosinusitis. The present study was directed to validate the use of the classical "saccharin test" in measuring nasal health by measuring mucociliary clearance time. This study also highlights the effects of AVP by application of the Bernoulli principle in ventilation of paranasal sinuses and surface oxygenation of the nasal and paranasal sinus ciliary epithelium. Clinically, endoscopically and radiologically diagnosed patients of Pratishyaya, i.e., rhinosinusitis, satisfying the inclusion criteria were selected to perform AVP as a breathing exercise regularly for 30 min every day in order to evaluate the effectiveness of AVP in resolving features of rhinosinusitis. The saccharin test was performed before and after completion of the 40-day trial to assess nasal ciliary activity, which has been proved to be directly related to the health of the ciliary epithelium and to overall nasal health as well. AVP may be regarded as a catalyst to conspicuously enhance ventilation and oxygenation of the paranasal sinuses and to positively affect the nasal respiratory epithelium by increasing the surface availability of oxygen and the negative pressure in the nasal cavity itself. PMID:24696572

  3. Multiple Scale Analysis of the Dynamic State Index (DSI)

    NASA Astrophysics Data System (ADS)

    Müller, A.; Névir, P.

    2016-12-01

The Dynamic State Index (DSI) is a novel parameter that indicates local deviations of the atmospheric flow field from a stationary, inviscid and adiabatic solution of the primitive equations of fluid mechanics. This is in contrast to classical methods, which often diagnose deviations from temporal or spatial mean states. We show some applications of the DSI to atmospheric flow phenomena on different scales. The DSI is derived from the Energy-Vorticity-Theory (EVT), which is based on two globally conserved quantities, the total energy and Ertel's potential enstrophy. Locally, these global quantities lead to the Bernoulli function and the potential vorticity (PV), which together with the potential temperature build the DSI. If the Bernoulli function and the PV are balanced, the DSI vanishes and the basic state is obtained. Deviations from the basic state provide an indication of diabatic and non-stationary weather events. Therefore, the DSI offers a tool to diagnose, and even predict, different atmospheric events on different scales. On the synoptic scale, the DSI can help to diagnose storms and hurricanes, where the dipole structure of the DSI also plays an important role. In the scope of the collaborative research center "Scaling Cascades in Complex Systems" we show high correlations between the DSI and precipitation on the convective scale. Moreover, we compare the results with reduced models and different spatial resolutions.

  4. Random Variables: Simulations and Surprising Connections.

    ERIC Educational Resources Information Center

    Quinn, Robert J.; Tomlinson, Stephen

    1999-01-01

    Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)
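    The empirical-versus-theoretical comparison these activities describe can be simulated directly. A minimal sketch for one such random variable, the sum of two fair dice; the event and trial count are chosen for illustration:

    ```python
    import random

    random.seed(42)  # reproducible run

    def empirical_probability(event, trials=100_000):
        """Estimate P(event) as the fraction of Monte Carlo trials where it occurs."""
        hits = sum(event() for _ in range(trials))
        return hits / trials

    # Random variable: sum of two fair six-sided dice.
    # Theoretical P(sum == 7) = 6/36, since 6 of the 36 ordered outcomes sum to 7.
    p_hat = empirical_probability(
        lambda: random.randint(1, 6) + random.randint(1, 6) == 7
    )
    theoretical = 6 / 36
    print(p_hat, theoretical)
    ```

    With 100,000 trials the empirical estimate lands within about ±0.004 of 1/6, which is the "surprising connection" between simulation and theory the activities aim at.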

  5. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
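    The core trick of pulse-mode stochastic computing, values encoded as average pulse occurrence rates so that a single AND gate multiplies two probabilities, can be sketched in a few lines. This is a software emulation with assumed input rates (0.6 and 0.5), not the paper's VHDL model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pulse_stream(p, n):
        """Bernoulli pulse sequence whose average pulse rate encodes a value p in [0, 1]."""
        return rng.random(n) < p

    n = 200_000
    a = pulse_stream(0.6, n)
    b = pulse_stream(0.5, n)

    # An AND gate applied to two independent streams yields a stream whose
    # rate is p_a * p_b: multiplication costs one gate in pulse-mode hardware.
    product_stream = a & b
    print(product_stream.mean())  # close to 0.6 * 0.5 = 0.30
    ```

    The finite stream length is also where the paper's noise analysis enters: the estimate's variance shrinks as the sequence grows, trading latency for precision.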

  6. Modeling the diffusion of complex innovations as a process of opinion formation through social networks.

    PubMed

    Assenova, Valentina A

    2018-01-01

Complex innovations, i.e., ideas, practices, and technologies that hold uncertain benefits for potential adopters, often vary in their ability to diffuse in different communities over time. To explain why, I develop a model of innovation adoption in which agents engage in naïve (DeGroot) learning about the value of an innovation within their social networks. Using simulations on Bernoulli random graphs, I examine how adoption varies with network properties and with the distribution of initial opinions and adoption thresholds. The results show that: (i) low-density and high-asymmetry networks produce polarization in influence to adopt an innovation over time, (ii) increasing network density and asymmetry promote adoption under a variety of opinion and threshold distributions, and (iii) the optimal levels of density and asymmetry in networks depend on the distribution of thresholds: networks with high density (>0.25) and high asymmetry (>0.50) are optimal for maximizing diffusion when adoption thresholds are right-skewed (i.e., barriers to adoption are low), but networks with low density (<0.01) and low asymmetry (<0.25) are optimal when thresholds are left-skewed. I draw on data from a diffusion field experiment to predict adoption over time and compare the results to observed outcomes.
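    A minimal sketch of the model's ingredients: naïve DeGroot averaging on a directed Bernoulli (Erdős–Rényi) random graph with a fixed adoption threshold. All parameter values here are illustrative, not those of the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def degroot_adoption(n=100, density=0.1, steps=50, threshold=0.5):
        """Naive (DeGroot) learning on a directed Bernoulli random graph.

        Each agent repeatedly replaces its opinion about the innovation's value
        with the average of its neighbors' opinions; it adopts once its opinion
        exceeds the threshold. Returns the final fraction of adopters.
        """
        adj = (rng.random((n, n)) < density).astype(float)  # Bernoulli edges
        np.fill_diagonal(adj, 1.0)            # agents also weight their own opinion
        trust = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic influence matrix
        opinions = rng.random(n)              # initial opinions in [0, 1]
        for _ in range(steps):
            opinions = trust @ opinions       # DeGroot update
        return float((opinions > threshold).mean())

    print(degroot_adoption())
    ```

    Sweeping `density` (and making `adj` asymmetric) in this sketch is one way to probe the density/asymmetry effects the abstract reports.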

  7. Neighborhoods and Intimate Partner Violence Against Women: The Direct and Interactive Effects of Social Ties and Collective Efficacy.

    PubMed

    Wright, Emily M; Skubak Tillyer, Marie

    2017-06-01

    This study examines the impact of several indicators of neighborhood social ties (e.g., residents' interactions with each other; residents' ability to recognize outsiders) on intimate partner violence (IPV) against women as well as whether neighborhood collective efficacy's impact on IPV is contingent upon such ties. This study used data from 4,151 women (46% Latina, 33% African American, 17% Caucasian, on average 32 years old) in 80 neighborhoods from the Project on Human Development in Chicago Neighborhoods. We estimated a series of random effects hierarchical Bernoulli models to assess the main and interactive effects of neighborhood social ties and collective efficacy on minor and severe forms of IPV against women. Results indicate that certain neighborhood social ties are associated with higher rates of minor forms of IPV against women (but not severe forms of IPV), and collective efficacy does not appear to influence IPV against women, regardless of the level of individual or neighborhood social ties. Unlike street crime, collective efficacy does not significantly reduce IPV against women, even in neighborhoods with strong social ties that may facilitate awareness of the violence. In fact, perpetrators of minor IPV may enjoy some protective benefit in communities with social ties that make neighbors hesitant to intervene in what some might perceive as "private matters."

  8. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
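    The key property, a binomially distributed firing count that cannot exceed the molecules available, can be sketched for the simplest possible case: a single decay reaction, assumed here purely for illustration (the paper's method handles general reaction networks and stepsize control):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def binomial_leap(x0, c, tau, t_end):
        """Binomial tau-leap for the decay reaction S -> 0 with rate constant c.

        The number of firings per leap is Binomial(x, p) with p = min(1, c*tau),
        so it can never exceed the x molecules present -- unlike a Poisson leap,
        which can drive the population negative at large stepsizes.
        """
        x, t, traj = x0, 0.0, [x0]
        while t < t_end:
            k = rng.binomial(x, min(1.0, c * tau))  # firings bounded by x
            x -= k
            t += tau
            traj.append(x)
        return traj

    traj = binomial_leap(x0=1000, c=0.5, tau=0.1, t_end=10.0)
    print(traj[-1])  # population is non-negative by construction
    ```

    The finite range of the binomial distribution is exactly what the abstract exploits to allow larger stepsizes than Poisson leaping.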

  9. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  10. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
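    The proposed random-number device can be sketched on the data side: fill a manifest variable that is missing completely in one group with pseudo-random standard normal deviates, so multi-group software sees the same number of variables everywhere. This is a minimal sketch; the accompanying model constraints (e.g., treating that indicator as pure noise with a zero loading) are not shown:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def fill_missing_manifest(data, missing_col):
        """Replace a completely missing manifest variable with standard normal
        deviates. The multiple-group model must then be specified so these
        values carry no information (the random nature of the column is respected)."""
        out = data.copy()
        out[:, missing_col] = rng.standard_normal(len(out))
        return out

    # Hypothetical group with 4 intended indicators, the 4th missing completely.
    group = np.full((200, 4), np.nan)
    group[:, :3] = rng.standard_normal((200, 3))  # three observed indicators
    filled = fill_missing_manifest(group, missing_col=3)
    print(np.isnan(filled).any())  # False: no missing cells remain
    ```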

  11. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
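    The first step, α-cut discretization, can be sketched for a triangular fuzzy number, which collapses to an interval at each membership level α; the fuzzy parameter and its support below are hypothetical:

    ```python
    def alpha_cut(left, mode, right, alpha):
        """Interval of a triangular fuzzy number (left, mode, right) at level alpha.

        At alpha = 0 the cut is the full support [left, right]; at alpha = 1 it
        shrinks to the single most plausible value, mode.
        """
        lo = left + alpha * (mode - left)
        hi = right - alpha * (right - mode)
        return lo, hi

    # Hypothetical fuzzy friction coefficient: most plausible 0.4, support [0.3, 0.5].
    for alpha in (0.0, 0.5, 1.0):
        print(alpha, alpha_cut(0.3, 0.4, 0.5, alpha))
    ```

    Solving the stability problem over each such interval, then recomposing the results across α levels, is what yields the fuzzy reliability indexes described above.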

  12. Peste des Petits Ruminants risk factors and space-time clusters in Mymensingh, Bangladesh.

    PubMed

    Rony, M S; Rahman, A K M A; Alam, M M; Dhand, N; Ward, M P

    2017-12-01

Using a hospital-based case-control study design, our aim was to identify risk factors for, and space-time clusters of, Peste des Petits Ruminants (PPR) in Mymensingh, Bangladesh. Three hundred and eighty PPR cases diagnosed between January 2005 and December 2014 at the Bangladesh Agricultural University Veterinary Teaching Hospital (BAUVTH) were selected; three controls per case from BAUVTH were then selected (n = 1,048). From records, data extracted included information on date of report, location, age, breed, sex and body weight of goats. A mixed multivariable logistic regression model was built to identify risk factors. Location was included as a random effect and season and demographic variables as fixed effects. The approximate geographic coordinates of locations were collected, and the scan statistic (Bernoulli model) was used to identify space-time clusters of PPR. Compared with goats <4 months of age, the odds of PPR were 3 (95% confidence interval [CI]: 1.95-4.66), 1.9 (CI: 1.34-2.76) and 1.8 times (95% CI: 1.19-2.58) greater in goats aged 4-6, >6-12 and >12-24 months, respectively. The occurrence of PPR was also significantly higher (odds ratio [OR] 3.2; 95% CI: 1.15-8.59) in the Jamunapari breed than Black Bengals. Significantly higher odds of PPR were observed in winter (OR 1.6; 95% CI: 1.06-2.14) and the monsoon season (OR 1.5; 95% CI: 1.04-2.11) compared with the post-monsoon season. Two significant (p < .05) space-time clusters were identified between 2 December 2006 and 6 September 2007 (two locations) and 28 November 2006 and 13 February 2007 (five locations). Peste des Petits Ruminants is endemic in Bangladesh, but also occurs as discrete outbreaks. Control efforts, such as vaccination, should focus on high-risk groups (4-24 months of age, Jamunapari breed), prior to the onset of winter and the monsoon season so as to increase immunity during high-risk periods, and focus on disease hotspots. © 2017 Blackwell Verlag GmbH.

  13. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  14. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the…

  15. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
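
    The trial-and-repeat procedure described above is a plain Monte Carlo simulation. A minimal sketch, with all distributions and parameter values assumed for illustration (the paper's postulated probability functions are richer):

```python
import random
import statistics

random.seed(1)

def one_trial(wells=20, p_success=0.15, price=3.0, cost_per_well=1.0):
    """One simulated drilling program: a Bernoulli success draw per well,
    an (assumed) lognormal size for each discovery, then the net return.
    All distributions and parameters are illustrative, not the paper's."""
    successes = sum(random.random() < p_success for _ in range(wells))
    total_reserves = sum(random.lognormvariate(2.0, 1.0)
                         for _ in range(successes))
    return price * total_reserves - cost_per_well * wells

# Repeat the trial many times; the histogram of `returns` approximates the
# probability density of the program's net economic return.
returns = [one_trial() for _ in range(10_000)]
print("mean net return:", round(statistics.mean(returns), 2))
print("P(loss):", sum(r < 0 for r in returns) / len(returns))
```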

  16. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.

  17. A Note on the Application of the Extended Bernoulli Equation

    DTIC Science & Technology

    1999-02-01

    The extended Bernoulli equation is developed from the momentum equation, DV/Dt = -(1/ρ)∇p + ∇Ω + (1/ρ) s_ij,j (2), where D/Dt denotes the material derivative (discussed in the following section); V is the velocity vector; Ω is the force potential; ∇ is the vector gradient operator; s_ij is the deviatoric-stress tensor arising from any type of elasto-viscoplastic constitutive behavior; and s_ij,j is index notation for ∂s_ij/∂x_j, the vector condensation (divergence) of the deviatoric-stress tensor.

  18. A new experimental method for determining local airloads on rotor blades in forward flight

    NASA Astrophysics Data System (ADS)

    Berton, E.; Maresca, C.; Favier, D.

    This paper presents a new approach for determining local airloads on helicopter rotor blade sections in forward flight. The method is based on the momentum equation in which all the terms are expressed by means of the velocity field measured by a laser Doppler velocimeter. The relative magnitude of the different terms involved in the momentum and Bernoulli equations is estimated and the results are encouraging.

  19. A conserved quantity in thin body dynamics

    NASA Astrophysics Data System (ADS)

    Hanna, J. A.; Pendar, H.

    2016-02-01

    Thin, solid bodies with metric symmetries admit a restricted form of reparameterization invariance. Their dynamical equilibria include motions with both rigid and flowing aspects. On such configurations, a quantity is conserved along the intrinsic coordinate corresponding to the symmetry. As an example of its utility, this conserved quantity is combined with linear and angular momentum currents to construct solutions for the equilibria of a rotating, flowing string, for which it is akin to Bernoulli's constant.

  20. Dynamic response of a viscoelastic Timoshenko beam

    NASA Technical Reports Server (NTRS)

    Kalyanasundaram, S.; Allen, D. H.; Schapery, R. A.

    1987-01-01

    The analysis presented in this study deals with the vibratory response of viscoelastic Timoshenko (1955) beams under the assumption of small material loss tangents. The appropriate method of analysis employed here may be applied to more complex structures. This study compares the damping ratios obtained from the Timoshenko and Euler-Bernoulli theories for a given viscoelastic material system. From this study the effect of shear deformation and rotary inertia on damping ratios can be identified.

  1. Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loubenets, Elena R.

    We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H ≥ 3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].

  2. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle, and p is the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p, a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
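
    The schedule defined by the cycle length T and reinforcement probability p can be simulated directly. The sketch below assumes a Poisson responder, which is not part of the original procedure; it only illustrates how small T approximates a random ratio schedule and large T a random interval schedule:

```python
import random

random.seed(0)

def reinforcers_per_response(T, p, rate=2.0, duration=10_000.0):
    """Simulate the temporally defined schedule: time is divided into
    cycles of length T, and only the first response of each cycle is
    reinforced, with probability p. Responses are emitted as a Poisson
    process at `rate` per second (an assumed responder model)."""
    t = 0.0
    responses = reinforcers = 0
    cycle_end = T
    first_in_cycle = True
    armed = random.random() < p          # does this cycle hold a reinforcer?
    while True:
        t += random.expovariate(rate)    # time of the next response
        if t >= duration:
            break
        while t >= cycle_end:            # advance across cycle boundaries
            cycle_end += T
            first_in_cycle = True
            armed = random.random() < p
        responses += 1
        if first_in_cycle and armed:
            reinforcers += 1
        first_in_cycle = False
    return reinforcers / responses

# Short cycles: nearly every response starts a cycle, so the schedule acts
# like a random ratio with per-response reinforcement probability p.
print("T = 0.01 s:", reinforcers_per_response(T=0.01, p=0.1))
# Long cycles: the reinforcement rate is capped near p/T per second, as in
# a random interval schedule, so reinforcers per response is much smaller.
print("T = 10 s:  ", reinforcers_per_response(T=10.0, p=0.5))
```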

  3. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
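
    The generalization can be checked numerically: for uncorrelated variables with a common mean but unequal variances, E[s²] equals the average of the individual variances, Σσᵢ²/n. A quick simulation sketch with assumed variances:

```python
import random
import statistics

random.seed(42)

# Uncorrelated variables sharing mean 0 but with different (assumed) sds:
sigmas = [1.0, 2.0, 3.0]
target = statistics.mean(s * s for s in sigmas)   # (1 + 4 + 9) / 3

def sample_variance_once():
    xs = [random.gauss(0.0, s) for s in sigmas]   # one observation per variable
    return statistics.variance(xs)                # the usual unbiased s^2

# E[s^2] equals the average of the individual variances, even though no
# single common variance exists:
est = statistics.mean(sample_variance_once() for _ in range(50_000))
print(round(est, 3), "~", round(target, 3))
```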

  4. Uncertain dynamic analysis for rigid-flexible mechanisms with random geometry and material properties

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.

    2017-02-01

    This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty for the geometry of rigid parts is expressed as uniform random variables, while the uncertainty for the material properties of flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
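
    Of the ingredients above, Latin Hypercube sampling is the easiest to isolate. A minimal stdlib sketch (the paper couples LHS with a generalized-α solver and PC expansion, none of which is shown here):

```python
import random

def latin_hypercube(n, d, seed=0):
    """n points in [0, 1)^d: each axis is cut into n equal strata and
    every stratum is hit exactly once, unlike plain random sampling."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        col = [(k + rng.random()) / n for k in range(n)]  # one draw per stratum
        rng.shuffle(col)                                  # decouple the axes
        cols.append(col)
    return list(zip(*cols))

for point in latin_hypercube(n=5, d=2):
    print(point)
```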

  5. Do little interactions get lost in dark random forests?

    PubMed

    Wright, Marvin N; Ziegler, Andreas; König, Inke R

    2016-03-31

    Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one; detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, they were masked by marginal effects in other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most cases, interactions are masked by marginal effects and cannot be differentiated from them. Consequently, caution is warranted when claiming that random forests uncover interactions.

  6. The influence of an uncertain force environment on reshaping trial-to-trial motor variability.

    PubMed

    Izawa, Jun; Yoshioka, Toshinori; Osu, Rieko

    2014-09-10

    Motor memory is updated to generate ideal movements in a novel environment. When the environment changes randomly from trial to trial, how does the brain incorporate this uncertainty into motor memory? To investigate how the brain adapts to an uncertain environment, we considered a reach adaptation protocol in which individuals practiced moving in a force field into which noise was injected. After they had adapted, we measured the trial-to-trial variability in the temporal profiles of the produced hand force. We found that the motor variability was significantly magnified by the adaptation to the random force field. Temporal profiles of the motor variance were significantly dissociable between the two different types of random force fields experienced. A model-based analysis suggests that the variability is generated by noise in the gains of the internal model. It further suggests that the trial-to-trial motor variability magnified by the adaptation in a random force field is generated by the uncertainty of the internal model formed in the brain as a result of the adaptation.

  7. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
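
    The comonotonic upper bound replaces the dependence structure of a sum by perfect positive dependence: every term is evaluated at the same uniform draw, Sᶜ = Σᵢ Fᵢ⁻¹(U). A toy sketch with normal marginals (assumed for convenience; the paper's random survival probabilities have far more intricate distributions), showing that the comonotonic sum has the larger variance:

```python
import random
import statistics

random.seed(7)

# Assumed normal marginals; the comonotonic construction only needs the
# quantile functions F_i^{-1}.
dists = [statistics.NormalDist(0, 1),
         statistics.NormalDist(1, 2),
         statistics.NormalDist(-1, 3)]

def sum_sample(comonotonic):
    if comonotonic:
        u = random.random()                                # one shared uniform
        return sum(d.inv_cdf(u) for d in dists)
    return sum(d.inv_cdf(random.random()) for d in dists)  # independent uniforms

n = 20_000
var_c = statistics.variance(sum_sample(True) for _ in range(n))
var_i = statistics.variance(sum_sample(False) for _ in range(n))
# Comonotonic sum of normals has sd 1+2+3, so variance 36; the independent
# sum has variance 1+4+9 = 14.
print("comonotonic ~", round(var_c, 1), " independent ~", round(var_i, 1))
```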

  8. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define variance of the fuzzy random variables through alpha level. We have a theorem that can be used to know that the variance of fuzzy random variables is a fuzzy number. We have a multi-objective linear programming (MOLP) with fuzzy random of objective function coefficients. We will solve the problem by variance approach. The approach transform the MOLP with fuzzy random of objective function coefficients into MOLP with fuzzy of objective function coefficients. By weighted methods, we have linear programming with fuzzy coefficients and we solve by simplex method for fuzzy linear programming.

  9. Exploiting Data Missingness in Bayesian Network Modeling

    NASA Astrophysics Data System (ADS)

    Rodrigues de Morais, Sérgio; Aussem, Alex

    This paper proposes a framework built on the use of Bayesian networks (BN) for representing statistical dependencies between the existing random variables and additional dummy Boolean variables, which represent the presence or absence of the respective random variable's value. We show how augmenting the BN with these additional variables helps pinpoint the mechanism through which missing data contribute to the classification task. The missing data mechanism is thus explicitly taken into account to predict the class variable using the data at hand. Extensive experiments on synthetic and real-world incomplete data sets reveal that the missingness information improves classification accuracy.
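
    The dummy Boolean variables can be sketched as a simple dataset augmentation: one presence/absence indicator per feature, which a BN (or any classifier) can then condition on. The feature names and rows below are made up for illustration:

```python
# Toy incomplete dataset: None marks a missing value.
rows = [
    {"age": 34,   "income": 52_000, "label": 1},
    {"age": None, "income": 18_000, "label": 0},
    {"age": 51,   "income": None,   "label": 1},
]

def augment_with_missingness(row, features=("age", "income")):
    """Add a dummy Boolean variable per feature recording whether the
    value is present, so a model can exploit the missingness mechanism."""
    out = dict(row)
    for f in features:
        out[f + "_observed"] = row[f] is not None
    return out

for r in rows:
    print(augment_with_missingness(r))
```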

  10. Blade Pressure Distribution for a Moderately Loaded Propeller.

    DTIC Science & Technology

    1980-09-01

    S lifting surface area, ft²; s chordwise location as fraction of chord length; t time, sec; t0 maximum thickness of blade, ft; U free-stream velocity, ft/sec (design). … Developed in Reference 1, the method takes into account the quadratic form of the Bernoulli equation, since the perturbation velocities are sometimes of the… …normal derivatives at the loading and control point, respectively. It should be noted that the time factor has been eliminated from both sides of Eq. (3).

  11. Simulating aggregates of bivalents in 2n = 40 mouse meiotic spermatocytes through inhomogeneous site percolation processes.

    PubMed

    Berríos, Soledad; López Fenner, Julio; Maignan, Aude

    2018-06-19

    We show that an inhomogeneous Bernoulli site percolation process running upon a fullerene's dual [Formula: see text] can be used for representing bivalents attached to the nuclear envelope in mouse Mus M. Domesticus 2n = 40 meiotic spermatocytes during pachytene. It is shown that the induced clustering generated by overlapping percolation domains correctly reproduces the probability distribution observed in the experiments (data) after fine tuning the parameters.
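
    The paper runs site percolation on a fullerene's dual graph; the sketch below substitutes a square grid (an assumption, purely to keep the code short) but keeps the inhomogeneous part: each site gets its own Bernoulli occupation probability, and clusters of open sites are collected by depth-first search:

```python
import random

random.seed(11)

def percolation_clusters(n, p_open):
    """Inhomogeneous Bernoulli site percolation on an n x n grid:
    site (i, j) is open with its own probability p_open(i, j); clusters
    are maximal sets of open sites connected by shared edges."""
    open_ = {(i, j) for i in range(n) for j in range(n)
             if random.random() < p_open(i, j)}
    seen, sizes = set(), []
    for site in open_:
        if site in seen:
            continue
        stack, size = [site], 0
        seen.add(site)
        while stack:                      # depth-first flood fill
            i, j = stack.pop()
            size += 1
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in open_ and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        sizes.append(size)
    return sorted(sizes, reverse=True)

# Inhomogeneous occupation: sites near the top rows are more likely open.
sizes = percolation_clusters(30, lambda i, j: 0.7 - 0.4 * i / 29)
print("clusters:", len(sizes), "largest:", sizes[0] if sizes else 0)
```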

  12. Rasch Model Analysis with the BICAL Computer Program

    DTIC Science & Technology

    1976-09-01

    …and persons, which lead to measures that persist from trial to trial. The measurement model is essential in this process because it provides a framework… and his students. Section two derives the estimating equations for the Bernoulli (i.e., one trial per task) form and then generalizes to the binomial form (several trials per task). Finally, goodness-of-fit tests are presented for assessing the adequacy of the calibration.

  13. Couple stress theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

    New models for plane curved rods based on linear couple stress theory of elasticity have been developed. 2-D theory is developed from general 2-D equations of linear couple stress elasticity using a special curvilinear system of coordinates related to the middle line of the rod as well as special hypotheses based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and rotation along with body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity including Hooke's law have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for Fourier coefficients have been obtained. Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and the 2-D equations of linear couple stress theory of elasticity in a special curvilinear system. The obtained equations can be used to calculate stress-strain and to model thin walled structures in macro, micro and nano scales when taking into account couple stress and rotation effects.

  14. Bending of Euler-Bernoulli nanobeams based on the strain-driven and stress-driven nonlocal integral models: a numerical approach

    NASA Astrophysics Data System (ADS)

    Oskouie, M. Faraji; Ansari, R.; Rouhi, H.

    2018-04-01

    Eringen's nonlocal elasticity theory is extensively employed for the analysis of nanostructures because it is able to capture nanoscale effects. Previous studies have revealed that using the differential form of the strain-driven version of this theory leads to paradoxical results in some cases, such as bending analysis of cantilevers, and recourse must be made to the integral version. In this article, a novel numerical approach is developed for the bending analysis of Euler-Bernoulli nanobeams in the context of strain- and stress-driven integral nonlocal models. This numerical approach is proposed for the direct solution to bypass the difficulties related to converting the integral governing equation into a differential equation. First, the governing equation is derived based on both strain-driven and stress-driven nonlocal models by means of the minimum total potential energy. Also, in each case, the governing equation is obtained in both strong and weak forms. To solve numerically the derived equations, matrix differential and integral operators are constructed based upon the finite difference technique and trapezoidal integration rule. It is shown that the proposed numerical approach can be efficiently applied to the strain-driven nonlocal model with the aim of resolving the mentioned paradoxes. Also, it can solve the problem based on the strain-driven model without the inconsistencies in the application of this model that are reported in the literature.
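
    The idea of replacing differential operators by matrix difference/integration operators can be illustrated on the classical local Euler-Bernoulli cantilever, where the exact tip deflection PL³/(3EI) is available for comparison. This is only the local model with assumed numbers; the strain- and stress-driven nonlocal kernels of the paper are not implemented here:

```python
def cantilever_tip_deflection(P, L, E, I, n=2000):
    """Tip deflection of a clamped-free Euler-Bernoulli beam under an end
    load P, found by integrating E*I*w''(x) = M(x) = P*(L - x) twice with
    the trapezoidal rule, starting from w(0) = w'(0) = 0."""
    h = L / n
    xs = [k * h for k in range(n + 1)]
    curv = [P * (L - x) / (E * I) for x in xs]        # w''(x)
    slope = [0.0]                                     # w'(0) = 0 (clamped end)
    for k in range(n):
        slope.append(slope[-1] + 0.5 * h * (curv[k] + curv[k + 1]))
    w = [0.0]                                         # w(0) = 0
    for k in range(n):
        w.append(w[-1] + 0.5 * h * (slope[k] + slope[k + 1]))
    return w[-1]

# Illustrative numbers (steel-like E, arbitrary cross-section):
num = cantilever_tip_deflection(P=1.0, L=2.0, E=200e9, I=1e-8)
exact = 1.0 * 2.0**3 / (3 * 200e9 * 1e-8)             # P*L^3 / (3*E*I)
print(num, "vs", exact)
```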

  15. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

    The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Nonrecurrence and Bell-like inequalities

    NASA Astrophysics Data System (ADS)

    Danforth, Douglas G.

    2017-12-01

    The general class, Λ, of Bell hidden variables is composed of two subclasses ΛR and ΛN such that ΛR ∪ ΛN = Λ and ΛR ∩ ΛN = {}. The class ΛN is very large and contains random variables whose domain is the continuum, the reals. There are uncountably many reals. Every instance of a real random variable is unique. The probability of two instances being equal is exactly zero. ΛN induces sample independence. All correlations are context dependent, but not in the usual sense. There is no "spooky action at a distance". Random variables belonging to ΛN are independent from one experiment to the next. The existence of the class ΛN makes it impossible to derive any of the standard Bell inequalities used to define quantum entanglement.

  17. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section with random components in basic variables such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (Beer-Lambert's law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in basic variables.
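
    A Monte Carlo stand-in for the perturbation calculation: let the attenuation coefficient μ carry a small random component and propagate it through Beer-Lambert's law I = I₀ exp(-μx). All parameter values are assumed; the paper's treatment is analytic rather than sampling-based:

```python
import math
import random
import statistics

random.seed(5)

I0, x = 1000.0, 2.0           # incident intensity and thickness (assumed units)
mu_mean, mu_sd = 0.30, 0.03   # attenuation coefficient with a random component

# Sample the perturbed basic variable and push it through the law.
mus = [random.gauss(mu_mean, mu_sd) for _ in range(50_000)]
intensities = [I0 * math.exp(-m * x) for m in mus]

print("mean I    :", round(statistics.mean(intensities), 1))
print("sd   I    :", round(statistics.stdev(intensities), 1))
print("I(mean mu):", round(I0 * math.exp(-mu_mean * x), 1))
```

    Because exp(-μx) is convex in μ, the mean perturbed intensity sits slightly above the intensity at the mean μ, which is the kind of deviation from average behavior the perturbation expressions quantify.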

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  19. Benford's law and continuous dependent random variables

    NASA Astrophysics Data System (ADS)

    Becker, Thealexa; Burt, David; Corcoran, Taylor C.; Greaves-Tunnell, Alec; Iafrate, Joseph R.; Jing, Joy; Miller, Steven J.; Porfilio, Jaclyn D.; Ronan, Ryan; Samranvedhya, Jirapat; Strauch, Frederick W.; Talbut, Blaine

    2018-01-01

    Many mathematical, man-made and natural systems exhibit a leading-digit bias, where a first digit (base 10) of 1 occurs not 11% of the time, as one would expect if all digits were equally likely, but rather 30%. This phenomenon is known as Benford's Law. Analyzing which datasets adhere to Benford's Law and how quickly Benford behavior sets in are the two most important problems in the field. Most previous work studied systems of independent random variables, and relied on the independence in their analyses. Inspired by natural processes such as particle decay, we study the dependent random variables that emerge from models of decomposition of conserved quantities. We prove that in many instances the distribution of lengths of the resulting pieces converges to Benford behavior as the number of divisions grows, and give several conjectures for other fragmentation processes. The main difficulty is that the resulting random variables are dependent. We handle this by using tools from Fourier analysis and irrationality exponents to obtain quantified convergence rates, as well as by introducing and developing techniques to measure and control the dependencies. The construction of these tools is one of the major motivations of this work, as our approach can be applied to many other dependent systems. As an example, we show that the n! entries in the determinant expansions of n × n matrices, with entries independently drawn from nice random variables, converge to Benford's Law.
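
    A toy version of the fragmentation model: repeatedly split each piece of a unit length at a uniform random point, then tabulate leading digits of the piece lengths against the Benford probabilities log₁₀(1 + 1/d). The number of splitting levels is an arbitrary choice:

```python
import math
import random
from collections import Counter

random.seed(2)

def fragment(length=1.0, levels=12):
    """Break a conserved length at a uniform random point, repeatedly:
    after `levels` rounds there are 2**levels dependent piece lengths."""
    pieces = [length]
    for _ in range(levels):
        new = []
        for p in pieces:
            u = random.random()
            new += [p * u, p * (1 - u)]
        pieces = new
    return pieces

pieces = fragment()
# Leading digit via scientific notation, e.g. 0.0023 -> "2.3e-03" -> 2.
digits = Counter(int(f"{p:e}"[0]) for p in pieces)
total = sum(digits.values())
for d in range(1, 10):
    print(d, round(digits[d] / total, 3),
          "Benford:", round(math.log10(1 + 1 / d), 3))
```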

  20. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
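
    For context, the original spectral representation (OSR) that the paper starts from can be sketched as follows; the random-function step, which collapses the 2N standard normals below to two elementary random variables, is not reproduced here. The flat spectrum is an assumed example:

```python
import math
import random

random.seed(9)

def simulate_stationary(spectrum, omega_max, N, times):
    """Classical spectral representation of a zero-mean stationary process:
    X(t) = sum_k sqrt(2*S(w_k)*dw) * (X_k cos(w_k t) + Y_k sin(w_k t)),
    with 2N independent standard normal amplitudes X_k, Y_k."""
    dw = omega_max / N
    terms = []
    for k in range(N):
        wk = (k + 0.5) * dw                      # midpoint frequency
        amp = math.sqrt(2 * spectrum(wk) * dw)
        terms.append((wk, amp, random.gauss(0, 1), random.gauss(0, 1)))
    return [sum(a * (xk * math.cos(w * t) + yk * math.sin(w * t))
                for w, a, xk, yk in terms)
            for t in times]

# Flat (assumed) spectrum S(w) = 1 on [0, 2]: Var X(t) = 2 * integral = 4.
sample_path = simulate_stationary(lambda w: 1.0, 2.0, 64, [0.0, 0.5, 1.0])
print(sample_path)
```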

  1. The living Drake equation of the Tau Zero Foundation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-03-01

    The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as scientific knowledge increases. The adjective "living" refers to this continuous enrichment of the Drake equation, which is the goal of a new research project that the Tau Zero Foundation has entrusted to this author, the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (neither of which assumes the factors to be identically distributed) allows for that.
In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
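
    The lognormal claim is easy to probe numerically: multiply seven independent positive random variables (illustrative uniform ranges below, not astrophysical estimates) and look at log N, which by the CLT is approximately normal:

```python
import math
import random
import statistics

random.seed(4)

# Seven positive factors with arbitrary uniform ranges; purely illustrative
# numbers, not astrophysical estimates.
factor_ranges = [(1, 10), (0.1, 1), (0.1, 1), (0.01, 0.5),
                 (0.01, 0.5), (0.001, 0.1), (1e3, 1e5)]

def draw_N():
    out = 1.0
    for lo, hi in factor_ranges:
        out *= random.uniform(lo, hi)
    return out

samples = [draw_N() for _ in range(20_000)]
logs = [math.log(s) for s in samples]

# log N is a sum of seven independent terms, so the CLT makes it roughly
# normal and N itself roughly lognormal.
print("mean of log N:", round(statistics.mean(logs), 2))
print("sd   of log N:", round(statistics.stdev(logs), 2))
```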

  2. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
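
    The spreadsheet example ties the binomial to a sum of Bernoulli variables; the same demonstration can be sketched in Python (n and p chosen arbitrarily), checking the mean np and variance np(1-p):

```python
import random
import statistics

random.seed(8)

n, p = 30, 0.25   # arbitrary example values

def binomial_draw():
    """A Binomial(n, p) variable built as a sum of n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

draws = [binomial_draw() for _ in range(40_000)]
print("sample mean:", round(statistics.mean(draws), 3),
      " theory:", n * p)                   # n*p = 7.5
print("sample var :", round(statistics.variance(draws), 3),
      " theory:", n * p * (1 - p))         # n*p*(1-p) = 5.625
```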

  3. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  4. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
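    The lognormal behavior described in this abstract is straightforward to reproduce by Monte Carlo. The sketch below is not the author's MathCad file: the seven factor means are illustrative stand-ins, and each factor is drawn uniformly within ±20% of its mean rather than with the paper's particular standard deviations.

    ```python
    import math
    import random
    import statistics

    random.seed(1)

    # Illustrative stand-ins for the seven Drake factors (not the paper's values);
    # each factor is drawn uniformly within +/-20% of its mean.
    means = [3.5e11, 0.5, 2.0, 0.33, 0.01, 0.01, 1e-7]

    def draw_N():
        # N is the product of the seven independent random factors.
        return math.prod(random.uniform(0.8 * m, 1.2 * m) for m in means)

    samples = [draw_N() for _ in range(50000)]
    classical_N = math.prod(means)       # the ordinary (deterministic) Drake product
    mc_mean = statistics.fmean(samples)  # E[N] = product of factor means, by independence

    # The CLT acts on log N = sum of the seven log-factors, so log N is
    # approximately Gaussian, i.e. N is approximately lognormal: its log
    # should show nearly zero skewness.
    logs = [math.log(x) for x in samples]
    mu, sd = statistics.fmean(logs), statistics.pstdev(logs)
    skew = statistics.fmean(((x - mu) / sd) ** 3 for x in logs)
    ```

    With only seven factors the Gaussian limit for log N is already quite good, which is why the lognormal model is usable despite the small number of terms.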

  5. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine. PMID:23750070
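    As a rough numerical illustration of the idea (a moment-matching stand-in, not the paper's Rr2 estimator), the share of variance attributable to random effects can be recovered from a simulated random-intercept model; all parameter values below are hypothetical.

    ```python
    import random
    import statistics

    random.seed(2)

    # Hypothetical random-intercept data: 200 groups, 20 subjects each.
    # y_ij = b_i + e_ij with Var(b) = 4 (between-group) and Var(e) = 1 (within-group).
    groups, per_group = 200, 20
    var_b, var_e = 4.0, 1.0

    data = []
    for _ in range(groups):
        b = random.gauss(0, var_b ** 0.5)
        data.append([b + random.gauss(0, var_e ** 0.5) for _ in range(per_group)])

    # Moment-based analogue of the random effects coefficient of determination:
    # the share of total variance attributable to the random effects,
    # var_b / (var_b + var_e), here 4 / 5 = 0.8.
    within = statistics.fmean(statistics.variance(g) for g in data)
    group_means = [statistics.fmean(g) for g in data]
    # Subtract within/per_group because group means also carry sampling noise.
    between = statistics.variance(group_means) - within / per_group
    r2_random = between / (between + within)
    ```

    A value near 0 would suggest the random intercepts can be dropped; a value near 1 would suggest treating the groups as fixed effects, mirroring the interpretation given in the abstract.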

  6. Correlated resistive/capacitive state variability in solid TiO2 based memory devices

    NASA Astrophysics Data System (ADS)

    Li, Qingjiang; Salaoru, Iulia; Khiat, Ali; Xu, Hui; Prodromakis, Themistoklis

    2017-05-01

In this work, we experimentally demonstrated correlated resistive/capacitive switching and state variability in practical TiO2 based memory devices. Based on the filamentary switching mechanism, we argue that the impedance state variability stems from randomly distributed defects inside the oxide bulk. Finally, our assumption was verified via a current percolation circuit model, taking into account the random distribution of defects and the coexistence of memristive and memcapacitive behavior.

  7. Algebraic Functions of H-Functions with Specific Dependency Structure.

    DTIC Science & Technology

    1984-05-01

a study of its characteristic function. Such analysis is reproduced in books by Springer (17), Anderson (23), Feller (34,35), Mood and Graybill (52...following linearity property for expectations of jointly distributed random variables is derived. Theorem 1.1: If X and Y are real random variables...appear in American Journal of Mathematical and Management Science. 13. Mathai, A.M., and R.K. Saxena, "On linear combinations of stochastic variables

  8. Geospatial clustering in sugar-sweetened beverage consumption among Boston youth.

    PubMed

    Tamura, Kosuke; Duncan, Dustin T; Athens, Jessica K; Bragg, Marie A; Rienti, Michael; Aldstadt, Jared; Scott, Marc A; Elbel, Brian

    2017-09-01

    The objective was to detect geospatial clustering of sugar-sweetened beverage (SSB) intake in Boston adolescents (age = 16.3 ± 1.3 years [range: 13-19]; female = 56.1%; White = 10.4%, Black = 42.6%, Hispanics = 32.4%, and others = 14.6%) using spatial scan statistics. We used data on self-reported SSB intake from the 2008 Boston Youth Survey Geospatial Dataset (n = 1292). Two binary variables were created: consumption of SSB (never versus any) on (1) soda and (2) other sugary drinks (e.g., lemonade). A Bernoulli spatial scan statistic was used to identify geospatial clusters of soda and other sugary drinks in unadjusted models and models adjusted for age, gender, and race/ethnicity. There was no statistically significant clustering of soda consumption in the unadjusted model. In contrast, a cluster of non-soda SSB consumption emerged in the middle of Boston (relative risk = 1.20, p = .005), indicating that adolescents within the cluster had a 20% higher probability of reporting non-soda SSB intake than outside the cluster. The cluster was no longer significant in the adjusted model, suggesting spatial variation in non-soda SSB drink intake correlates with the geographic distribution of students by race/ethnicity, age, and gender.

  9. Lipid body accumulation alters calcium signaling dynamics in immune cells

    PubMed Central

    Greineisen, William E.; Speck, Mark; Shimoda, Lori M.N.; Sung, Carl; Phan, Nolwenn; Maaetoft-Udsen, Kristina; Stokes, Alexander J.; Turner, Helen

    2014-01-01

There is well-established variability in the numbers of lipid bodies (LB) in macrophages, eosinophils, and neutrophils. Similar to the steatosis observed in adipocytes and hepatocytes during hyperinsulinemia and nutrient overload, immune cell LB hyper-accumulate in response to bacterial and parasitic infection and inflammatory presentations. Recently we described that hyperinsulinemia, both in vitro and in vivo, drives steatosis and phenotypic changes in primary and transformed mast cells and basophils. LB reach high numbers in these steatotic cytosols, and here we propose that they could dramatically impact the transcytoplasmic signaling pathways. We compared calcium release and influx responses at the population and single cell level in normal and steatotic model mast cells. At the population level, all aspects of FcεRI-dependent calcium mobilization, as well as activation of calcium-dependent downstream signaling targets such as NFATC1 phosphorylation, are suppressed. At the single cell level, we demonstrate that LB are both sources and sinks of calcium following FcεRI cross-linking. Unbiased analysis of the impact of the presence of LB on the rate of trans-cytoplasmic calcium signals suggests that LB enrichment accelerates calcium propagation, which may reflect a Bernoulli effect. LB abundance thus impacts this fundamental signaling pathway and its downstream targets. PMID:25016314

  10. Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method

    NASA Astrophysics Data System (ADS)

    Bekhoucha, F.; Rechak, S.; Cadou, J. M.

    2016-12-01

In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived from the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem that couples the lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem into a standard form with real symmetric matrices. By using techniques to resolve these singular problems with the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to the exact values. Results obtained by continuation are also compared to those computed with the discrete eigenvalue problem.

  11. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. 
The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an average ET distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.

  12. On the distribution of a product of N Gaussian random variables

    NASA Astrophysics Data System (ADS)

    Stojanac, Željka; Suess, Daniel; Kliesch, Martin

    2017-08-01

    The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It has been known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
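    The basic moments of such a product are easy to verify by simulation (a quick Monte Carlo sketch, not the paper's Meijer G-function machinery): for a product P of N independent standard Gaussians, independence gives E[P] = 0 and E[P²] = (E[X²])^N = 1 for every N, while the probability mass concentrates near zero as N grows, consistent with the power-log behavior of the CDF near the origin.

    ```python
    import math
    import random
    import statistics

    random.seed(3)
    N, trials = 4, 200000

    # Product of N independent standard Gaussian variables.
    prods = [math.prod(random.gauss(0, 1) for _ in range(N)) for _ in range(trials)]

    # By independence, E[P] = 0 and E[P^2] = (E[X^2])^N = 1 for any N,
    # but the density piles up near zero (with heavy tails) as N grows.
    mean_p = statistics.fmean(prods)
    second_moment = statistics.fmean(x * x for x in prods)
    frac_near_zero = sum(abs(x) < 0.1 for x in prods) / trials
    ```

    For a single standard Gaussian, P(|X| < 0.1) is only about 0.08; for the product of four, `frac_near_zero` is several times larger, which is the concentration near zero that the series expansion of the CDF captures analytically.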

  13. Radial profiles of velocity and pressure for condensation-induced hurricanes

    NASA Astrophysics Data System (ADS)

    Makarieva, A. M.; Gorshkov, V. G.

    2011-02-01

The Bernoulli integral, in the form of an algebraic equation, is obtained for the hurricane air flow as the sum of the kinetic energy of wind and the condensational potential energy. Accounting for the eye rotation energy and the decrease of angular momentum towards the hurricane center, it is shown that the theoretical profiles of pressure and velocity agree well with observations for intense hurricanes. The previous order-of-magnitude estimates obtained in the pole approximation are confirmed.

  14. Theoretical Limits of Damping Attainable by Smart Beams with Rate Feedback

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1997-01-01

Using a generally accepted model we present a comprehensive analysis (within the page limitation) of an Euler-Bernoulli beam with PZT sensor-actuator and pure rate feedback. The emphasis is on the root locus: the dependence of the attainable damping on the feedback gain. There is a critical value of the gain beyond which the damping decreases to zero. We construct the time-domain response using semigroup theory, and show that the eigenfunctions form a Riesz basis, leading to a 'modal' expansion.

  15. A Nonlinear Finite Element Framework for Viscoelastic Beams Based on the High-Order Reddy Beam Theory

    DTIC Science & Technology

    2012-06-09

employed theories are the Euler-Bernoulli beam theory (EBT) and the Timoshenko beam theory (TBT). The major deficiency associated with the EBT is failure to...account for deformations associated with shearing. The TBT relaxes the normality assumption of the EBT and admits a constant state of shear strain...on a given cross-section. As a result, the TBT necessitates the use of shear correction coefficients in order to accurately predict transverse

  16. Random parameter models for accident prediction on two-lane undivided highways in India.

    PubMed

    Dinu, R R; Veeraragavan, A

    2011-02-01

    Generalized linear modeling (GLM), with the assumption of Poisson or negative binomial error structure, has been widely employed in road accident modeling. A number of explanatory variables related to traffic, road geometry, and environment that contribute to accident occurrence have been identified and accident prediction models have been proposed. The accident prediction models reported in literature largely employ the fixed parameter modeling approach, where the magnitude of influence of an explanatory variable is considered to be fixed for any observation in the population. Similar models have been proposed for Indian highways too, which include additional variables representing traffic composition. The mixed traffic on Indian highways comes with a lot of variability within, ranging from difference in vehicle types to variability in driver behavior. This could result in variability in the effect of explanatory variables on accidents across locations. Random parameter models, which can capture some of such variability, are expected to be more appropriate for the Indian situation. The present study is an attempt to employ random parameter modeling for accident prediction on two-lane undivided rural highways in India. Three years of accident history, from nearly 200 km of highway segments, is used to calibrate and validate the models. The results of the analysis suggest that the model coefficients for traffic volume, proportion of cars, motorized two-wheelers and trucks in traffic, and driveway density and horizontal and vertical curvatures are randomly distributed across locations. The paper is concluded with a discussion on modeling results and the limitations of the present study. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from a specific population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown, population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a variable that correlates, positively or negatively, with both the exposure of interest and the outcome of interest. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. 
Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of 1 explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is not only designing but also analyzing and interpreting data from a randomized trial or observational study.

  18. Random dopant fluctuations and statistical variability in n-channel junctionless FETs

    NASA Astrophysics Data System (ADS)

    Akhavan, N. D.; Umana-Membreno, G. A.; Gu, R.; Antoszewski, J.; Faraone, L.

    2018-01-01

The influence of random dopant fluctuations on the statistical variability of the electrical characteristics of the n-channel silicon junctionless nanowire transistor (JNT) has been studied using three-dimensional quantum simulations based on the non-equilibrium Green’s function (NEGF) formalism. Average randomly distributed body doping densities of 2 × 10^19, 6 × 10^19 and 1 × 10^20 cm^-3 have been considered, employing an atomistic model for JNTs with gate lengths of 5, 10 and 15 nm. We demonstrate that by properly adjusting the doping density in the JNT, near-ideal statistical variability and electrical performance can be achieved, which can pave the way for the continuation of scaling in silicon CMOS technology.

  19. Measures of Residual Risk with Connections to Regression, Risk Tracking, Surrogate Models, and Ambiguity

    DTIC Science & Technology

    2015-04-15

manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved... the random variable of interest is viewed in concert with a related random vector that helps to manage, predict, and mitigate the risk in the original variable... Residual risk can be exemplified as a quantification of the improved situation faced

  20. Sums and Products of Jointly Distributed Random Variables: A Simplified Approach

    ERIC Educational Resources Information Center

    Stein, Sheldon H.

    2005-01-01

    Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…

  1. A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.

    ERIC Educational Resources Information Center

    Kraemer, Helena Chmura; Thiemann, Sue

    1989-01-01

    Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…

  2. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  3. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
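    The variance-mixture idea behind this abstract can be illustrated with a toy simulation (an assumption-laden sketch, not the paper's model): if the randomly varying intensity that serves as the Bayesian prior is taken to be exponentially distributed, the marginal field component is Laplace rather than Gaussian, i.e. visibly heavier-tailed, with excess kurtosis 3.

    ```python
    import random
    import statistics

    random.seed(4)
    trials = 100000

    # Variance-mixture sketch: draw a random intensity (the Bayesian prior,
    # here ASSUMED exponential), then draw the field component as a zero-mean
    # Gaussian whose variance equals that intensity.
    fields = []
    for _ in range(trials):
        intensity = random.expovariate(1.0)        # prior on the variance
        fields.append(random.gauss(0, intensity ** 0.5))

    # The resulting marginal field is Laplace distributed (excess kurtosis 3),
    # heavier-tailed than a plain Gaussian (excess kurtosis 0).
    mu = statistics.fmean(fields)
    sd = statistics.pstdev(fields)
    excess_kurtosis = statistics.fmean(((x - mu) / sd) ** 4 for x in fields) - 3.0
    ```

    Conditioning the field variance on a random intensity is exactly what makes the unconditional field statistics non-Gaussian, even though each conditional draw is Gaussian.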

  4. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.

  5. Random Effects: Variance Is the Spice of Life.

    PubMed

    Jupiter, Daniel C

    Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    PubMed

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
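    A small simulation (hypothetical stratum rates, not the study's field data) shows why an auxiliary variable helps: when strata with contrasting presence rates can be identified in advance, as a gene-flow model would do, allocating samples Neyman-style across strata gives a noticeably lower sampling error than simple random sampling at the same total sample size.

    ```python
    import random
    import statistics

    random.seed(5)

    # Hypothetical strata: (area share, true adventitious-presence rate).
    # The rates stand in for predictions of a gene-flow auxiliary variable.
    strata = [(0.7, 0.001), (0.2, 0.01), (0.1, 0.08)]
    true_rate = sum(w * p for w, p in strata)
    n = 300  # grains per sample

    def srs_estimate():
        # Simple random sampling: each grain lands in a stratum with prob = area share.
        hits = 0
        for _ in range(n):
            u, acc = random.random(), 0.0
            for w, p in strata:
                acc += w
                if u < acc:
                    hits += random.random() < p
                    break
        return hits / n

    def stratified_estimate():
        # Neyman-style allocation: stratum sample sizes proportional to
        # w * sqrt(p * (1 - p)); each stratum estimate is reweighted by area share.
        alloc = [w * (p * (1 - p)) ** 0.5 for w, p in strata]
        total = sum(alloc)
        est = 0.0
        for (w, p), a in zip(strata, alloc):
            m = max(1, round(n * a / total))
            est += w * sum(random.random() < p for _ in range(m)) / m
        return est

    reps = 2000
    srs = [srs_estimate() for _ in range(reps)]
    strat = [stratified_estimate() for _ in range(reps)]
    srs_mean, strat_mean = statistics.fmean(srs), statistics.fmean(strat)
    srs_sd, strat_sd = statistics.pstdev(srs), statistics.pstdev(strat)
    ```

    Both estimators are unbiased for the field-level rate; the gain from stratification is entirely in variance, which is why the study finds that smaller samples suffice once the auxiliary variable is used.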

  7. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for some interesting physical quantities, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.

  8. Thermophysical Fluid Dynamics: the Key to the Structures of Fluid Objects

    NASA Astrophysics Data System (ADS)

    Houben, H.

    2013-12-01

    It has become customary to model the hydrodynamics of fluid planets like Jupiter and Saturn by spinning up general circulation models until they reach a statistical steady state. This approach is physically sound, based on the thermodynamic expectation that the system will eventually achieve a state of maximum entropy, but the models have not been specifically designed for this purpose. Over the course of long integrations, numerical artifacts can drive the system to a state that does not correspond to the physically realistic end state. A different formulation of the governing equations promises better results. The equations of motion are recast as scalar conservation laws in which the diabatic and irreversible terms (both entropy-changing) are clearly identified. The balance between these terms defines the steady state of the system analytically, without the need for any temporal integrations. The conservation of mass in this system is trivial. Conservation of angular momentum replaces the zonal momentum equation and determines the zonal wind from a balance between the tidal torque and frictional dissipation. The principle of wave-mean flow non-interaction is preserved. Bernoulli's Theorem replaces the energy equation. The potential temperature structure is determined by the balance between work done against friction and heat transfer by convection and radiation. An equation of state and the traditional momentum equations in the meridional plane are sufficient to complete the model. Based on the assumption that the final state vertical and meridional winds are small compared to the zonal wind (in any case they are impossible to predict ab initio as they are driven by wave flux convergences), these last equations determine the pressure and density (and hence gravity) fields of the basic state. The thermal wind relation (in its most general form with the axial derivative of the zonal wind balancing the baroclinicity) is preserved. 
The model is not hydrostatic (in the sense used in planetary modeling) and the zonal wind is not constant on cylinders. Rather, the zonal wind falls off more rapidly with depth, at least as fast as r³. A similar reformulation of the equations of magnetohydrodynamics is possible. It is found that wave-mean flow non-interaction extends to Alfvén waves. Bernoulli's Theorem is augmented by the Poynting Theorem. The components of the traditional dynamo equation can be written as conservation laws. Only a single element of the alpha tensor contributes to the generation of axisymmetric magnetic fields and the mean meridional circulation provides a significant feedback, quenching the omega effect and limiting the amplitudes of non-axisymmetric fields. Thus analytic models are available for all the state variables of Jupiter and Saturn. The unknown independent variables are terms in the equation of state, the eddy viscosity and heat transport coefficients, the magnetic resistivity, and the strength of the tidal torques (which are dependent on the vertical structure of the planet's troposphere). By making new measurements of the atmospheric structure and higher order gravitational moments of Jupiter, JUNO has the potential to constrain these unknowns and contribute greatly to our understanding of the interior of that planet.

  9. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  10. Comparison of Random Forest and Parametric Imputation Models for Imputing Missing Data Using MICE: A CALIBER Study

    PubMed Central

    Shah, Anoop D.; Bartlett, Jonathan W.; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-01-01

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The “true” imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001–2010) with complete data on all covariates. Variables were artificially made “missing at random,” and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data. PMID:24589914
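
    A minimal Python analogue of the comparison (a sketch using scikit-learn's IterativeImputer, which implements a MICE-style chained-equations loop; the data-generating model and all parameter choices are invented for illustration and are not taken from the CALIBER study):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 600
x1 = rng.normal(size=n)
x2 = x1**2 + rng.normal(scale=0.5, size=n)   # "true" model is nonlinear in x1
X = np.column_stack([x1, x2])

mask = rng.random(n) < 0.3                   # make x2 missing at random
X_miss = X.copy()
X_miss[mask, 1] = np.nan

# Default-style parametric imputer vs. a random-forest-based imputer
parametric = IterativeImputer(estimator=BayesianRidge(), random_state=0)
forest = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    random_state=0,
)

err_param = np.sqrt(np.mean((parametric.fit_transform(X_miss)[mask, 1] - X[mask, 1]) ** 2))
err_rf = np.sqrt(np.mean((forest.fit_transform(X_miss)[mask, 1] - X[mask, 1]) ** 2))
print(f"RMSE, linear imputer:        {err_param:.3f}")
print(f"RMSE, random forest imputer: {err_rf:.3f}")
```

    Because the partially observed variable depends on the fully observed one quadratically, the linear imputation model misses the structure while the forest recovers it, mirroring the second simulation study's finding.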

  11. Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    PubMed

    Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry

    2014-03-15

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.

  12. "Congratulations, you have been randomized into the control group!(?)": issues to consider when recruiting schools for matched-pair randomized control trials of prevention programs.

    PubMed

    Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa

    2008-03-01

    Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruitment), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (eg, readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.

  13. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] well away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. The theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
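
    The idea of a proportion of variance attributable to random effects can be sketched on a simulated random intercept model (a hedged illustration using a simple method-of-moments variance decomposition, not the authors' exact estimator; all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 300, 10                  # groups, observations per group
sigma_u, sigma_e = 1.0, 1.0     # random-intercept and residual SDs

u = rng.normal(0, sigma_u, size=(m, 1))
y = 5.0 + u + rng.normal(0, sigma_e, size=(m, k))    # random intercept model

# Method-of-moments variance components:
var_within = y.var(axis=1, ddof=1).mean()            # estimates sigma_e^2
var_between = y.mean(axis=1).var(ddof=1)             # estimates sigma_u^2 + sigma_e^2/k
var_u = max(var_between - var_within / k, 0.0)

# Proportion of conditional variance explained by the random effects
r2_random = var_u / (var_u + var_within)
print(f"random-effects R^2 estimate: {r2_random:.3f}  (true value 0.5 here)")
```

    With equal variance components the true proportion is 0.5; values near 0 or 1 would suggest dropping the random effects or treating them as fixed, as the abstract describes.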

  14. Random vectors and spatial analysis by geostatistics for geotechnical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, D.S.

    1987-08-01

    Geostatistics is extended to the spatial analysis of vector variables by defining the estimation variance and vector variogram in terms of the magnitudes of difference vectors. Many random variables in geotechnology are vectorial rather than scalar, and structural analysis requires interpolation of the sampled variables to construct and characterize structural models. A better local estimator results in higher-quality input models, and geostatistics can provide such estimators: kriging estimators. The efficiency of geostatistics for vector variables is demonstrated in a case study of rock joint orientations in geological formations. The positive cross-validation encourages application of geostatistics to the spatial analysis of random vectors in geoscience as well as in various geotechnical fields, including optimum site characterization, rock mechanics for mining and civil structures, cavability analysis of block caving, petroleum engineering, and hydrologic and hydraulic modeling.
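
    The vector variogram, half the mean squared magnitude of the difference vectors at lag h, can be sketched on a synthetic 1-D transect of 2-D vectors (an illustrative toy: spatial correlation is induced by moving-average smoothing, not by any fitted geological model):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D transect of 2-D vectors with spatial correlation via moving-average smoothing
n, w = 2000, 21
raw = rng.normal(size=(n + w - 1, 2))
kernel = np.ones(w) / w
v = np.column_stack([np.convolve(raw[:, j], kernel, mode="valid") for j in range(2)])

def vector_variogram(v, h):
    """Half the mean squared magnitude of difference vectors at lag h."""
    d = v[h:] - v[:-h]
    return 0.5 * np.mean(np.sum(d**2, axis=1))

for h in (1, 5, 20, 50):
    print(f"lag {h:3d}: gamma = {vector_variogram(v, h):.4f}")
```

    The variogram rises from near zero at short lags to a sill once the lag exceeds the correlation range, which is the structure a kriging estimator would then exploit.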

  15. A 1.26 μW Cytomimetic IC Emulating Complex Nonlinear Mammalian Cell Cycle Dynamics: Synthesis, Simulation and Proof-of-Concept Measured Results.

    PubMed

    Houssein, Alexandros; Papadimitriou, Konstantinos I; Drakakis, Emmanuel M

    2015-08-01

    Cytomimetic circuits represent a novel, ultra low-power, continuous-time, continuous-value class of circuits, capable of mapping on silicon cellular and molecular dynamics modelled by means of nonlinear ordinary differential equations (ODEs). Such monolithic circuits are in principle able to emulate on chip single or multiple cell operations in a highly parallel fashion. Cytomimetic topologies can be synthesized by adopting the Nonlinear Bernoulli Cell Formalism (NBCF), a mathematical framework that exploits the striking similarities between the equations describing weakly-inverted Metal-Oxide Semiconductor (MOS) devices and coupled nonlinear ODEs, typically appearing in models of naturally encountered biochemical systems. The NBCF maps biological state variables onto strictly positive subthreshold MOS circuit currents. This paper presents the synthesis, the simulation and proof-of-concept chip results corresponding to the emulation of a complex cellular network mechanism, the skeleton model for the network of Cyclin-dependent Kinases (CdKs) driving the mammalian cell cycle. This five variable nonlinear biological model, when appropriate model parameter values are assigned, can exhibit multiple oscillatory behaviors, varying from simple periodic oscillations to complex oscillations such as quasi-periodicity and chaos. The validity of our approach is verified by simulated results with realistic process parameters from the commercially available AMS 0.35 μm technology and by chip measurements. The fabricated chip occupies an area of 2.27 mm² and consumes a power of 1.26 μW from a power supply of 3 V. The presented cytomimetic topology follows closely the behavior of its biological counterpart, exhibiting similar time-dependent solutions of the Cdk complexes, the transcription factors and the proteins.

  16. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  17. Randomized trial of intermittent or continuous amnioinfusion for variable decelerations.

    PubMed

    Rinehart, B K; Terrone, D A; Barrow, J H; Isler, C M; Barrilleaux, P S; Roberts, W E

    2000-10-01

    To determine whether continuous or intermittent bolus amnioinfusion is more effective in relieving variable decelerations. Patients with repetitive variable decelerations were randomized to an intermittent bolus or continuous amnioinfusion. The intermittent bolus infusion group received boluses of 500 mL of normal saline, each over 30 minutes, with boluses repeated if variable decelerations recurred. The continuous infusion group received a bolus infusion of 500 mL of normal saline over 30 minutes and then 3 mL per minute until delivery occurred. The ability of the amnioinfusion to abolish variable decelerations was analyzed, as were maternal demographic and pregnancy outcome variables. Power analysis indicated that 64 patients would be required. Thirty-five patients were randomized to intermittent infusion and 30 to continuous infusion. There were no differences between groups in terms of maternal demographics, gestational age, delivery mode, neonatal outcome, median time to resolution of variable decelerations, or the number of times variable decelerations recurred. The median volume infused in the intermittent infusion group (500 mL) was significantly less than that in the continuous infusion group (905 mL, P =.003). Intermittent bolus amnioinfusion is as effective as continuous infusion in relieving variable decelerations in labor. Further investigation is necessary to determine whether either of these techniques is associated with increased occurrence of rare complications such as cord prolapse or uterine rupture.

  18. Nonlocal theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-09-01

    New models for plane curved rods based on linear nonlocal theory of elasticity have been developed. The 2-D theory is developed from general 2-D equations of linear nonlocal elasticity using a special curvilinear system of coordinates related to the middle line of the rod along with special hypothesis based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity including nonlocal constitutive relations have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of local elasticity, a system of differential equations in terms of displacements for Fourier coefficients has been obtained. First and second order approximations have been considered in detail. Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and the 2-D equations of linear nonlocal theory of elasticity which are considered in a special curvilinear system of coordinates related to the middle line of the rod. The obtained equations can be used to calculate stress-strain and to model thin walled structures in micro- and nanoscales when taking into account size dependent and nonlocal effects.

  19. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling of the training data and the restricted set of input variables available for selection at each split. Finally, I introduce R functions to perform model-based recursive partitioning, a method that incorporates recursive partitioning into conventional parametric model building.
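
    The R workflow described above has a close Python analogue; a hedged sketch with scikit-learn (synthetic data and all parameter choices are invented) showing how the forest's two sources of diversity, bootstrap sampling and restricted variable selection, are exposed as parameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a clinical data set (illustrative only)
X, y = make_classification(n_samples=1000, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A single tree: recursive binary partitioning until the stopping criteria are met
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# max_features < n_features restricts the variables considered at each split;
# bootstrap resampling of the training data supplies the second source of diversity.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                random_state=0).fit(X_tr, y_tr)

print(f"single tree accuracy:   {tree.score(X_te, y_te):.3f}")
print(f"random forest accuracy: {forest.score(X_te, y_te):.3f}")
```

    Averaging many de-correlated trees typically stabilizes the predictions of the single, high-variance tree.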

  20. Reliability analysis of structures under periodic proof tests in service

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    A reliability analysis of structures subjected to random service loads and periodic proof tests treats gust loads and maneuver loads as random processes. Crack initiation, crack propagation, and strength degradation are treated as the fatigue process. The time to fatigue crack initiation and ultimate strength are random variables. Residual strength decreases during crack propagation, so that failure rate increases with time. When a structure fails under periodic proof testing, a new structure is built and proof-tested. The probability of structural failure in service is derived from treatment of all the random variables, strength degradations, service loads, proof tests, and the renewal of failed structures. Some numerical examples are worked out.

  1. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable and the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Simulations demonstrate that the new methods have better mean squared error performance than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  2. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  3. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.

  4. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
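
    One concrete instance of the two-gamma construction can be sketched for the symmetric case (a hedged illustration, not the paper's general derivation: two equal-scale gammas give a Beta(ν/2, ν/2) variable, which the standard beta-to-Student-t mapping turns into a Student-t variable; the Student-t family coincides with the symmetric q-Gaussians under the commonly quoted correspondence q = (ν+3)/(ν+1)):

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 10.0                        # degrees of freedom; here q = (nu + 3)/(nu + 1)
n = 200_000

# Two statistically independent gamma variables with the same scale parameter
g1 = rng.gamma(nu / 2, 1.0, n)
g2 = rng.gamma(nu / 2, 1.0, n)
b = g1 / (g1 + g2)               # Beta(nu/2, nu/2) stochastic variable

# Map the beta variable to a Student-t (symmetric q-Gaussian) variable
t = np.sqrt(nu) * (2 * b - 1) / (2 * np.sqrt(b * (1 - b)))

# Student-t moments: mean 0, variance nu/(nu - 2)
print(t.mean(), t.var())         # ≈ 0 and 1.25 for nu = 10
```

    Unequal shape indices for the two gammas would break the symmetry, which is the mechanism behind the extended asymmetric densities the abstract mentions.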

  5. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  6. Evaluating physical habitat and water chemistry data from statewide stream monitoring programs to establish least-impacted conditions in Washington State

    USGS Publications Warehouse

    Wilmoth, Siri K.; Irvine, Kathryn M.; Larson, Chad

    2015-01-01

    Various GIS-generated land-use predictor variables, physical habitat metrics, and water chemistry variables from 75 reference streams and 351 randomly sampled sites throughout Washington State were evaluated for effectiveness at discriminating reference from random sites within level III ecoregions. A combination of multivariate clustering and ordination techniques was used. We describe average observed conditions for a subset of predictor variables and propose statistical criteria for establishing reference conditions for stream habitat in Washington. Using these criteria, we determined whether any of the random sites met expectations for reference condition and whether any of the established reference sites failed to meet those expectations. Establishing these criteria sets a benchmark against which future data can be compared.

  7. Non-manipulation quantitative designs.

    PubMed

    Rumrill, Phillip D

    2004-01-01

    The article describes non-manipulation quantitative designs of two types, correlational and causal comparative studies. Both of these designs are characterized by the absence of random assignment of research participants to conditions or groups and non-manipulation of the independent variable. Without random selection or manipulation of the independent variable, no attempt is made to draw causal inferences regarding relationships between independent and dependent variables. Nonetheless, non-manipulation studies play an important role in rehabilitation research, as described in this article. Examples from the contemporary rehabilitation literature are included. Copyright 2004 IOS Press

  8. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial.

    PubMed

    Fletcher, H M; Dawkins, J; Rattray, C; Wharfe, G; Reid, M; Gordon-Strachan, G

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students of 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo.

  9. Morinda citrifolia (Noni) as an Anti-Inflammatory Treatment in Women with Primary Dysmenorrhoea: A Randomised Double-Blind Placebo-Controlled Trial

    PubMed Central

    Fletcher, H. M.; Dawkins, J.; Rattray, C.; Wharfe, G.; Reid, M.; Gordon-Strachan, G.

    2013-01-01

    Introduction. Noni (Morinda citrifolia) has been used for many years as an anti-inflammatory agent. We tested the efficacy of Noni in women with dysmenorrhea. Method. We did a prospective randomized double-blind placebo-controlled trial in 100 university students of 18 years and older over three menstrual cycles. Patients were invited to participate and randomly assigned to receive 400 mg Noni capsules or placebo. They were assessed for baseline demographic variables such as age, parity, and BMI. They were also assessed before and after treatment, for pain, menstrual blood loss, and laboratory variables: ESR, hemoglobin, and packed cell volume. Results. Of the 1027 women screened, 100 eligible women were randomized. Of the women completing the study, 42 women were randomized to Noni and 38 to placebo. There were no significant differences in any of the variables at randomization. There were also no significant differences in mean bleeding score or pain score at randomization. Both bleeding and pain scores gradually improved in both groups as the women were observed over three menstrual cycles; however, the improvement was not significantly different in the Noni group when compared to the controls. Conclusion. Noni did not show a reduction in menstrual pain or bleeding when compared to placebo. PMID:23431314

  10. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroid determined by a random method and by a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroid determination yields an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree yields an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with initial KD-Tree centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
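
    As an illustrative aside (not code from the record), the effect of centroid seeding on the K-Means sum of squared errors (SSE) can be reproduced with a small NumPy sketch; the data, the informed seeds standing in for KD-Tree bucketing, and all parameters below are invented for demonstration:

```python
import numpy as np

def kmeans_sse(X, centroids, n_iter=50):
    """Plain Lloyd's algorithm; returns the final sum of squared errors (SSE)."""
    centroids = centroids.astype(float).copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for k in range(len(centroids)):
            pts = X[labels == k]
            if len(pts):
                centroids[k] = pts.mean(axis=0)
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

rng = np.random.default_rng(0)
# Three synthetic "student" clusters in a two-variable feature space.
X = np.vstack([rng.normal(c, 1.0, (100, 2)) for c in (0.0, 8.0, 16.0)])

# Random seeding: three points drawn blindly, which may all land in one cluster.
sse_random = kmeans_sse(X, X[rng.choice(len(X), 3, replace=False)])

# Informed seeding (a simple stand-in for KD-Tree bucketing): spread-out starts.
sse_informed = kmeans_sse(X, np.array([[0.0, 0.0], [8.0, 8.0], [16.0, 16.0]]))
print(round(sse_informed, 1), round(sse_random, 1))
```

    A poor random draw can leave Lloyd's algorithm in a local optimum, so the informed SSE is never worse, mirroring the record's random-versus-KD-Tree comparison.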

  11. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  12. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
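
    The record's claim can be probed numerically; in the hedged sketch below the summand count and lognormal parameters are arbitrary choices, not from the paper. A sum of broadly distributed positive random variables stays heavily right-skewed while its logarithm is nearly symmetric, which is the log-normal signature:

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(x):
    """Sample skewness of a 1-D array."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

# Sums of a modest number of broadly distributed positive summands.
n_summands, n_trials = 10, 50_000
sums = rng.lognormal(mean=0.0, sigma=2.0, size=(n_trials, n_summands)).sum(axis=1)

# The sum itself is strongly right-skewed (far from Gaussian), while its
# logarithm is nearly symmetric: the log-normal signature.
print(round(skewness(sums), 2), round(skewness(np.log(sums)), 2))
```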

  13. Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena

    2016-07-01

    To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying the gradient waveforms of the different interleaves by a few time steps. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.
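
    A schematic of the randomization idea (not the authors' implementation; the Archimedean spiral, sample counts, and normalization below are simplifying assumptions) is easy to write down: one spiral per slice encoding, each rotated by its own random angle:

```python
import numpy as np

rng = np.random.default_rng(3)

def spiral(n_samples=512, n_turns=8):
    """A simple Archimedean spiral in normalized k-space (|k| <= 0.5)."""
    t = np.linspace(0.0, 1.0, n_samples)
    r = 0.5 * t                          # radius grows linearly along the trajectory
    phi = 2.0 * np.pi * n_turns * t
    return r * np.exp(1j * phi)          # complex kx + i*ky samples

# One spiral per slice encoding, each rotated by its own random angle, which
# is the randomization that breaks aliasing coherence across slices.
n_slices = 32
angles = rng.uniform(0.0, 2.0 * np.pi, n_slices)
stack = np.array([spiral() * np.exp(1j * a) for a in angles])

print(stack.shape, float(np.abs(stack).max()))
```

    The rotation changes only the phase of each sample, so every slice keeps the same radial (density) profile while the angular positions decorrelate.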

  14. Beckham as physicist?

    NASA Astrophysics Data System (ADS)

    Ireson, Gren

    2001-01-01

    It is hard to think of a medium that does not use football or soccer as a means of promotion. It is also hard to think of a student who has not heard of David Beckham. If football captures the interest of students it can be used to teach physics; in this case a Beckham free-kick can be used to introduce concepts such as drag, the Bernoulli principle, Reynolds number and the Magnus effect, by asking the simple question: How does he curve the ball so much? Much basic mechanics can also be introduced along the way.
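
    The question invites a back-of-envelope calculation. The sketch below uses assumed, illustrative values (ball speed, drag and lift coefficients) to estimate the Reynolds number and the drag and Magnus forces on a struck football:

```python
import math

# Assumed, illustrative values for a struck football (not from the article):
rho = 1.2        # air density, kg/m^3
mu = 1.8e-5      # dynamic viscosity of air, Pa*s
v = 30.0         # ball speed, m/s (a hard free-kick)
d = 0.22         # ball diameter, m

reynolds = rho * v * d / mu              # ~4.4e5: well into the turbulent regime
area = math.pi * (d / 2) ** 2            # cross-sectional area, m^2

cd = 0.25                                # assumed post-critical drag coefficient
drag = 0.5 * rho * v ** 2 * cd * area    # drag force, N

cl = 0.3                                 # assumed lift coefficient for a spinning ball
magnus = 0.5 * rho * v ** 2 * cl * area  # Magnus (sideways) force, N

print(round(reynolds), round(drag, 1), round(magnus, 1))
```

    A sideways force of a few newtons acting on a 0.43 kg ball for under a second is enough to bend its path by well over a metre, which is the observed effect.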

  15. Ergodic properties of the multidimensional Rayleigh gas with a semipermeable barrier

    NASA Astrophysics Data System (ADS)

    Erdős, L.; Tuyen, D. Q.

    1990-06-01

    We consider a multidimensional system consisting of a particle of mass M and radius r (molecule), surrounded by an infinite ideal gas of point particles of mass m (atoms). The molecule is confined to the unit ball and interacts with its boundary (barrier) via elastic collision, while the atoms are not affected by the boundary. We obtain convergence to equilibrium for the molecule from almost every initial distribution on its position and velocity. Furthermore, we prove that the infinite composite system of the molecule and the atoms is Bernoulli.

  16. Nonlinear oscillations of inviscid free drops

    NASA Technical Reports Server (NTRS)

    Patzek, T. W.; Benner, R. E., Jr.; Basaran, O. A.; Scriven, L. E.

    1991-01-01

    The present analysis of free liquid drops' inviscid oscillations proceeds through solution of Bernoulli's equation to obtain the free surface shape and of Laplace's equation for the velocity potential field. Results thus obtained encompass drop-shape sequences, pressure distributions, particle paths, and the temporal evolution of kinetic and surface energies; accuracy is verified by the near-constant drop volume and total energy, as well as the diminutiveness of mass and momentum fluxes across drop surfaces. Further insight into the nature of oscillations is provided by Fourier power spectrum analyses of mode interactions and frequency shifts.

  17. Experimental Verification and Revision of the Venting Rate Model of the Hazard Assessment Computer System and the Vulnerability Model.

    DTIC Science & Technology

    1980-11-01

    discharge of a nonvolatile liquid can be obtained by standard Bernoulli-type relations; it is: WLo = CD ALo ρL (2[(PT - P)/ρL + ... - ZLh])^1/2 ... In all ... cargo outflow momentum is low (i.e., when the net positive pressure difference across the puncture is near zero). The tests showed that the water ... "Benedict-Webb-Rubin Equation of State for Methane at Cryogenic Conditions," Advances in Cryogenic Engineering, 14, pp. 49-54, Plenum Press, 1969

  18. A Method for Predicting Three-Degree-of-Freedom Store Separation Trajectories at Speeds up to the Critical Speed

    DTIC Science & Technology

    1971-07-01

    the store length. If the potential is constructed on this basis and the body pressure coefficients determined from the unsteady Bernoulli equation ... term has a clear momentum interpretation. The second term is a buoyant force, as will now be shown. For irrotational plane flow, we have ... (1-17) ... EQUATIONS FOR VORTEX STRENGTHS. In writing the equations for the vortex strengths, we start first with equation (11-5) for the

  19. Design Manual for Microgravity Two-Phase Flow and Heat Transfer

    DTIC Science & Technology

    1989-10-01

    simultaneous solution of two equations. One equation is a dimensionless two-phase momentum equation for a separated flow and the other is a dimensionless ... created by the flow of the gas over a wave (the Bernoulli effect) is sufficient to lift the waves in a stratified flow to the top of the pipe. A ... momentum equation to determine a dimensionless parameter related to the liquid flow rate: ... (1-16) ...

  20. Structure of the oligomers obtained by enzymatic hydrolysis of the glucomannan produced by the plant Amorphophallus konjac.

    PubMed

    Cescutti, Paola; Campa, Cristiana; Delben, Franco; Rizzo, Roberto

    2002-11-29

    Dimers and trimers obtained by enzymatic hydrolysis of the glucomannan produced by the plant Amorphophallus konjac were analysed in order to obtain information on the saccharidic sequences present in the polymer. The polysaccharide was digested with cellulase and beta-mannanase and the oligomers produced were isolated by means of size-exclusion chromatography. They were structurally characterised using electrospray mass spectrometry, capillary electrophoresis, and NMR. The investigation revealed that many possible sequences were present in the polymer backbone suggesting a Bernoulli-type chain.

  1. Melde's Experiment on a Vibrating Liquid Foam Microchannel

    NASA Astrophysics Data System (ADS)

    Cohen, Alexandre; Fraysse, Nathalie; Raufaste, Christophe

    2017-12-01

    We subject a single Plateau border channel to a transverse harmonic excitation, in an experiment reminiscent of the historical one by Melde on vibrating strings, to study foam stability and wave properties. At low driving amplitudes, the liquid string exhibits regular oscillations. At large ones, a nonlinear regime appears and the acoustic radiation splits the channel into two zones of different cross section area, vibration amplitude, and phase difference with the neighboring soap films. The channel experiences an inertial dilatancy that is accounted for by a new Bernoulli-like relation.

  2. Melde's Experiment on a Vibrating Liquid Foam Microchannel.

    PubMed

    Cohen, Alexandre; Fraysse, Nathalie; Raufaste, Christophe

    2017-12-08

    We subject a single Plateau border channel to a transverse harmonic excitation, in an experiment reminiscent of the historical one by Melde on vibrating strings, to study foam stability and wave properties. At low driving amplitudes, the liquid string exhibits regular oscillations. At large ones, a nonlinear regime appears and the acoustic radiation splits the channel into two zones of different cross section area, vibration amplitude, and phase difference with the neighboring soap films. The channel experiences an inertial dilatancy that is accounted for by a new Bernoulli-like relation.

  3. Vehicle - Bridge interaction, comparison of two computing models

    NASA Astrophysics Data System (ADS)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to the effect of a vehicle moving along the bridge at various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants. One computing model represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped-mass model with one degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models. The results are mutually compared and quantitatively evaluated.

  4. Summer Study Program in Geophysical Fluid Dynamics - The Influence of Convection on Large-Scale Circulations - 1988

    DTIC Science & Technology

    1989-07-01

    the vector of the body force. ... In the first lecture we define the buoyancy force, develop a simplified ... force and l is a unit vector along the motion vector. Integrating Bernoulli's law over a closed loop one gets: ... also ... by integrating along the ... convection. It is convenient to write these equations as evolution equations for a state vector U(x, z, t), where x is the horizontal coordinate vector

  5. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used, and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…
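
    To make the "random items" idea concrete, here is a hedged simulation sketch (a Rasch model with normally distributed person and item parameters; all values are invented): each response is a Bernoulli draw whose success probability is a logistic function of ability minus difficulty:

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 2000, 20

theta = rng.normal(0.0, 1.0, n_persons)   # random person parameters (abilities)
b = rng.normal(0.0, 1.0, n_items)         # random item parameters (difficulties)

# Rasch model: response X_pi ~ Bernoulli(sigmoid(theta_p - b_i)).
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

# Easier items (lower b) show a higher observed proportion correct.
item_means = responses.mean(axis=0)
print(round(float(responses.mean()), 3))
```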

  6. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
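
    The kind of test alluded to can be sketched with a simple moment-based normality screen; the degree range and the per-degree normalization below are illustrative assumptions, not the authors' procedure. The paper's point is that passing such a screen does not imply a physically random source:

```python
import numpy as np

rng = np.random.default_rng(4)

def moment_screen(x):
    """Moment-based normality screen: sample skewness and excess kurtosis."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean()), float((z ** 4).mean() - 3.0)

# Schematic stand-in for spherical harmonic coefficients of a scalar field:
# degree l contributes 2l + 1 coefficients; after per-degree variance
# normalization they are screened as one pooled sample.
degrees = np.arange(1, 13)
coeffs = np.concatenate([rng.normal(0.0, 1.0, 2 * l + 1) for l in degrees])

skew, excess_kurtosis = moment_screen(coeffs)
print(round(skew, 3), round(excess_kurtosis, 3))
```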

  7. Are glucose levels, glucose variability and autonomic control influenced by inspiratory muscle exercise in patients with type 2 diabetes? Study protocol for a randomized controlled trial.

    PubMed

    Schein, Aso; Correa, Aps; Casali, Karina Rabello; Schaan, Beatriz D

    2016-01-20

    Physical exercise reduces glucose levels and glucose variability in patients with type 2 diabetes. Acute inspiratory muscle exercise has been shown to reduce these parameters in a small group of patients with type 2 diabetes, but these results have yet to be confirmed in a well-designed study. The aim of this study is to investigate the effect of acute inspiratory muscle exercise on glucose levels, glucose variability, and cardiovascular autonomic function in patients with type 2 diabetes. This study will use a randomized clinical trial crossover design. A total of 14 subjects will be recruited and randomly allocated to two groups to perform acute inspiratory muscle loading at 2% of maximal inspiratory pressure (PImax, placebo load) or 60% of PImax (experimental load). Inspiratory muscle training could be a novel exercise modality to be used to decrease glucose levels and glucose variability. ClinicalTrials.gov NCT02292810.

  8. An Integrated Probabilistic-Fuzzy Assessment of Uncertainty Associated with Human Health Risk to MSW Landfill Leachate Contamination

    NASA Astrophysics Data System (ADS)

    Mishra, H.; Karmakar, S.; Kumar, R.

    2016-12-01

    Risk assessment does not remain simple when it involves multiple uncertain variables. Uncertainties in risk assessment result mainly from (1) lack of knowledge of the input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for simultaneous treatment of the random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the Turbhe landfill (Navi Mumbai, India) activities for various time horizons. The LandSim-simulated concentrations of six heavy metals in ground water have then been used in the health risk model. Water intake, exposure duration, exposure frequency, bioavailability, and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables respectively, and the uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the summation of hazard quotients, of the heavy metals Co, Cu, Mn, Ni, Zn, and Fe for the male and female populations has been quantified and found to be high (HI > 1) for all the considered time horizons, which clearly shows the possibility of adverse health effects on the population residing near the Turbhe landfill.
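
    A stripped-down sketch of the probabilistic-fuzzy idea (all distributions, membership functions, and dose parameters below are invented placeholders, not the paper's values): probabilistic inputs are sampled by Monte Carlo while a fuzzy input enters as an alpha-cut interval, giving an interval-valued hazard quotient per draw:

```python
import numpy as np

rng = np.random.default_rng(5)
n_mc = 10_000

# Probabilistic inputs (invented, illustrative distributions):
conc = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=n_mc)   # metal conc., mg/L
bw = rng.normal(60.0, 8.0, n_mc).clip(30.0, None)               # body weight, kg

# A fuzzy input at a chosen alpha-cut, represented as a [low, high] interval
# from a triangular membership function (placeholder numbers):
def alpha_cut(lo, mode, hi, a):
    return lo + a * (mode - lo), hi - a * (hi - mode)

alpha = 0.8
ir_lo, ir_hi = alpha_cut(1.0, 2.0, 3.0, alpha)   # water intake, L/day
rfd = 0.01                                       # reference dose, mg/kg/day (assumed)

# Interval-valued hazard quotient per Monte Carlo draw:
hq_lo = conc * ir_lo / (bw * rfd)
hq_hi = conc * ir_hi / (bw * rfd)

# Probability that even the upper bound of the interval stays below HQ = 1:
print(round(float((hq_hi < 1.0).mean()), 4))
```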

  9. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving the temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture the temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal orders (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using optimal removal were not statistically different from those using random removal when averaged over the entire facility. No statistical difference was observed between optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  10. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
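
    The first strategy can be sketched in a few lines; the sketch below is a crude stand-in (stochastic mean imputation and a t-statistic threshold instead of a proper multiple-imputation engine and a modern selector), intended only to show the select-then-vote pattern:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 300, 6

# True model: the outcome depends on the first two predictors only.
X_full = rng.normal(size=(n, p))
y = X_full[:, 0] + X_full[:, 1] + rng.normal(scale=0.5, size=n)

# Impose values missing completely at random (MCAR) on the predictors.
X_miss = X_full.copy()
X_miss[rng.random((n, p)) < 0.1] = np.nan

def select_vars(X, y, t_cut=2.0):
    """OLS with intercept; keep predictors whose |t-statistic| exceeds t_cut."""
    Z = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    s2 = resid @ resid / (len(X) - Z.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))
    return set(np.where(np.abs(beta[1:] / se[1:]) > t_cut)[0])

# Strategy 1: select per imputed dataset, then combine by majority vote.
m = 5
votes = np.zeros(p)
col_mean = np.nanmean(X_miss, axis=0)
col_sd = np.nanstd(X_miss, axis=0)
for _ in range(m):
    X_imp = X_miss.copy()
    mask = np.isnan(X_imp)
    # Stochastic mean imputation: a crude stand-in for proper multiple imputation.
    X_imp[mask] = (col_mean + rng.normal(size=(n, p)) * col_sd)[mask]
    for j in select_vars(X_imp, y):
        votes[j] += 1
selected = set(np.where(votes > m / 2)[0])
print(sorted(int(j) for j in selected))
```

    The majority vote across imputations damps spurious selections that appear in only one or two imputed datasets.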

  11. Observational studies of patients in the emergency department: a comparison of 4 sampling methods.

    PubMed

    Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R

    2012-08-01

    We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
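
    The comparison can be mimicked on synthetic data; in the hedged sketch below the population, the hour-dependent acuity rates, and the sample size are all invented, but they show why a business-hours convenience sample drifts from the population while a true random sample does not:

```python
import numpy as np

rng = np.random.default_rng(7)
n_pop = 21_662

# Synthetic ED population: arrival hour drives one clinical variable
# (high acuity more common overnight); all rates are invented for illustration.
hour = rng.integers(0, 24, n_pop)
p_high = np.where((hour >= 22) | (hour < 6), 0.35, 0.15)
high_acuity = rng.random(n_pop) < p_high
pop_rate = high_acuity.mean()

# True random sample of n = 400 visits.
idx = rng.choice(n_pop, 400, replace=False)
random_rate = high_acuity[idx].mean()

# "Business hours" convenience sample: only 08:00-17:00 arrivals are reachable.
business = np.where((hour >= 8) & (hour < 17))[0]
idx_bh = rng.choice(business, 400, replace=False)
bh_rate = high_acuity[idx_bh].mean()

print(round(float(pop_rate), 3), round(float(random_rate), 3), round(float(bh_rate), 3))
```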

  12. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription that can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to illustrate the notion of the PH matrix method in general, and not for practical use in enhancing the security of a given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  13. Sport and Recreation Are Associated With Happiness Across Countries.

    PubMed

    Balish, Shea M; Conacher, Dan; Dithurbide, Lori

    2016-12-01

    Preliminary findings suggest sport participation is positively associated with happiness. However, it is unknown whether this association is universal and how sport compares to other leisure activities in terms of an association with happiness. This study had 3 objectives: (a) to test if sport membership is associated with happiness, (b) to test if this relationship is universal, and (c) to compare sport membership to other leisure activities. Hierarchical Bernoulli modeling was used to analyze the 6th wave (2014) of the World Values Survey (n_subjects = 67,736, n_countries = 48). The critical variables included measures of membership in different leisure activities (e.g., sport membership) and self-reported happiness. Even when controlling for known covariates such as perceived health, those who report sport/recreation membership are more likely to report being happy compared with non-sport members (OR = 1.38; 95% CI [1.24, 1.53]). Being a member of a sport organization had a greater association with happiness than did being a member of other leisure activities. Follow-up analyses suggested that this association is nearly universal. This study offers initial evidence that sport membership elicits happiness across many different societies. Although the causal direction remains unclear, this study establishes a positive association between happiness and sport membership. Future research should target the mechanism(s) of this effect, which we hypothesize are meaningful social relations.
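
    The reported effect measure is an odds ratio. As a hedged illustration (membership rate and happiness probabilities are invented for demonstration), an odds ratio can be computed from a simulated 2x2 table:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50_000

# Invented, illustrative effect: members report happiness slightly more often.
member = rng.random(n) < 0.30
p_happy = np.where(member, 0.88, 0.84)
happy = rng.random(n) < p_happy

# 2x2 table and the odds ratio, the effect measure quoted in the abstract.
a = int(np.sum(member & happy))        # member, happy
b = int(np.sum(member & ~happy))       # member, unhappy
c = int(np.sum(~member & happy))       # non-member, happy
d = int(np.sum(~member & ~happy))      # non-member, unhappy
odds_ratio = (a * d) / (b * c)
print(round(odds_ratio, 2))
```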

  14. Nonlocal Reformulations of Water and Internal Waves and Asymptotic Reductions

    NASA Astrophysics Data System (ADS)

    Ablowitz, Mark J.

    2009-09-01

    Nonlocal reformulations of the classical equations of water waves and two ideal fluids separated by a free interface, bounded above by either a rigid lid or a free surface, are obtained. The kinematic equations may be written in terms of integral equations with a free parameter. By expressing the pressure, or Bernoulli, equation in terms of the surface/interface variables, a closed system is obtained. An advantage of this formulation, referred to as the nonlocal spectral (NSP) formulation, is that the vertical component is eliminated, thus reducing the dimensionality and fixing the domain in which the equations are posed. The NSP equations and the Dirichlet-Neumann operators associated with the water wave or two-fluid equations can be related to each other and the Dirichlet-Neumann series can be obtained from the NSP equations. Important asymptotic reductions obtained from the two-fluid nonlocal system include the generalizations of the Benney-Luke and Kadomtsev-Petviashvili (KP) equations, referred to as intermediate-long wave (ILW) generalizations. These 2+1 dimensional equations possess lump type solutions. In the water wave problem high-order asymptotic series are obtained for two and three dimensional gravity-capillary solitary waves. In two dimensions, the first term in the asymptotic series is the well-known hyperbolic secant squared solution of the KdV equation; in three dimensions, the first term is the rational lump solution of the KP equation.

  15. Can a Wind Model Mimic a Convection-Dominated Accretion Flow Model?

    NASA Astrophysics Data System (ADS)

    Chang, Heon-Young

    2001-06-01

    In this paper we investigate the properties of advection-dominated accretion flows (ADAFs) in the case that outflows carry away infalling matter together with its angular momentum and energy. Positive Bernoulli numbers in ADAFs allow a fraction of the gas to be expelled in the form of outflows. The ADAFs are also unstable to convection. We present self-similar solutions for advection-dominated accretion flows in the presence of outflows from the accretion flows (ADIOS). The axisymmetric flow is treated in variables integrated over polar sections, and the effects of outflows on the accretion flow are parameterized for possible configurations compatible with the one-dimensional self-similar ADAF solution. We explicitly derive self-similar solutions of ADAFs in the presence of outflows and show that strong outflows in the accretion flows result in a flatter density profile, which is similar to that of the convection-dominated accretion flows (CDAFs), in which convection transports the angular momentum inward and the energy outward. These two different versions of the ADAF model should show similar behaviors in the X-ray spectrum to some extent. Even though the two models may show similar behaviors, they should be distinguishable due to different physical properties. We suggest that, for a central object of known mass, these two different accretion flows should have different X-ray flux values due to deficient matter in the wind model.

  16. Structure and properties of ZnSxSe1-x thin films deposited by thermal evaporation of ZnS and ZnSe powder mixtures

    NASA Astrophysics Data System (ADS)

    Valeev, R. G.; Romanov, E. A.; Vorobiev, V. L.; Mukhgalin, V. V.; Kriventsov, V. V.; Chukavin, A. I.; Robouch, B. V.

    2015-02-01

    Interest in ZnSxSe1-x alloys is due to their band-gap tunability by varying the S and Se content. Films of ZnSxSe1-x were grown by evaporating ZnS and ZnSe powder mixtures onto SiO2, NaCl, Si and ITO substrates using an original low-cost method. X-ray diffraction patterns and Raman spectroscopy show that the lattice structure of these films is cubic ZnSe-like, as S atoms replace Se, and that the film compositions retain their initial S/Se ratio. Optical absorption spectra show that the band gap increases from 2.25 to 3 eV as x increases, in agreement with the literature. Because the atomic radius of S is smaller than that of Se, EXAFS spectra confirm that bond distances and Se coordination numbers decrease as the Se content decreases. The strong deviation from linearity of the ZnSe coordination numbers in ZnSxSe1-x indicates that within this ordered crystal structure strong site-occupation preferences occur in the distribution of Se and S ions. This behavior is quantitatively confirmed by the strong deviation from the random Bernoulli distribution of the three site-occupation preference coefficients of the strained tetrahedron model. Actually, the ternary ZnSxSe1-x system is a bi-binary (ZnS+ZnSe) alloy with evanescent formation of ternary configurations throughout the x-range.
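
    The random (Bernoulli) reference against which site-occupation preferences are measured is the binomial distribution of anion neighbours; a minimal sketch, with x = 0.4 chosen arbitrarily:

```python
from math import comb

# Under purely random (Bernoulli) anion placement in ZnS_xSe_(1-x), the number
# of Se atoms among the 4 anion neighbours of a Zn site is binomial(4, 1 - x);
# deviations from these weights are what signal site-occupation preferences.
def random_occupation(x):
    p_se = 1.0 - x
    return [comb(4, k) * p_se ** k * (1.0 - p_se) ** (4 - k) for k in range(5)]

probs = random_occupation(0.4)                 # x = 0.4, i.e. 60% Se (arbitrary)
mean_se_neighbours = sum(k * p for k, p in enumerate(probs))
print([round(p, 3) for p in probs], round(mean_se_neighbours, 3))
```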

  17. Variable versus conventional lung protective mechanical ventilation during open abdominal surgery: study protocol for a randomized controlled trial.

    PubMed

    Spieth, Peter M; Güldner, Andreas; Uhlig, Christopher; Bluth, Thomas; Kiss, Thomas; Schultz, Marcus J; Pelosi, Paolo; Koch, Thea; Gama de Abreu, Marcelo

    2014-05-02

    General anesthesia usually requires mechanical ventilation, which is traditionally accomplished with constant tidal volumes in volume- or pressure-controlled modes. Experimental studies suggest that the use of variable tidal volumes (variable ventilation) recruits lung tissue, improves pulmonary function and reduces the systemic inflammatory response. However, it is currently not known whether patients undergoing open abdominal surgery might benefit from intraoperative variable ventilation. The PROtective VARiable ventilation trial ('PROVAR') is a single-center, randomized controlled trial enrolling 50 patients planned for open abdominal surgery expected to last longer than 3 hours. PROVAR compares conventional (non-variable) lung protective ventilation (CV) with variable lung protective ventilation (VV) regarding pulmonary function and inflammatory response. The primary endpoint of the study is the forced vital capacity on the first postoperative day. Secondary endpoints include further lung function tests, plasma cytokine levels, the spatial distribution of ventilation assessed by means of electrical impedance tomography, and postoperative pulmonary complications. We hypothesize that VV improves lung function and reduces the systemic inflammatory response compared to CV in patients receiving mechanical ventilation during general anesthesia for open abdominal surgery lasting longer than 3 hours. PROVAR is the first randomized controlled trial addressing the intra- and postoperative effects of VV on lung function. This study may help to define the role of VV during general anesthesia requiring mechanical ventilation. Clinicaltrials.gov NCT01683578 (registered on September 3, 2012).

  18. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
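    The inverse-transform idea behind a generator like RANVAR can be sketched compactly. The following Python sketch (not the original BASIC listing; the function names and the choice of three of the seven distributions are ours) draws uniform, exponential, and triangular variates from a single U(0,1) source:

    ```python
    import math
    import random

    def uniform_var(a, b, u=None):
        """Inverse transform: X = a + (b - a) * U."""
        u = random.random() if u is None else u
        return a + (b - a) * u

    def exponential_var(mean, u=None):
        """Inverse transform: X = -mean * ln(U)."""
        u = random.random() if u is None else u
        return -mean * math.log(u)

    def triangular_var(low, mode, high, u=None):
        """Inverse transform for the triangular distribution: invert the
        piecewise-quadratic CDF on either side of the mode."""
        u = random.random() if u is None else u
        c = (mode - low) / (high - low)
        if u < c:
            return low + math.sqrt(u * (high - low) * (mode - low))
        return high - math.sqrt((1 - u) * (high - low) * (high - mode))

    random.seed(1)
    samples = [exponential_var(2.0) for _ in range(10_000)]
    print(sum(samples) / len(samples))  # sample mean, close to 2.0
    ```

    The same pattern extends to the normal, binomial, Poisson, and Pascal variates listed in the record, with the appropriate inverse CDF or a counting construction.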

  19. A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems.

    PubMed

    Thoemmes, Felix; Rose, Norman

    2014-01-01

    The treatment of missing data in the social sciences has changed tremendously during the last decade. Modern missing data techniques such as multiple imputation and full-information maximum likelihood are used much more frequently. These methods assume that data are missing at random. One very common approach to increase the likelihood that missing at random is achieved consists of including many covariates as so-called auxiliary variables. These variables are included either based on data considerations or in an inclusive fashion, that is, taking all available auxiliary variables. In this article, we point out that there are some instances in which auxiliary variables exhibit the surprising property of increasing bias in missing data problems. In a series of focused simulation studies, we highlight some situations in which this type of biasing behavior can occur. We briefly discuss possible ways to avoid selecting bias-inducing covariates as auxiliary variables.

  20. Detecting Random, Partially Random, and Nonrandom Minnesota Multiphasic Personality Inventory-2 Protocols

    ERIC Educational Resources Information Center

    Pinsoneault, Terry B.

    2007-01-01

    The ability of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher et al., 2001) validity scales to detect random, partially random, and nonrandom MMPI-2 protocols was investigated. Investigations included the Variable Response Inconsistency scale (VRIN), F, several potentially useful new F and VRIN subscales, and F-sub(b) - F…

  1. Uncertainty in Random Forests: What does it mean in a spatial context?

    NASA Astrophysics Data System (ADS)

    Klump, Jens; Fouedjio, Francky

    2017-04-01

    Geochemical surveys are an important part of exploration for mineral resources and in environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different to the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. 
In conclusion, our preliminary results show that the model driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation Random Forest, as a nonparametric method, may give the better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.

  2. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln U. Random variables from the conditional Weibull distribution are likewise generated by the inverse transform method, using the conditional survival function exp{-[(x+s-γ)/η]^β + [(x-γ)/η]^β}. Normal random variables are generated using a standard normal transformation and the inverse transform method. (An appendix lists the distributions supported by the model.)
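    The formulas in this record describe inverse-transform sampling from lifetime distributions conditioned on survival to age x. A hedged Python sketch of that scheme, with the Weibull parameter names (shape beta, scale eta, location gamma) assumed rather than taken from the report:

    ```python
    import math
    import random

    def conditional_weibull(x, beta, eta, gamma=0.0, u=None):
        """Additional life s beyond age x, given survival to x, for a
        Weibull(shape=beta, scale=eta, location=gamma) lifetime.
        Inverse transform: solve
        exp(-[(x+s-gamma)/eta]**beta + [(x-gamma)/eta]**beta) = U for s."""
        u = random.random() if u is None else u
        a = ((x - gamma) / eta) ** beta
        return gamma - x + eta * (a - math.log(u)) ** (1.0 / beta)

    def conditional_exponential(x, mean, u=None):
        """Shape beta = 1 is the memoryless special case: s = -mean * ln(U),
        independent of the age x already survived."""
        u = random.random() if u is None else u
        return -mean * math.log(u)
    ```

    Setting beta = 1 in the Weibull sampler reproduces the exponential one exactly, which is a useful consistency check on the algebra.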

  3. Performance of DS/SSMA (Direct-Sequence Spread-Spectrum Multiple-Access) Communications in Impulsive Channels.

    DTIC Science & Technology

    1986-11-01

    …my mother and my brother. Their support and encouragement made this research exciting and enjoyable. I am grateful to my advisor, Professor H. Vincent Poor… The variance of a random variable with the density given by (A.1) is stated in (A.2).

  4. Impinging laminar jets at moderate Reynolds numbers and separation distances.

    PubMed

    Bergthorson, Jeffrey M; Sone, Kazuo; Mattner, Trent W; Dimotakis, Paul E; Goodwin, David G; Meiron, Dan I

    2005-12-01

    An experimental and numerical study of impinging, incompressible, axisymmetric, laminar jets is described, where the jet axis of symmetry is aligned normal to the wall. Particle streak velocimetry (PSV) is used to measure axial velocities along the centerline of the flow field. The jet-nozzle pressure drop is measured simultaneously and determines the Bernoulli velocity. The flow field is simulated numerically by an axisymmetric Navier-Stokes spectral-element code, an axisymmetric potential-flow model, and an axisymmetric one-dimensional stream-function approximation. The axisymmetric viscous and potential-flow simulations include the nozzle in the solution domain, allowing nozzle-wall proximity effects to be investigated. Scaling the centerline axial velocity by the Bernoulli velocity collapses the experimental velocity profiles onto a single curve that is independent of the nozzle-to-plate separation distance. Axisymmetric direct numerical simulations yield good agreement with experiment and confirm the velocity profile scaling. Potential-flow simulations reproduce the collapse of the data; however, viscous effects result in disagreement with experiment. Axisymmetric one-dimensional stream-function simulations can predict the flow in the stagnation region if the boundary conditions are correctly specified. The scaled axial velocity profiles are well characterized by an error function with one Reynolds-number-dependent parameter. Rescaling the wall-normal distance by the boundary-layer displacement-thickness-corrected diameter yields a collapse of the data onto a single curve that is independent of the Reynolds number. These scalings allow the specification of an analytical expression for the velocity profile of an impinging laminar jet over the Reynolds number range investigated.

  5. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    PubMed

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.
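    The Beta-Bernoulli construction used to model dictionary-atom usage can be illustrated with a standard finite approximation: each atom k receives a usage probability drawn from a Beta prior, and each sample draws an independent Bernoulli indicator per atom. A minimal NumPy sketch (the parameterization is a common textbook one, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def beta_bernoulli_indicators(n_samples, K, a=2.0, b=2.0):
        """Finite (K-atom) approximation to a Beta-Bernoulli process:
        pi_k ~ Beta(a/K, b*(K-1)/K) is the usage probability of atom k,
        and Z[i, k] ~ Bernoulli(pi_k) indicates whether sample i uses atom k."""
        pi = rng.beta(a / K, b * (K - 1) / K, size=K)
        Z = (rng.random((n_samples, K)) < pi).astype(int)
        return pi, Z

    pi, Z = beta_bernoulli_indicators(n_samples=100, K=50)
    print(Z.shape)  # (100, 50)
    ```

    Because most pi_k are tiny under this prior, each row of Z is sparse, which is what makes the construction useful for inferring an effective dictionary size.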

  6. Damping of structural vibrations in beams and elliptical plates using the acoustic black hole effect

    NASA Astrophysics Data System (ADS)

    Georgiev, V. B.; Cuenca, J.; Gautier, F.; Simon, L.; Krylov, V. V.

    2011-05-01

    Flexural waves in beams and plates slow down as their thickness decreases. This property was used in the past to establish the theory of acoustic black holes (ABH). The aim of the present paper is to establish reliable numerical and experimental approaches for designing, modelling and manufacturing an effective passive vibration damper using the ABH effect. The effectiveness of such vibration absorbers increases with frequency. Initially, the dynamic behaviour of an Euler-Bernoulli beam is expressed using the Impedance Method, which leads to a Riccati equation for the beam impedance. This equation is numerically integrated using an adaptive Runge-Kutta-Fehlberg method, yielding the frequency- and spatially-dependent impedance matrix of the beam, from which the reflection matrix is obtained. Moreover, the mathematical model can be extended to incorporate an absorbing film that assists in reducing waves reflected from the truncated edge. The influence of the geometrical and material characteristics of the absorbing film is then studied and an optimal configuration of these parameters is proposed. An experiment consisting of an elliptical plate with a pit of power-law profile placed at one of its foci is presented. The elliptical shape of the plate induces complete focalisation towards the ABH of waves generated at the other focus. Consequently, the derived 1-D method for an Euler-Bernoulli beam can be used as a phenomenological model that assists in better understanding the complex processes in the 2-D elliptical structure. Finally, both numerical simulations and experimental measurements show a significant reduction of vibration levels.

  7. A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    1997-01-01

    Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and hence uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor nonconservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. The question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
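    The record's caution about the three-sigma method is easy to demonstrate numerically: for a skewed, strictly positive strength variable, the mean minus three standard deviations can even be negative, while the percentile it is meant to approximate is not. An illustrative simulation (the lognormal parameters are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Skewed "strength" variable: lognormal, a common model for material strength.
    strength = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

    mu, sd = strength.mean(), strength.std()
    three_sigma_lower = mu - 3.0 * sd            # normal-theory design allowable
    true_lower = np.quantile(strength, 0.00135)  # the percentile 3-sigma targets

    print(three_sigma_lower)  # negative: impossible for a strictly positive strength
    print(true_lower)         # small but positive (theory: exp(-3) ~ 0.05)
    ```

    A negative design allowable for a quantity that cannot be negative is exactly the kind of failure the abstract warns about when normality is assumed for a skewed variable.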

  8. On Probability Domains IV

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  9. Tests of Hypotheses Arising In the Correlated Random Coefficient Model*

    PubMed Central

    Heckman, James J.; Schmierer, Daniel

    2010-01-01

    This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. PMID:21170148

  10. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
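    The stratify-sample-permute procedure described above can be sketched without the library itself. A minimal NumPy version on the unit hypercube (this is our sketch, not the LHS UNIX Library API):

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_vars, rng=None):
        """Basic LHS on the unit hypercube: each variable's [0, 1) range is
        cut into n_samples equal-probability strata, one point is drawn
        uniformly inside each stratum, and the strata are paired across
        variables by independent random permutations."""
        rng = np.random.default_rng() if rng is None else rng
        out = np.empty((n_samples, n_vars))
        for j in range(n_vars):
            strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
            out[:, j] = rng.permutation(strata)
        return out

    # Uniform LHS samples map to other distributions via the inverse CDF,
    # e.g. exponential with mean 2: x = -2 * ln(1 - u).
    u = latin_hypercube(10, 2, np.random.default_rng(0))
    print(u.shape)  # (10, 2)
    ```

    The defining property is that, per variable, each of the n equal-probability intervals contributes exactly one sample; correlated pairing, as in the library, would replace the independent permutations with restricted ones.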

  11. Assessing differences in groups randomized by recruitment chain in a respondent-driven sample of Seattle-area injection drug users.

    PubMed

    Burt, Richard D; Thiede, Hanne

    2014-11-01

    Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu\in(0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
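    The distributional identity in this record, the composed process equal in distribution to a random sum of i.i.d. Poisson variables, lends itself to a quick Monte Carlo check (rates and horizon chosen arbitrarily):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, t, n_sims = 1.5, 2.0, 3.0, 50_000

    # Left-hand side: the outer Poisson process evaluated at the inner one.
    # Given N_beta(t) = n, N_alpha(n) is Poisson with mean alpha * n.
    inner = rng.poisson(beta * t, size=n_sims)
    lhs = rng.poisson(alpha * inner)

    # Right-hand side: a random sum of N_beta(t) i.i.d. Poisson(alpha) terms.
    inner2 = rng.poisson(beta * t, size=n_sims)
    rhs = np.array([rng.poisson(alpha, size=n).sum() for n in inner2])

    print(lhs.mean(), rhs.mean())  # both near alpha * beta * t = 9.0
    ```

    Means, variances, and indeed whole histograms of the two samples agree up to Monte Carlo error, as the identity requires.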

  13. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
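    Fisher's pitfall is reproducible in a few lines: dividing two independent variables by a common, randomly varying denominator manufactures a strong correlation where none exists. An illustrative simulation (all distributions invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    # x, y, z mutually independent, so corr(x, y) is ~0 by construction.
    x = rng.normal(10.0, 1.0, n)
    y = rng.normal(10.0, 1.0, n)
    z = rng.uniform(8.0, 12.0, n)  # shared denominator with random variation

    raw_corr = np.corrcoef(x, y)[0, 1]            # near zero
    ratio_corr = np.corrcoef(x / z, y / z)[0, 1]  # strong spurious correlation

    print(raw_corr, ratio_corr)
    ```

    The spurious correlation grows with the denominator's coefficient of variation, which is why random biological variation in the shared denominator is the hard case the abstract highlights.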

  14. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. 
In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
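    The failure mode that motivates the random-forest screen, marginal tests missing purely interacting loci, shows up in a toy simulation: under an XOR-style two-locus model, each SNP has essentially zero marginal association while the joint effect is complete. A NumPy-only sketch (it illustrates the screening problem, not the random-forest remedy):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000

    # Two biallelic "SNPs" coded 0/1, mutually independent.
    snp1 = rng.integers(0, 2, n)
    snp2 = rng.integers(0, 2, n)

    # Pure interaction model: disease iff snp1 XOR snp2 -- no marginal effect.
    disease = (snp1 ^ snp2).astype(float)

    def marginal_assoc(snp):
        """Difference in disease rate between carriers and non-carriers --
        roughly what a univariate screening test measures."""
        return disease[snp == 1].mean() - disease[snp == 0].mean()

    print(marginal_assoc(snp1))  # ~0: a univariate screen would discard snp1
    print(marginal_assoc(snp2))  # ~0: and snp2 as well
    print(disease[(snp1 ^ snp2) == 1].mean())  # 1.0: the joint effect is complete
    ```

    A tree-based importance measure can still rank snp1 and snp2 highly because splits on one locus make the other informative, which is the property the paper exploits for screening.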

  15. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part I: Fundamentals

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2016-12-01

    Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.

  16. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
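    The style of null-hypothesis test described above can be sketched as a Monte Carlo test on annual event counts, using the variance-to-mean ratio as a clustering statistic (synthetic catalogs, not the actual M ≥ 7 record):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def dispersion_pvalue(counts, n_sims=5_000):
        """Monte Carlo test of a constant-rate Poisson null hypothesis.
        Statistic: variance-to-mean ratio of annual counts; temporal
        clustering inflates it above the Poisson value of ~1."""
        obs = counts.var() / counts.mean()
        sims = rng.poisson(counts.mean(), size=(n_sims, counts.size))
        sim_stat = sims.var(axis=1) / sims.mean(axis=1)
        return float((sim_stat >= obs).mean())

    # Synthetic 112-year catalog drawn from the null itself: expect a large p.
    quiet = rng.poisson(15.0, size=112)
    # Clustered catalog: occasional high-rate years inflate the dispersion.
    rates = np.where(rng.random(112) < 0.1, 60.0, 10.0)
    clustered = rng.poisson(rates)

    p_quiet, p_clustered = dispersion_pvalue(quiet), dispersion_pvalue(clustered)
    print(p_quiet, p_clustered)
    ```

    The paper's actual analysis uses several classes of tests and accounts for localized aftershock sequences; this sketch only conveys the logic of testing against an independent, constant-rate null.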

  17. Probabilistic evaluation of fuselage-type composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables in the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequency, displacement, strain and stress are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.

  18. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology employs dynamic Poisson log-link models with constant or time-varying coefficients. Calendar effects were modeled using constant coefficients or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients both for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.

  19. Financial Management of a Large Multi-site Randomized Clinical Trial

    PubMed Central

    Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.

    2014-01-01

    Background: The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims: Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods: Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results: Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions: Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748

  20. Financial management of a large multisite randomized clinical trial.

    PubMed

    Sheffet, Alice J; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E; Longbottom, Mary E; Howard, Virginia J; Marler, John R; Brott, Thomas G

    2014-08-01

The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years' funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Projections of the original grant's fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant's fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. © 2014 The Authors. International Journal of Stroke © 2014 World Stroke Organization.
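    The percentages quoted in these two records follow directly from the dollar amounts and enrollment counts given in the abstract; a quick sanity check (using only figures stated above) confirms them:

    ```python
    # Sanity check of the CREST budget figures reported in the records above.
    # All dollar amounts and counts are taken directly from the abstract.

    fixed_per_patient = 152_992      # projected per-patient cost, fixed-cost model ($)
    variable_per_patient = 13_845    # actual per-patient cost, variable-cost model ($)

    # Ratio of actual (variable-model) to projected (fixed-model) per-patient cost.
    ratio_pct = variable_per_patient / fixed_per_patient * 100
    print(f"variable/fixed per-patient cost: {ratio_pct:.0f}%")    # 9%, as reported

    # Enrollment as fractions of the 2,500-participant target.
    target = 2_500
    first_five_years = 138
    no_cost_extension = 1_387
    print(f"first five years:  {first_five_years / target:.1%}")   # 5.5%
    print(f"no-cost extension: {no_cost_extension / target:.1%}")  # 55.5%
    ```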
